EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering
Junjue Wang, Zhuo Zheng, Zihang Chen, Ailong Ma, Yanfei Zhong

Abstract
Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects and comprehensive reasoning. Based on city planning needs, we develop a multi-modal multi-task VQA dataset (EarthVQA) to advance relational reasoning-based judging, counting, and comprehensive analysis. The EarthVQA dataset contains 6000 images, corresponding semantic masks, and 208,593 QA pairs with urban and rural governance requirements embedded. As objects are the basis for complex relational reasoning, we propose a Semantic OBject Awareness framework (SOBA) to advance VQA in an object-centric way. To preserve refined spatial locations and semantics, SOBA leverages a segmentation network for object semantics generation. The object-guided attention aggregates object interior features via pseudo masks, and bidirectional cross-attention further models object external relations hierarchically. To optimize object counting, we propose a numerical difference loss that dynamically adds difference penalties, unifying the classification and regression tasks. Experimental results show that SOBA outperforms both advanced general and remote sensing methods. We believe this dataset and framework provide a strong benchmark for Earth vision's complex analysis. The project page is at https://Junjue-Wang.github.io/homepage/EarthVQA.
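The object-guided attention mentioned in the abstract aggregates features inside each predicted object region. A minimal PyTorch sketch of this idea, assuming soft pseudo masks taken from the segmentation head's softmax output; the function name, shapes, and pooling scheme are illustrative assumptions, not the paper's exact implementation:

```python
import torch

def object_guided_pooling(features, seg_logits):
    """Aggregate object-interior features via soft pseudo masks.

    features:   (B, C, H, W) visual feature map
    seg_logits: (B, K, H, W) segmentation logits for K object classes
    returns:    (B, K, C) one pooled descriptor per object class
    """
    B, C, H, W = features.shape
    K = seg_logits.shape[1]
    # Soft pseudo masks derived from the segmentation predictions.
    masks = torch.softmax(seg_logits, dim=1).reshape(B, K, H * W)   # (B, K, HW)
    feats = features.reshape(B, C, H * W).transpose(1, 2)           # (B, HW, C)
    # Mask-weighted average pooling over spatial positions.
    pooled = torch.bmm(masks, feats)                                # (B, K, C)
    pooled = pooled / (masks.sum(dim=2, keepdim=True) + 1e-6)
    return pooled
```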
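The abstract also describes bidirectional cross-attention that models relations between the pooled object descriptors and the question. A sketch under assumed dimensions, using standard multi-head attention in both directions with residual connections; layer sizes and the residual scheme are assumptions:

```python
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Objects attend to question tokens and vice versa (illustrative)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.obj_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_to_obj = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, obj_feats, txt_feats):
        # obj_feats: (B, K, dim) object descriptors; txt_feats: (B, T, dim) question tokens.
        # Objects query the question: language-conditioned object features.
        obj_out, _ = self.obj_to_txt(obj_feats, txt_feats, txt_feats)
        # The question queries the objects: object-grounded language features.
        txt_out, _ = self.txt_to_obj(txt_feats, obj_feats, obj_feats)
        return obj_feats + obj_out, txt_feats + txt_out
```

Stacking such layers would model the "hierarchical" external relations the abstract refers to.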
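For the numerical difference loss, one plausible reading is a cross-entropy over count bins whose weight grows with the numerical gap between the predicted and true counts, so larger counting errors draw larger penalties. A hedged sketch of that interpretation (the paper's exact formulation may differ; `alpha` is an assumed scaling hyperparameter):

```python
import torch.nn.functional as F

def numerical_difference_loss(logits, target, alpha=1.0):
    """Classification loss with a dynamic counting-error penalty.

    logits: (B, K) scores over K count bins
    target: (B,)   ground-truth count bin indices
    """
    ce = F.cross_entropy(logits, target, reduction="none")  # (B,) per-sample CE
    pred = logits.argmax(dim=1)                             # predicted count bin
    diff = (pred - target).abs().float()                    # numerical gap
    # Dynamic penalty: the weight grows with the counting error,
    # blending classification with a regression-like objective.
    return ((1.0 + alpha * diff) * ce).mean()
```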
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Visual Question Answering on EarthVQA | SOBA | Overall Accuracy: 78.14 |