3D-LLM: Injecting the 3D World into Large Language Models

Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, Chuang Gan

Abstract

Large language models (LLMs) and Vision-Language Models (VLMs) have been proven to excel at multiple tasks, such as commonsense reasoning. Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on. In this work, we propose to inject the 3D world into large language models and introduce a whole new family of 3D-LLMs. Specifically, 3D-LLMs can take 3D point clouds and their features as input and perform a diverse set of 3D-related tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. Using three types of prompting mechanisms that we design, we are able to collect over 300k 3D-language data covering these tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs as our backbones to train our 3D-LLMs. By introducing a 3D localization mechanism, 3D-LLMs can better capture 3D spatial information. Experiments on ScanQA show that our model outperforms state-of-the-art baselines by a large margin (e.g., the BLEU-1 score surpasses state-of-the-art score by 9%). Furthermore, experiments on our held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative examples also show that our model could perform more tasks beyond the scope of existing LLMs and VLMs. Project Page: https://vis-www.cs.umass.edu/3dllm/
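
The key idea in the abstract is that 3D features are not learned from scratch: per-pixel features from a frozen 2D visual encoder, applied to rendered multi-view images, are lifted onto the point cloud so a 2D-VLM backbone can consume them. Below is a minimal sketch of that back-projection step, assuming the 2D features and a pixel-to-point visibility map are already available; the function name, tensor shapes, and simple per-point averaging are illustrative assumptions, not the paper's actual code (the paper describes several feature-fusion strategies).

```python
import torch

def backproject_features(
    pixel_feats: torch.Tensor,    # (V, C, H, W): 2D encoder features from V rendered views
    pixel_to_point: torch.Tensor, # (V, H, W): index of the visible 3D point per pixel, -1 if none
    n_points: int,                # number of points in the cloud
) -> torch.Tensor:
    """Average multi-view 2D features onto points, yielding (n_points, C) 3D features."""
    V, C, H, W = pixel_feats.shape
    feats = torch.zeros(n_points, C)
    counts = torch.zeros(n_points, 1)
    flat_idx = pixel_to_point.reshape(-1)                        # (V*H*W,)
    flat_feats = pixel_feats.permute(0, 2, 3, 1).reshape(-1, C)  # one feature row per pixel
    valid = flat_idx >= 0                                        # drop pixels that hit no point
    feats.index_add_(0, flat_idx[valid], flat_feats[valid])
    counts.index_add_(0, flat_idx[valid], torch.ones(int(valid.sum()), 1))
    return feats / counts.clamp(min=1)                           # mean over the views seeing each point

# Toy usage: 3 views, 256-dim encoder features, 32x32 renders, 500 points.
point_feats = backproject_features(
    torch.randn(3, 256, 32, 32),
    torch.randint(-1, 500, (3, 32, 32)),
    n_points=500,
)
print(point_feats.shape)  # torch.Size([500, 256])
```

Because the resulting per-point features live in the same space as the 2D encoder's output, a pretrained 2D VLM can be fine-tuned on them directly, which is what makes the training efficient.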

Code Repositories

- openrobotlab/pointllm (PyTorch)
- Yui010206/CREMA (PyTorch)
- umass-foundation-model/3d-llm (PyTorch, official implementation)
- qizekun/ShapeLLM (PyTorch)
- Pointcept/GPT4Point (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
3d-object-captioning-on-objaverse-1 | 3D-LLM | Sentence-BERT: 44.48; Correctness: 1.77; GPT-4: 33.42; Hallucination: 1.16; Precision: 60.39; SimCSE: 43.68
3d-question-answering-3d-qa-on-scanqa-test-w | 3D-LLM (flamingo) | BLEU-1: 32.6; BLEU-4: 8.4; CIDEr: 65.6; Exact Match: 23.2; METEOR: 13.5; ROUGE: 34.8
3d-question-answering-3d-qa-on-scanqa-test-w | 3D-LLM (BLIP2-flant5) | BLEU-1: 38.3; BLEU-4: 11.6; CIDEr: 69.6; Exact Match: 19.1; METEOR: 14.9; ROUGE: 35.3
3d-question-answering-3d-qa-on-scanqa-test-w | 3D-LLM (BLIP2-opt) | BLEU-1: 37.3; BLEU-4: 10.7; CIDEr: 67.1; Exact Match: 19.1; METEOR: 14.3; ROUGE: 34.5
generative-3d-object-classification-on-1 | 3D-LLM | Objaverse (Average): 45.25; Objaverse (C): 41.50; Objaverse (I): 49.00
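
The ScanQA rows above report standard text-generation metrics. For reference, here is a hedged sketch of how BLEU-1, BLEU-4, and exact match can be computed with NLTK; the official ScanQA evaluation may differ in tokenization, smoothing, and answer normalization, so treat this as illustrative only.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def scanqa_text_metrics(prediction: str, references: list[str]) -> dict:
    """Illustrative BLEU-1/BLEU-4 and exact-match scoring for one QA example."""
    pred_toks = prediction.lower().split()
    ref_toks = [r.lower().split() for r in references]
    smooth = SmoothingFunction().method1  # avoid zero scores on short answers
    return {
        "BLEU-1": sentence_bleu(ref_toks, pred_toks,
                                weights=(1, 0, 0, 0), smoothing_function=smooth),
        "BLEU-4": sentence_bleu(ref_toks, pred_toks,
                                weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth),
        "Exact Match": float(prediction.lower() in (r.lower() for r in references)),
    }

print(scanqa_text_metrics("a brown wooden chair", ["a brown wooden chair", "the chair"]))
```

Corpus-level numbers like those in the table are then obtained by averaging these per-example scores over the test split.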
