Instruction-driven History-aware Policies for Robotic Manipulations

Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia, Makarand Tapaswi, Ivan Laptev, Cordelia Schmid

Abstract
In human environments, robots are expected to accomplish a variety of manipulation tasks given simple natural language instructions. Yet, robotic manipulation is extremely challenging as it requires fine-grained motor control, long-term memory as well as generalization to previously unseen tasks and environments. To address these challenges, we propose a unified transformer-based approach that takes into account multiple inputs. In particular, our transformer architecture integrates (i) natural language instructions and (ii) multi-view scene observations while (iii) keeping track of the full history of observations and actions. Such an approach enables learning dependencies between history and instructions and improves manipulation precision using multiple views. We evaluate our method on the challenging RLBench benchmark and on a real-world robot. Notably, our approach scales to 74 diverse RLBench tasks and outperforms the state of the art. We also address instruction-conditioned tasks and demonstrate excellent generalization to previously unseen variations.
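To make the architecture described above concrete, here is a minimal, hypothetical sketch of a history-aware, instruction-conditioned transformer policy. It is not the authors' implementation: the module names, feature dimensions, and the action parameterization (an 8-dimensional vector, e.g. 3D position, quaternion, gripper state) are all assumptions introduced for illustration. The sketch only shows the core idea of fusing (i) instruction tokens, (ii) multi-view observation features, and (iii) the history of past actions in a single transformer sequence.

```python
# Illustrative sketch only; all names, sizes, and the action format are assumptions.
import torch
import torch.nn as nn


class HistoryAwarePolicy(nn.Module):
    def __init__(self, d_model=256, n_views=3, n_layers=4, n_heads=8,
                 vocab_size=1000, action_dim=8, max_steps=64):
        super().__init__()
        # (i) instruction token ids -> embeddings
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # (ii) per-view visual features (e.g. pooled CNN features) -> d_model
        self.visual_proj = nn.Linear(512, d_model)
        self.view_emb = nn.Embedding(n_views, d_model)
        # (iii) past actions -> d_model, plus a learned timestep embedding
        self.action_proj = nn.Linear(action_dim, d_model)
        self.step_emb = nn.Embedding(max_steps, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # regress the next action from the fused representation
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, instr_tokens, visual_feats, past_actions):
        """
        instr_tokens:  (B, L)             instruction token ids
        visual_feats:  (B, T, V, 512)     features per timestep T and view V
        past_actions:  (B, T, action_dim) history of executed actions
        """
        B, T, V, _ = visual_feats.shape
        instr = self.token_emb(instr_tokens)                     # (B, L, D)
        steps = torch.arange(T, device=visual_feats.device)
        views = torch.arange(V, device=visual_feats.device)
        # tag each observation token with its camera view and timestep
        obs = (self.visual_proj(visual_feats)
               + self.view_emb(views)[None, None]
               + self.step_emb(steps)[None, :, None])
        obs = obs.flatten(1, 2)                                  # (B, T*V, D)
        act = self.action_proj(past_actions) + self.step_emb(steps)[None]
        # one joint sequence: instruction, multi-view history, action history
        tokens = torch.cat([instr, obs, act], dim=1)
        fused = self.encoder(tokens)
        # predict the next action from the most recent action token
        return self.action_head(fused[:, -1])


policy = HistoryAwarePolicy()
action = policy(torch.randint(0, 1000, (2, 12)),  # instruction tokens
                torch.randn(2, 5, 3, 512),        # 5 steps, 3 camera views
                torch.randn(2, 5, 8))             # past actions
print(action.shape)  # torch.Size([2, 8])
```

Concatenating all three input streams into one sequence lets self-attention learn dependencies between the instruction, the observation history, and past actions directly, which is the property the abstract highlights.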
Code Repositories
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| robot-manipulation-generalization-on-gembench | Hiveformer | Average Success Rate: 30.4; L1: 60.3±1.5; L2: 26.1±1.4; L3: 35.1±1.7; L4: 0.0±0.0 |
| robot-manipulation-on-rlbench | Hiveformer | Success Rate (10 tasks, 100 demos/task): 83.3; Success Rate (18 tasks, 100 demos/task): 45.3 |