3D visual grounding involves matching natural language descriptions with their corresponding objects in 3D spaces. Existing methods often face challenges with accuracy in object recognition and struggle to interpret complex linguistic queries, particularly descriptions that involve multiple anchors or are view-dependent. In response, we present the MiKASA (Multi-Key-Anchor Scene-Aware) Transformer. Our novel end-to-end trained model integrates a self-attention-based scene-aware object encoder and an original multi-key-anchor technique, enhancing object recognition accuracy and the understanding of spatial relationships. Furthermore, MiKASA improves the explainability of decision-making, facilitating error diagnosis. Our model achieves the highest overall accuracy in the Referit3D challenge on both the Sr3D and Nr3D datasets, excelling by a large margin in categories that require viewpoint-dependent descriptions.
Architecture of our 3D Visual Grounding Model.

The main contributions of our work can be summarized as follows:
• We introduce a scene-aware object encoder that considers contextual information and increases the model's ability to understand object categories.
• We present the multi-key-anchor technique, which enhances spatial understanding. This approach redefines coordinates relative to potential target objects and explicitly assesses the importance of nearby objects through textual context. It addresses the directional ambiguity often found in rotationally invariant models such as PointNet++ by using spatial context to imply target object orientation.
• We develop a novel, end-to-end trainable and explainable architecture that leverages late fusion to separately process distinct aspects of the data, thereby improving both accuracy and the explainability of decision-making.
Our novel spatial module captures relative spatial information from a single viewpoint by treating each object in the scene as a potential anchor. This approach generates unique spatial maps, each offering a different perspective of the scene. These maps then undergo feature augmentation, where distances and angles are calculated, followed by normalization and scaling. Subsequently, an MLP layer transforms these low-dimensional features into higher-dimensional ones for effective fusion with textual data.
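The per-target recentering and feature-augmentation step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the exact feature layout `[dx, dy, dz, dist, angle]`, and the per-map distance normalization are assumptions for demonstration; the subsequent MLP projection is omitted.

```python
import math

def spatial_map_row(centers, target_idx):
    """Build one spatial map for a potential target object: every other
    object becomes an anchor, expressed in target-relative coordinates
    and augmented with distance and horizontal angle.
    centers: list of (x, y, z) object centers. Hypothetical helper."""
    tx, ty, tz = centers[target_idx]
    feats = []
    for j, (x, y, z) in enumerate(centers):
        if j == target_idx:
            continue
        dx, dy, dz = x - tx, y - ty, z - tz            # target-relative coords
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)  # Euclidean distance
        angle = math.atan2(dy, dx)                     # horizontal angle (rad)
        feats.append([dx, dy, dz, dist, angle])
    # normalization/scaling step: rescale distances within this map to [0, 1]
    max_d = max(f[3] for f in feats) or 1.0
    for f in feats:
        f[3] /= max_d
    return feats
```

Running this once per object yields one map per potential target, each describing the same scene from a different reference object, ready to be lifted to a higher dimension by an MLP and fused with the text features.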
Our novel attention-based spatial feature aggregation. Each map designates a different object as the target, while treating all other objects as anchors. The importance of each anchor relative to the potential target object is represented in row i of the score matrix, indicating the relevance of each anchor in the context of the target.
Visual representation of the model's decision-making process in diverse situations. Rows, from top to bottom, depict: (1) choices determined by the category score, (2) choices determined by the spatial score, (3) our model's final selection after combining both scores, and (4) the ground truth. Columns, from left to right, showcase varying scenarios. The green bounding box marks the chosen object; the red bounding boxes mark the unchosen distractors.
Comparative analysis of the accuracy of our end-to-end solution against others on the Sr3D and Nr3D challenges. Notably, in the view-dependent category, our solution demonstrates exceptional performance gains, underscoring MiKASA's accuracy and effectiveness over previous methods.
@inproceedings{chang2024mikasa,
title={MiKASA: Multi-Key-Anchor \& Scene-Aware Transformer for 3D Visual Grounding},
author={Chang, Chun-Peng and Wang, Shaoxiang and Pagani, Alain and Stricker, Didier},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={14131--14140},
year={2024}
}