December 30, 2024
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
title: Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
publish date:
2024-12-24
authors:
Zehan Wang et al.
paper id:
2412.18605v1
abstracts:
Orientation is a key attribute of objects, crucial for understanding their spatial pose and arrangement in images. However, practical solutions for accurate orientation estimation from a single image remain underexplored. In this work, we introduce Orient Anything, the first expert and foundational model designed to estimate object orientation in single- and free-view images. Due to the scarcity of labeled data, we propose extracting knowledge from the 3D world. By developing a pipeline to annotate the front face of 3D objects and render images from random views, we collect 2M images with precise orientation annotations. To fully leverage the dataset, we design a robust training objective that models the 3D orientation as probability distributions of three angles and predicts the object orientation by fitting these distributions. In addition, we employ several strategies to improve synthetic-to-real transfer. Our model achieves state-of-the-art orientation estimation accuracy on both rendered and real images and exhibits impressive zero-shot ability in various scenarios. More importantly, our model enhances many applications, such as comprehension and generation of complex spatial concepts and 3D object pose adjustment.
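The abstract describes predicting orientation by fitting probability distributions over three angles rather than regressing angles directly. As a minimal sketch of the decoding side of such an approach (the paper's actual architecture and bin layout may differ; `angle_from_distribution` and the 1-degree binning are assumptions for illustration), one can discretize an angle into bins, predict a distribution over those bins, and recover a point estimate with a circular mean so that the 0°/360° wrap-around is handled correctly:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def angle_from_distribution(logits, num_bins=360):
    """Decode an angle in degrees from logits over discrete angle bins.

    Uses the circular expectation (mean direction on the unit circle)
    instead of a plain weighted average, so a distribution peaked near
    0/360 degrees does not average to a spurious 180 degrees.
    """
    probs = softmax(logits)
    centers = np.deg2rad(np.arange(num_bins) * (360.0 / num_bins))
    sin_mean = (probs * np.sin(centers)).sum()
    cos_mean = (probs * np.cos(centers)).sum()
    return np.rad2deg(np.arctan2(sin_mean, cos_mean)) % 360.0

# A distribution sharply peaked at the 90-degree bin decodes to ~90.
logits = np.where(np.arange(360) == 90, 10.0, 0.0)
print(angle_from_distribution(logits))
```

The same decoding would be applied independently to each of the three predicted angle distributions (e.g. azimuth, polar, and in-plane rotation).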
QA:
coming soon
Edited and compiled by: wanghaisheng. Last updated: December 30, 2024