September 30, 2024
EgoLM: Multi-Modal Language Model of Egocentric Motions
title: EgoLM: Multi-Modal Language Model of Egocentric Motions
publish date:
2024-09-26
authors:
Fangzhou Hong et al.
paper id:
2409.18127v1
abstract:
With the increasing prevalence of wearable devices, learning egocentric motions becomes essential for developing contextual AI. In this work, we present EgoLM, a versatile framework that tracks and understands egocentric motions from multi-modal inputs, e.g., egocentric videos and motion sensors. EgoLM exploits rich contexts to disambiguate egomotion tracking and understanding, which are ill-posed under single-modality conditions. To facilitate this versatile, multi-modal framework, our key insight is to model the joint distribution of egocentric motions and natural language using large language models (LLMs). Multi-modal sensor inputs are encoded and projected into the joint latent space of the language model, then used to prompt motion generation or text generation for egomotion tracking or understanding, respectively. Extensive experiments on a large-scale multi-modal human motion dataset validate the effectiveness of EgoLM as a generalist model for universal egocentric learning.
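The abstract describes encoding multi-modal sensor inputs and projecting them into the language model's latent space so they can prompt motion or text generation. The sketch below only illustrates that general idea of per-modality projection into a shared embedding space; the module names, feature dimensions, and two-modality setup are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): project per-modality features into an
# LLM's token-embedding dimension so they can serve as a prompt prefix.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn


class MultiModalProjector(nn.Module):
    """Maps per-modality features (e.g., egocentric video clips, motion sensors)
    to the LLM embedding dimension so they can be prepended to text tokens."""

    def __init__(self, video_dim=768, imu_dim=128, llm_dim=1024):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, llm_dim)  # egocentric video features
        self.imu_proj = nn.Linear(imu_dim, llm_dim)      # motion-sensor features

    def forward(self, video_feats, imu_feats):
        # video_feats: (B, Tv, video_dim), imu_feats: (B, Ts, imu_dim)
        video_tokens = self.video_proj(video_feats)
        imu_tokens = self.imu_proj(imu_feats)
        # Concatenate along the sequence axis to form a multi-modal prompt prefix.
        return torch.cat([video_tokens, imu_tokens], dim=1)  # (B, Tv + Ts, llm_dim)


# Usage: the projected tokens would be concatenated with text or motion token
# embeddings and fed to a language model that decodes either motion tokens
# (tracking) or text tokens (understanding), depending on the task prompt.
B, Tv, Ts = 2, 16, 32
projector = MultiModalProjector()
prefix = projector(torch.randn(B, Tv, 768), torch.randn(B, Ts, 128))
print(prefix.shape)  # torch.Size([2, 48, 1024])
```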
QA:
coming soon
Compiled by: wanghaisheng  Updated: September 30, 2024