title: DynamicVLM: Simple Dynamic Visual Token Compression for VideoLLM

publish date:

2024-12-12

authors:

Han Wang et al.

paper id:

2412.09530v1

abstract:

The application of Large Vision-Language Models (LVLMs) for analyzing images and videos is an exciting and rapidly evolving field. In recent years, we've seen significant growth in high-quality image-text datasets for fine-tuning image understanding, but there is still a lack of comparable datasets for videos. Additionally, many VideoLLMs are extensions of single-image VLMs, which may not efficiently handle the complexities of longer videos. In this study, we introduce a large-scale synthetic dataset created from proprietary models, using carefully designed prompts to tackle a wide range of questions. We also explore a dynamic visual token compression architecture that strikes a balance between computational efficiency and performance. Our proposed DynamicVLM achieves state-of-the-art results across various video tasks and shows impressive generalization, setting new baselines in multi-image understanding. Notably, DynamicVLM delivers an absolute improvement of 2.7% over LLaVA-OneVision on VideoMME and 10.7% on MuirBench. Code is available at https://github.com/Hon-Wong/ByteVideoLLM
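The abstract only names the dynamic visual token compression idea at a high level. As a rough illustration only (not the paper's actual architecture), one way to read it is as adaptive pooling of each frame's patch tokens, with the pooling ratio chosen from the frame count so the total visual token count stays within a fixed budget. All function names, shapes, and the `token_budget` parameter below are assumptions for the sketch, not taken from the paper or the linked repository.

```python
import math
import torch
import torch.nn.functional as F


def compress_visual_tokens(frame_tokens: torch.Tensor, token_budget: int = 2048) -> torch.Tensor:
    """Pool per-frame patch tokens so the total count fits a fixed budget.

    frame_tokens: (num_frames, num_patches, hidden), where num_patches is a
    square number (e.g. 24*24 patches from a ViT encoder). The pooled grid
    shrinks as the number of frames grows, so short clips keep fine detail
    while long videos are compressed more aggressively.
    """
    num_frames, num_patches, hidden = frame_tokens.shape
    side = int(math.isqrt(num_patches))
    assert side * side == num_patches, "expected a square patch grid"

    # Tokens allowed per frame under the global budget.
    per_frame_budget = max(1, token_budget // num_frames)
    # Largest pooled grid side whose square still fits the per-frame budget.
    out_side = max(1, min(side, int(math.isqrt(per_frame_budget))))

    # (F, P, H) -> (F, H, side, side) so we can pool over the 2D patch grid.
    grid = frame_tokens.transpose(1, 2).reshape(num_frames, hidden, side, side)
    pooled = F.adaptive_avg_pool2d(grid, out_side)
    # Back to a flat token sequence: (F, out_side * out_side, H).
    return pooled.flatten(2).transpose(1, 2)


if __name__ == "__main__":
    tokens = torch.randn(64, 576, 1024)          # 64 frames, 24x24 patches each
    compressed = compress_visual_tokens(tokens)   # -> (64, 25, 1024) with the default budget
    print(compressed.shape)
```

With this framing, a 64-frame clip is reduced from 64 x 576 = 36,864 visual tokens to 64 x 25 = 1,600, while a short 8-frame clip would keep a much denser 16 x 16 grid per frame; the actual compression rule used by DynamicVLM should be taken from the paper and repository above.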

QA:

coming soon

Edited and compiled by: wanghaisheng. Last updated: December 16, 2024.