dataset
July 10, 2024
Exploring Scalability of Self-Training for Open-Vocabulary Temporal Action Localization
title: Exploring Scalability of Self-Training for Open-Vocabulary Temporal Action Localization
publish date:
2024-07-09
authors:
Jeongseok Hyun et al.
paper id:
2407.07024v1
abstract:
The vocabulary size in temporal action localization (TAL) is constrained by the scarcity of large-scale annotated datasets. To address this, recent works incorporate powerful pre-trained vision-language models (VLMs), such as CLIP, to perform open-vocabulary TAL (OV-TAL). However, unlike VLMs trained on extensive image/video-text pairs, existing OV-TAL methods still rely on small, fully labeled TAL datasets for training an action localizer. In this paper, we explore the scalability of self-training with unlabeled YouTube videos for OV-TAL. Our self-training approach consists of two stages. First, a class-agnostic action localizer is trained on a human-labeled TAL dataset and used to generate pseudo-labels for unlabeled videos. Second, the large-scale pseudo-labeled dataset is combined with the human-labeled dataset to train the localizer. Extensive experiments demonstrate that leveraging web-scale videos in self-training significantly enhances the generalizability of an action localizer. Additionally, we highlighted issues with existing OV-TAL evaluation schemes and proposed a new evaluation protocol. Code is released at https://github.com/HYUNJS/STOV-TAL
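As a rough illustration of the two-stage self-training recipe described in the abstract, the Python sketch below trains a class-agnostic localizer stub on a human-labeled set, keeps its confident predictions on unlabeled videos as pseudo-labels, and retrains on the union of both sets. All names, the toy localizer, and the confidence-threshold filtering are hypothetical placeholders for illustration, not the released STOV-TAL code.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Segment:
    start: float  # action start time in seconds
    end: float    # action end time in seconds
    score: float  # class-agnostic "actionness" confidence in [0, 1]


class ClassAgnosticLocalizer:
    """Stand-in for a temporal action localizer that scores segments without class labels."""

    def fit(self, videos: List[str], labels: Dict[str, List[Segment]]) -> None:
        # Placeholder: a real localizer would be optimized here (e.g., with PyTorch).
        self.num_training_videos = len(videos)

    def predict(self, video: str) -> List[Segment]:
        # Placeholder: a real model returns many scored temporal proposals per video.
        return [Segment(start=0.0, end=1.0, score=0.9)]


def pseudo_label(localizer: ClassAgnosticLocalizer,
                 videos: List[str],
                 score_thr: float = 0.7) -> Dict[str, List[Segment]]:
    """Keep only confident proposals as pseudo-labels (the threshold is an assumed detail)."""
    pseudo: Dict[str, List[Segment]] = {}
    for video in videos:
        confident = [seg for seg in localizer.predict(video) if seg.score >= score_thr]
        if confident:
            pseudo[video] = confident
    return pseudo


# Stage 1: train on the human-labeled TAL dataset, then pseudo-label unlabeled web videos.
labeled_videos = ["labeled_video_0001"]                    # placeholder ids
labeled_segments = {"labeled_video_0001": [Segment(3.2, 7.8, 1.0)]}
unlabeled_videos = ["youtube_video_a", "youtube_video_b"]  # unlabeled YouTube videos

localizer = ClassAgnosticLocalizer()
localizer.fit(labeled_videos, labeled_segments)
pseudo_labels = pseudo_label(localizer, unlabeled_videos)

# Stage 2: retrain the localizer on the union of human labels and pseudo-labels.
combined_videos = labeled_videos + list(pseudo_labels)
combined_segments = {**labeled_segments, **pseudo_labels}
localizer.fit(combined_videos, combined_segments)
print(f"Retrained on {localizer.num_training_videos} videos "
      f"({len(pseudo_labels)} pseudo-labeled).")
```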
QA:
coming soon
Edited by: wanghaisheng  Updated: July 10, 2024