title: Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling
publish date:
2024-12-20
authors:
Maximillian Chen et al.
paper id:
2412.15995v1
abstract:
Conversational assistants are increasingly popular across diverse real-world applications, highlighting the need for advanced multimodal speech modeling. Speech, as a natural mode of communication, encodes rich user-specific characteristics such as speaking rate and pitch, making it critical for effective interaction. Our work introduces a data-centric customization approach for efficiently enhancing multimodal understanding in conversational speech modeling. Central to our contributions is a novel multi-task learning paradigm that involves designing auxiliary tasks to utilize a small amount of speech data. Our approach achieves state-of-the-art performance on the Spoken-SQuAD benchmark, using only 10% of the training data with open-weight models, establishing a robust and efficient framework for audio-centric conversational modeling. We also introduce ASK-QA, the first dataset for multi-turn spoken dialogue with ambiguous user requests and dynamic evaluation inputs. Code and data forthcoming.
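The abstract's core idea, auxiliary tasks that share an encoder with the primary spoken-QA objective, can be sketched as a joint loss. The sketch below is a minimal illustration in PyTorch; the GRU encoder, the transcription-style auxiliary head, and the 0.5 auxiliary weight are assumptions for illustration, not the paper's actual architecture or task design.

```python
import torch
import torch.nn as nn

class MultiTaskSpeechModel(nn.Module):
    """Toy multi-task setup: a shared speech encoder feeds a primary
    spoken-QA head plus an auxiliary head (e.g., transcription).
    All module choices here are illustrative assumptions."""

    def __init__(self, dim=256, vocab=1000):
        super().__init__()
        self.encoder = nn.GRU(input_size=80, hidden_size=dim, batch_first=True)
        self.qa_head = nn.Linear(dim, vocab)    # primary task: answer tokens
        self.aux_head = nn.Linear(dim, vocab)   # auxiliary task: transcript tokens

    def forward(self, feats):
        hidden, _ = self.encoder(feats)         # (batch, time, dim)
        return self.qa_head(hidden), self.aux_head(hidden)

model = MultiTaskSpeechModel()
loss_fn = nn.CrossEntropyLoss()
feats = torch.randn(2, 50, 80)                 # fake 80-dim filterbank frames
qa_tgt = torch.randint(0, 1000, (2, 50))
aux_tgt = torch.randint(0, 1000, (2, 50))

qa_logits, aux_logits = model(feats)
# Joint objective: primary loss plus a down-weighted auxiliary loss,
# so even a small amount of speech data shapes the shared encoder.
loss = loss_fn(qa_logits.transpose(1, 2), qa_tgt) \
     + 0.5 * loss_fn(aux_logits.transpose(1, 2), aux_tgt)
loss.backward()
```

Down-weighting the auxiliary term keeps the primary objective dominant while still letting the auxiliary supervision regularize the shared encoder, which is one plausible reading of how a small amount of speech data can be used efficiently.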
QA:
coming soon
Compiled by: wanghaisheng. Updated: December 24, 2024