Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning
title: Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning
publish date:
2024-11-27
authors:
Omkar Khade et al.
paper id:
2411.18571v1
abstract:
Large Language Models (LLMs) have demonstrated remarkable multilingual capabilities, yet challenges persist in adapting these models for low-resource languages. In this study, we investigate the effects of Low-Rank Adaptation (LoRA) Parameter-Efficient Fine-Tuning (PEFT) on multilingual Gemma models for Marathi, a language with limited resources. Using a translated Alpaca dataset with 52,000 instruction-response pairs, our findings reveal that while evaluation metrics often show a performance decline post-fine-tuning, manual assessments frequently suggest that the fine-tuned models outperform their original counterparts. The observations indicate improvements in target language generation capabilities but a reduction in reasoning abilities following language adaptation. These results underscore the need for improved evaluation methodologies and the creation of high-quality native datasets to accurately assess language-specific model performance in low-resource settings.
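
The abstract describes LoRA-based parameter-efficient fine-tuning of multilingual Gemma models on a translated Alpaca instruction dataset. Below is a minimal sketch of what such an adapter setup can look like with the Hugging Face `peft` library; the model checkpoint, rank, and target modules are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch (not the authors' code): wrapping a Gemma model with LoRA
# adapters via Hugging Face PEFT for instruction fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumed checkpoint; the paper only says "multilingual Gemma models".
base_model_id = "google/gemma-2b"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# LoRA trains small low-rank matrices on selected projection layers
# instead of updating the full parameter set (parameter-efficient fine-tuning).
lora_config = LoraConfig(
    r=16,                     # adapter rank (assumed, not taken from the paper)
    lora_alpha=32,            # scaling factor for the adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Gemma attention projections
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable

# Fine-tuning would then proceed on the translated Alpaca-style
# instruction-response pairs (e.g., with transformers.Trainer or TRL's SFTTrainer).
```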
QA:
coming soon
Compiled by: wanghaisheng. Last updated: December 2, 2024