July 22, 2024
Evaluating the Reliability of Self-Explanations in Large Language Models
title: Evaluating the Reliability of Self-Explanations in Large Language Models
publish date:
2024-07-19
authors:
Korbinian Randl et al.
paper id:
2407.14487v1
abstracts:
This paper investigates the reliability of explanations generated by large language models (LLMs) when prompted to explain their previous output. We evaluate two kinds of such self-explanations - extractive and counterfactual - using three state-of-the-art LLMs (2B to 8B parameters) on two different classification tasks (objective and subjective). Our findings reveal that, while these self-explanations can correlate with human judgement, they do not fully and accurately follow the model's decision process, indicating a gap between perceived and actual model reasoning. We show that this gap can be bridged, because prompting LLMs for counterfactual explanations can produce faithful, informative, and easy-to-verify results. These counterfactuals offer a promising alternative to traditional explainability methods (e.g. SHAP, LIME), provided that prompts are tailored to specific tasks and checked for validity.
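The counterfactual setup the abstract describes is straightforward to check for validity: ask the model to minimally edit the input so the label flips, then re-classify the edit. The sketch below illustrates this loop under stated assumptions; `generate` stands in for any instruction-tuned LLM call, and the prompt wording and helper names (`classify`, `counterfactual`, `is_faithful`) are illustrative, not taken from the paper.

```python
# Minimal sketch of a counterfactual self-explanation validity check.
# Assumption: `generate(prompt) -> str` wraps some 2B-8B instruction-tuned LLM.
from typing import Callable

def classify(generate: Callable[[str], str], text: str, labels: list[str]) -> str:
    """Ask the model for a label from a fixed set."""
    prompt = (
        f"Classify the following text as one of {labels}.\n"
        f"Text: {text}\n"
        f"Answer with the label only."
    )
    answer = generate(prompt).strip()
    # Fall back to the first label if the model answers off-list.
    return answer if answer in labels else labels[0]

def counterfactual(generate: Callable[[str], str], text: str,
                   predicted: str, target: str) -> str:
    """Ask the model to minimally edit the text so its label flips."""
    prompt = (
        f"You labelled this text as '{predicted}'.\n"
        f"Text: {text}\n"
        f"Rewrite it with as few changes as possible so that the correct "
        f"label becomes '{target}'. Return only the rewritten text."
    )
    return generate(prompt).strip()

def is_faithful(generate: Callable[[str], str], text: str,
                labels: list[str]) -> bool:
    """A counterfactual counts as faithful (easy-to-verify) if
    re-classifying the edited text actually yields the target label."""
    predicted = classify(generate, text, labels)
    target = next(l for l in labels if l != predicted)
    edited = counterfactual(generate, text, predicted, target)
    return classify(generate, edited, labels) == target
```

This round-trip check is what makes counterfactuals "easy to verify" relative to attribution methods like SHAP or LIME: faithfulness reduces to a single re-classification rather than an approximation of the model's internals.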
QA:
coming soon
Edited and compiled by: wanghaisheng. Last updated: July 22, 2024