E-ICL: Enhancing Fine-Grained Emotion Recognition through the Lens of Prototype Theory

By Zhaochun Ren and 8 other authors
Abstract: In-context learning (ICL) achieves strong performance in areas such as knowledge acquisition, logical reasoning, and semantic understanding. However, its performance degrades significantly on emotion detection tasks, especially fine-grained emotion recognition, and the reasons behind this remain unclear. In this paper, we identify the causes of ICL's poor performance from the perspective of prototype theory and propose a method to address them. Specifically, through extensive experiments we find that ICL behaves consistently with prototype theory on fine-grained emotion recognition. Based on this theory, we reveal the following shortcomings of ICL: (1) it relies on prototypes (example-label pairs) that are semantically similar but emotionally inaccurate to predict emotions; (2) it is prone to interference from irrelevant classes, which hurts the accuracy and robustness of its predictions. To address these issues, we propose an emotion in-context learning (E-ICL) method for fine-grained emotion recognition. E-ICL predicts categories with emotionally more accurate prototypes by referring to emotionally similar examples with dynamic labels. At the same time, E-ICL adopts an exclusionary emotion prediction strategy to avoid interference from irrelevant classes, further improving accuracy and robustness. Notably, the entire process is accomplished with the help of a plug-and-play emotion plugin model, without additional training. Experiments on the fine-grained emotion datasets EDOS, Empathetic-Dialogues, EmpatheticIntent, and GoEmotions show that E-ICL achieves superior emotion prediction performance. Moreover, even when the emotion plugin model is less than 10% of the size of the LLMs, E-ICL can still boost the performance of LLMs by more than 4% on multiple datasets.
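The abstract describes three mechanisms: retrieving exemplars by emotional (not semantic) similarity via a small plugin model, dynamically re-labeling those exemplars, and excluding irrelevant classes before the LLM predicts. The following is a minimal Python sketch of how these pieces could fit together, not the paper's actual implementation; `emotion_encoder`, `label_scores_fn`, and `llm_classify` are assumed interfaces invented for illustration.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def e_icl_predict(query, pool, emotion_encoder, label_scores_fn, llm_classify,
                  top_k=5, keep_classes=8):
    """One hypothetical E-ICL prediction step (all interfaces are assumptions):

    emotion_encoder(text) -> np.ndarray   emotion embedding from the small plugin model
    label_scores_fn(text) -> dict         per-class emotion scores from the same plugin
    llm_classify(prompt, allowed) -> str  LLM restricted to the `allowed` label set
    """
    # (1) Retrieve prototypes by *emotional* similarity rather than the usual
    #     semantic similarity, addressing the first shortcoming of plain ICL.
    q_vec = emotion_encoder(query)
    ranked = sorted(pool,
                    key=lambda ex: cosine(q_vec, emotion_encoder(ex["text"])),
                    reverse=True)
    prototypes = ranked[:top_k]

    # (2) Assign each prototype a dynamic label: the plugin's top emotion for
    #     that example, so the (example, label) pairs are emotionally accurate.
    demos = []
    for ex in prototypes:
        scores = label_scores_fn(ex["text"])
        demos.append((ex["text"], max(scores, key=scores.get)))

    # (3) Exclusionary prediction: keep only the classes the plugin considers
    #     plausible for the query, then let the LLM choose among those.
    q_scores = label_scores_fn(query)
    allowed = sorted(q_scores, key=q_scores.get, reverse=True)[:keep_classes]

    prompt = "\n".join(f"Text: {t}\nEmotion: {l}" for t, l in demos)
    prompt += f"\nText: {query}\nEmotion:"
    return llm_classify(prompt, allowed)
```

In this reading, any off-the-shelf lightweight emotion classifier could play the plugin role, which would explain why no additional training is needed; step (3) is what the abstract credits for robustness against interference from irrelevant classes.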
Submission history
From: Chu Yang
[v1] Tuesday, 4 June 2024, 10:59:43 UTC (301 KB)
[v2] Friday, 3 January 2025, 02:00:01 UTC (301 KB)
[v3] Monday, 6 January 2025, 01:52:41 UTC (301 KB)
[v4] Friday, 10 October 2025, 01:07:51 UTC (1 KB) (withdrawn)