SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?
Published in ACL 2025 (The 63rd Annual Meeting of the Association for Computational Linguistics), 2025
Recommended citation: Haomin Zhuang, Yihua Zhang, Kehan Guo, Jinghan Jia, Gaowen Liu, Sijia Liu, Xiangliang Zhang. (2025). "SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?" ACL 2025. https://aclanthology.org/2025.acl-long.424.pdf
Machine unlearning in Mixture-of-Experts (MoE) LLMs faces unique challenges: the dynamic routing mechanism reassigns tokens across experts, so standard unlearning techniques, when applied naively, perturb routing and cause excessive forgetting. We propose SEUF, which concentrates unlearning on the expert most relevant to the forget data while stabilizing router behavior, achieving up to a 5% improvement in forget quality and a 35% improvement in model utility while modifying only 0.06% of model parameters.
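To make the idea concrete, here is a minimal PyTorch sketch of the two ingredients described above: selecting the expert most engaged by the forget data, and unlearning only that expert while discouraging router drift. It is illustrative, not the paper's implementation: the mean-routing-probability affinity score, the `anchor_weight` knob, the helper names, and the HuggingFace-style output fields (`.loss`, `.router_logits`, the `experts.{i}.` parameter naming) are all assumptions.

```python
import torch

def select_target_expert(router_logits: torch.Tensor) -> int:
    """Pick the expert with the highest mean routing probability on the
    forget set. `router_logits` has shape (num_tokens, num_experts).
    This affinity heuristic is an assumption; the paper's exact score
    may differ."""
    return router_logits.softmax(dim=-1).mean(dim=0).argmax().item()

def freeze_all_but_expert(model: torch.nn.Module, expert_idx: int) -> None:
    """Restrict gradient updates to the selected expert's parameters
    (a tiny fraction of the model, ~0.06% in the paper's setting).
    The `experts.{i}.` naming scheme is hypothetical and depends on
    the specific MoE architecture."""
    for name, param in model.named_parameters():
        param.requires_grad = f"experts.{expert_idx}." in name

def seuf_style_loss(model_output, target_expert: int,
                    anchor_weight: float = 1.0) -> torch.Tensor:
    """A hedged SEUF-style objective: gradient *ascent* on the forget
    batch (negated LM loss) plus an anchor term that keeps the router
    assigning forget tokens to the selected expert, so the ascent stays
    localized. Assumes a HuggingFace-style MoE output exposing `.loss`
    and per-layer `.router_logits` (as Mixtral-like models do when
    called with output_router_logits=True)."""
    forget_loss = -model_output.loss  # ascend the LM loss on forget data
    anchor = torch.zeros((), device=model_output.loss.device)
    for layer_logits in model_output.router_logits:  # one tensor per MoE layer
        probs = layer_logits.softmax(dim=-1)
        # Penalize routing mass drifting away from the target expert.
        anchor = anchor - torch.log(probs[..., target_expert] + 1e-9).mean()
    return forget_loss + anchor_weight * anchor
```

In a training loop, one would freeze everything except the chosen expert and minimize `seuf_style_loss` on forget batches; because only that expert receives gradients and the anchor term keeps the router pointed at it, retained knowledge held by other experts is left largely untouched.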
