[ACL’25] SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?

The dynamic routing of Mixture-of-Experts (MoE) LLMs poses unique challenges for machine unlearning: naively applying standard unlearning techniques disrupts routing and causes excessive forgetting. We propose SEUF, which attributes the knowledge to be removed to the most relevant expert, unlearns only that expert, and stabilizes the router's behavior so that the selected expert remains activated. SEUF achieves up to a 5% improvement in forgetting quality and up to 35% in model utility while modifying only 0.06% of the model's parameters.
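The core idea (update only the expert most responsible for the forget data, while keeping the router fixed) can be sketched on a toy linear MoE layer. Everything below is an illustrative assumption for exposition, not the paper's actual implementation: a simple gradient-ascent forget objective stands in for the unlearning loss, and freezing the router stands in for router stabilization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 4, 3

# Toy MoE layer: each expert is a linear map W[e]; the router is a linear scorer R.
W = rng.normal(size=(n_experts, d, d))
R = rng.normal(size=(n_experts, d))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x):
    gate = softmax(R @ x)  # routing weights over experts
    return sum(g * (We @ x) for g, We in zip(gate, W)), gate

# A "forget" sample: input x with a memorized target y (both synthetic here).
x = rng.normal(size=d)
y = rng.normal(size=d)

# Attribute: pick the expert most activated on the forget input
# (SEUF unlearns a single selected expert).
_, gate = forward(x)
target = int(np.argmax(gate))

# One gradient-ASCENT step on the forget loss, applied only to the selected
# expert's weights. The router R is frozen, a crude stand-in for SEUF's
# router stabilization, so routing cannot drift during unlearning.
lr = 0.1
y_hat, gate = forward(x)
err = y_hat - y                          # gradient of 0.5*||y_hat - y||^2 w.r.t. y_hat
grad_W_target = gate[target] * np.outer(err, x)
loss_before = 0.5 * np.sum(err ** 2)
W[target] += lr * grad_W_target          # ascent: increase loss on the forget sample

y_hat2, gate2 = forward(x)
loss_after = 0.5 * np.sum((y_hat2 - y) ** 2)
print(loss_after > loss_before)          # forgetting signal increased on x
print(np.allclose(gate, gate2))          # routing unchanged (router frozen)
```

Only `W[target]` is touched, mirroring the paper's point that a tiny parameter fraction (one expert) can carry the unlearning update while the frozen router keeps the rest of the model's behavior intact.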

This paper was accepted to ACL’25 [Paper] [ACL Anthology].