Abstract:
Knowledge distillation (KD) transfers knowledge from a larger teacher model to a smaller student model. Recent advances in meta-learning-based knowledge distillation (MetaKD) emphasize fine-tuning the teacher model with the student's needs in mind to achieve better distillation. However, current MetaKD methods offer the teacher model little incentive to improve itself. We introduce a meta-policy distillation technique that fosters both collaboration and competition while the teacher model is fine-tuned in the meta-learning phase. We also propose a curriculum learning framework for the student model in a competitive setting, in which the student strives to surpass the teacher through self-training on a diverse range of tasks. Extensive experiments on two NLU benchmarks, GLUE and SuperGLUE [45, 46], validate the effectiveness of our method against various KD techniques.