pith. machine review for the scientific record.

arxiv: 1904.09636 · v1 · submitted 2019-04-21 · 💻 cs.CL

Recognition: unknown

Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords model, models, knowledge, results, answering, compression, question, distillation
Original abstract

Deep pre-training and fine-tuning models (like BERT, OpenAI GPT) have demonstrated excellent results in question answering areas. However, due to the sheer amount of model parameters, the inference speed of these models is very slow. How to apply these complex models to real business scenarios becomes a challenging but practical problem. Previous works often leverage model compression approaches to resolve this problem. However, these methods usually induce information loss during the model compression procedure, leading to incomparable results between compressed model and the original model. To tackle this challenge, we propose a Multi-task Knowledge Distillation Model (MKDM for short) for web-scale Question Answering system, by distilling knowledge from multiple teacher models to a light-weight student model. In this way, more generalized knowledge can be transferred. The experiment results show that our method can significantly outperform the baseline methods and even achieve comparable results with the original teacher models, along with significant speedup of model inference.
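The abstract does not spell out the exact MKDM objective, but the core idea it describes, transferring softened predictions from several teacher models to one light-weight student, can be sketched as below. This is a minimal, generic PyTorch sketch under common distillation assumptions; the per-teacher KL terms, the temperature, and the alpha weighting are illustrative choices, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    labels, temperature=2.0, alpha=0.5):
    """Blend a hard-label loss with soft-target losses from several teachers.

    student_logits:      (batch, num_classes) logits from the compact student
    teacher_logits_list: list of (batch, num_classes) logits, one per teacher
    labels:              (batch,) gold labels
    """
    # Standard cross-entropy against the gold labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # KL divergence between the student's and each teacher's softened distributions.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_losses = []
    for teacher_logits in teacher_logits_list:
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        soft_losses.append(
            F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
        )
    soft_loss = torch.stack(soft_losses).mean()

    # Weighted combination of hard-label and distillation terms.
    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

Averaging the per-teacher KL terms is one simple way to pool multiple teachers; weighting teachers by their task or validation accuracy is another common variant.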

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

    cs.CL · 2019-10 · unverdicted · novelty 6.0

    DistilBERT compresses BERT by 40% via pre-training distillation with a triple loss, retaining 97% performance and running 60% faster.
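For readers unfamiliar with DistilBERT's "triple loss", the sketch below illustrates the three components usually cited for it: soft-target distillation against the teacher, masked-language-model cross-entropy, and a cosine loss aligning student and teacher hidden states. The equal weighting and the temperature here are illustrative assumptions, not DistilBERT's published hyperparameters.

```python
import torch
import torch.nn.functional as F

def triple_loss(student_logits, teacher_logits,
                student_hidden, teacher_hidden,
                mlm_labels, temperature=2.0):
    """Soft-target distillation + masked-LM loss + hidden-state cosine alignment."""
    # 1) Distillation: KL between softened teacher and student vocabulary distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # 2) Supervised masked-LM cross-entropy (unmasked positions labelled -100 are ignored).
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss pulling student hidden states toward the teacher's.
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = torch.ones(flat_student.size(0), device=flat_student.device)
    cos = F.cosine_embedding_loss(flat_student, flat_teacher, target)

    # Equal weighting is an assumption; the published recipe uses tuned loss weights.
    return kd + mlm + cos
```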