Pruning Hugging Face BERT: Using Compound Sparsification for Faster CPU Inference with Better Accuracy


In this post, we go into detail on pruning Hugging Face BERT and describe how sparsification combined with the DeepSparse Engine improves BERT model performance on CPUs.
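As a concrete starting point, the sketch below shows one way to run a sparsified BERT ONNX export on CPU with the DeepSparse Engine's Python API. The model path, sequence length, and input ordering are placeholders rather than values from this post; substitute your own pruned BERT export.

```python
# A minimal sketch of running a sparsified BERT ONNX export with the
# DeepSparse Engine on CPU. The file name and shapes are placeholders.
import numpy as np
from deepsparse import compile_model

batch_size = 1
seq_len = 128
onnx_path = "pruned_bert_squad.onnx"  # hypothetical local export

# Compile the ONNX graph for the current CPU.
engine = compile_model(onnx_path, batch_size=batch_size)

# BERT-style inputs: token ids, attention mask, and segment ids
# (order must match the exported model's input order).
inputs = [
    np.zeros((batch_size, seq_len), dtype=np.int64),  # input_ids
    np.ones((batch_size, seq_len), dtype=np.int64),   # attention_mask
    np.zeros((batch_size, seq_len), dtype=np.int64),  # token_type_ids
]

outputs = engine.run(inputs)
print([o.shape for o in outputs])
```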

Over the past six months, we have released support for ResNet-50, YOLOv3, and YOLOv5, showing between 6x and 10x better performance compared to other CPU implementations. Today, we are releasing our initial research on BERT, to be followed by further BERT support (including INT8 quantization combined with pruning) and other popular models in the coming weeks. 

Nearly three years ago, Google researchers released the original BERT paper, establishing transfer learning from Transformer models as the preferred method for many natural language tasks. By pre-training the model on a large text corpus, researchers created a model that understood general language concepts. The model was then fine-tuned on downstream datasets such as SQuAD, giving new, state-of-the-art results. Today, researchers and industry alike can achieve breakthrough results by applying the same principles to their own private datasets with less iteration and limited training.
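To make that transfer-learning workflow concrete, here is a minimal sketch using the Hugging Face transformers library: a publicly available BERT checkpoint already fine-tuned on SQuAD answers questions in a few lines of code. The checkpoint name and example inputs are illustrative, not specific to this post.

```python
# A brief illustration of BERT transfer learning with Hugging Face transformers:
# load a checkpoint fine-tuned on SQuAD and use it for question answering.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="What does pre-training give BERT?",
    context="By pre-training on a large text corpus, BERT learns general "
            "language concepts that can then be fine-tuned on downstream tasks.",
)
print(result["answer"], result["score"])
```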

General language understanding is a complex task, which leads to a relatively large BERT model. Furthermore, the recurrent neural networks (RNNs) traditionally used for the downstream tasks, while not as accurate, are smaller and faster to deploy, limiting BERT's adoption for performance- or cost-sensitive use cases. Given this, many researchers and companies have begun optimizing BERT for smaller deployments and faster inference via techniques such as unstructured pruning. Hugging Face, for example, released PruneBERT, showing that BERT could be adaptively pruned while fine-tuning on downstream datasets. They were able to remove up to 97% of the weights in the network while recovering to within 93% of the original, dense model's accuracy on SQuAD.
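For illustration, the sketch below applies one-shot global magnitude pruning to BERT's linear layers using PyTorch's built-in pruning utilities. This is a simplified stand-in for the adaptive, fine-tuning-aware pruning PruneBERT actually used, and the 90% sparsity target is arbitrary.

```python
# A simplified sketch of unstructured pruning on BERT's Linear layers using
# torch.nn.utils.prune. One-shot magnitude pruning is shown for illustration
# only; PruneBERT pruned adaptively during fine-tuning.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

# Collect the weight tensors of every Linear layer in the model.
parameters_to_prune = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, torch.nn.Linear)
]

# Zero out the 90% of weights with the smallest magnitude across all layers.
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.9,
)

# Report the resulting sparsity.
zeros = sum(float((m.weight == 0).sum()) for m, _ in parameters_to_prune)
total = sum(m.weight.numel() for m, _ in parameters_to_prune)
print(f"sparsity: {zeros / total:.1%}")
```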
