PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition (2021)
Authors:
Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David Cox, James Glass
This paper will be presented by:
Jim Talley
Abstract:
Self-supervised speech representation learning (speech SSL), such as wav2vec 2.0, has demonstrated the benefit of scale in learning rich representations for Automatic Speech Recognition (ASR) with limited paired data. We investigate the existence of sparse subnetworks in pre-trained speech SSL models that achieve even better low-resource ASR results. However, directly applying widely adopted pruning methods such as the Lottery Ticket Hypothesis (LTH) is suboptimal because of the computational cost required. Moreover, we show that the discovered subnetworks yield minimal performance gain compared to the original dense network.
We present Prune-Adjust-Re-Prune (PARP), which discovers and finetunes subnetworks for much better performance, while only requiring a single downstream ASR finetuning run. PARP is inspired by our surprising observation that subnetworks pruned for pre-training tasks need merely a slight adjustment to achieve a sizeable performance boost in downstream ASR tasks. Extensive experiments on low-resource ASR verify (1) sparse subnetworks exist in mono-lingual/multi-lingual
pre-trained speech SSL, and (2) the computational advantage and performance gain of PARP over baseline pruning methods.
In particular, on the 10min Librispeech split without LM decoding, PARP discovers subnetworks from wav2vec 2.0 with an absolute 10.9%/12.6% WER decrease compared to the full model. We further demonstrate the effectiveness of PARP via: cross-lingual pruning without any phone recognition degradation, the discovery of a multi-lingual subnetwork for 10 spoken languages in 1 finetuning run, and its applicability to pre-trained BERT/XLNet for natural language tasks.
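To anchor the discussion, the PARP cycle itself is simple enough to sketch before the meeting. Below is a minimal, hypothetical PyTorch illustration, assuming global magnitude pruning over all parameters and made-up helper names, optimizer, and hyperparameters; the paper's exact recipe differs in its details. The key point is that finetuning updates reach all weights, so weights zeroed by the mask can regrow before each re-prune.

    import torch

    def magnitude_masks(model, sparsity):
        # Global magnitude pruning: mask the smallest-|w| fraction of weights.
        # (Hypothetical helper; the paper prunes specific parameter groups,
        # not necessarily every tensor in the model.)
        all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
        k = max(1, int(sparsity * all_w.numel()))
        threshold = all_w.kthvalue(k).values
        return [(p.detach().abs() > threshold).float() for p in model.parameters()]

    def apply_masks(model, masks):
        # Zero out the pruned weights in place.
        with torch.no_grad():
            for p, m in zip(model.parameters(), masks):
                p.mul_(m)

    def parp(model, dataloader, loss_fn, sparsity, reprune_every=50):
        # Prune: obtain an initial subnetwork from the pre-trained weights.
        masks = magnitude_masks(model, sparsity)
        apply_masks(model, masks)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for step, (inputs, targets) in enumerate(dataloader, start=1):
            # Adjust: ordinary finetuning updates reach *all* weights,
            # so weights zeroed by the mask are free to regrow.
            loss = loss_fn(model(inputs), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Re-Prune: periodically re-impose the target sparsity by magnitude.
            if step % reprune_every == 0:
                masks = magnitude_masks(model, sparsity)
                apply_masks(model, masks)
        # Final prune so the returned model sits at the target sparsity.
        masks = magnitude_masks(model, sparsity)
        apply_masks(model, masks)
        return model, masks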
Paper:
https://arxiv.org/abs/2106.05933
Gentle Summary:
https://techiespedia.org/2021/11/23/latest-from-mit-toward-speech-recognition-for-uncommon-spoken-languages/
Spots are limited to keep the discussions organized.
Austin Deep Learning Journal Club is a group for committed machine learning practitioners and researchers alike. The group meets every other Tuesday to discuss research publications. The publications are usually ones that laid the foundation for ML/DL or explore novel, promising ideas, and are selected by a vote. Participants are expected to read the publications so they can contribute to the discussion and learn from others. This is also a great opportunity to showcase your implementations and get feedback from other experts.
Anyone can suggest and vote for the next paper on the Austin Deep Learning Slack workspace (#paper_group channel): https://austin-deep-learning-slack.herokuapp.com/
Please only RSVP if you are certain that you will be participating.
What to bring:
A copy of the paper (digital or hard copy)