XLNet: Generalized Autoregressive Pretraining for Language Understanding
22 Jun 2019, Prathyush SP

With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining such as BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, by relying on corrupting the input with masks, BERT neglects the dependency between the masked positions and suffers from a pretrain-finetune discrepancy.
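To make the contrast concrete, here is a minimal sketch of the two objectives, with notation loosely following the XLNet paper (x̂ for the corrupted input, x̄ for the set of masked tokens, m_t as the mask indicator); the exact forms in the paper include the softmax parameterization omitted here. An autoregressive language model maximizes the exact left-to-right factorization

\max_\theta \; \log p_\theta(\mathbf{x}) = \sum_{t=1}^{T} \log p_\theta(x_t \mid \mathbf{x}_{<t}),

whereas BERT reconstructs the masked tokens from the corrupted input under an independence assumption across masked positions,

\max_\theta \; \log p_\theta(\bar{\mathbf{x}} \mid \hat{\mathbf{x}}) \approx \sum_{t=1}^{T} m_t \log p_\theta(x_t \mid \hat{\mathbf{x}}), \quad m_t \in \{0,1\}.

The approximation sign marks the neglected dependency between masked positions, and the artificial [MASK] symbols in x̂ never occur during finetuning, which is the pretrain-finetune discrepancy referred to above.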