Smart Batching Tutorial - Speed Up BERT Training
In this blog post / Notebook, I’ll demonstrate how to dramatically speed up BERT’s training by creating batches of samples with similar sequence lengths, so that each batch only needs to be padded to its own longest sequence rather than to a single global maximum.
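To make the idea concrete, here is a minimal sketch of the technique. The function name `make_smart_batches`, the `pad_token_id` default, and the toy token id sequences are illustrative assumptions, not the post's actual code:

```python
import random
import torch

def make_smart_batches(token_ids, batch_size, pad_token_id=0):
    """Illustrative sketch of smart batching: sort samples by length,
    slice into batches, and pad each batch only to that batch's longest
    sequence instead of one global maximum length."""
    # Sort the tokenized samples from shortest to longest.
    samples = sorted(token_ids, key=len)

    batches = []
    for i in range(0, len(samples), batch_size):
        batch = samples[i:i + batch_size]

        # Pad every sample in this batch to the batch's own max length.
        max_len = max(len(s) for s in batch)
        padded = [s + [pad_token_id] * (max_len - len(s)) for s in batch]
        attn_mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in batch]

        batches.append((torch.tensor(padded), torch.tensor(attn_mask)))

    # Shuffle the batch order so training doesn't always see short samples first.
    random.shuffle(batches)
    return batches

# Example usage with toy token id sequences of varying lengths.
toy_samples = [[101, 7592, 102], [101, 7592, 2088, 999, 102], [101, 102]]
for input_ids, mask in make_smart_batches(toy_samples, batch_size=2):
    print(input_ids.shape)
```

Because each batch carries far fewer padding tokens, the model does less wasted computation per step, which is where the training speed-up comes from.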
While working on my recent Multi-Class Classification Example, I kept running out of memory on the Colab GPU, which was a pretty frustrating issue!
If your text data is domain specific (e.g. legal, financial, academic, industry-specific) or otherwise different from the “standard” text corpus used to train BERT and other language models, you might want to consider either continuing to train BERT with some of your text data or looking for a domain-specific language model.
In conjunction with our tutorial for fine-tuning BERT on Named Entity Recognition (NER) tasks here, we wanted to provide some practical guidance and resources for building your own NER application since fine-tuning BERT may not be the best solution for every NER application.
by Chris McCormick