| title (string, 17–126 chars) | author (string, 3–21 chars) | date (string, 11–18 chars) | local (string, 2–59 chars) | tags (string, 2–76 chars) | URL (string, 30–87 chars) | content (string, 1.11k–108k chars) |
|---|---|---|---|---|---|---|
| How to train a new language model from scratch using Transformers and Tokenizers | julien-c | February 14, 2020 | how-to-train | guide, nlp | https://huggingface.co/blog/how-to-train | # How to train a new language model from scratch using Transformers and Tokenizers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"> </a> Over the... |
| How to generate text: using different decoding methods for language generation with Transformers | patrickvonplaten | March, 2020 | how-to-generate | guide, nlp | https://huggingface.co/blog/how-to-generate | # How to generate text: using different decoding methods for language generation with Transformers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Col... |
| The Reformer - Pushing the limits of language modeling | patrickvonplaten | July 3, 2020 | reformer | research, nlp | https://huggingface.co/blog/reformer | # The Reformer - Pushing the limits of language modeling <a href="https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## How the Reformer uses less than 8... |
| Block Sparse Matrices for Smaller and Faster Language Models | madlag | Sep 10, 2020 | pytorch_block_sparse | research, nlp | https://huggingface.co/blog/pytorch_block_sparse | # Block Sparse Matrices for Smaller and Faster Language Models ## Saving space and time, one zero at a time In previous [blog](https://medium.com/huggingface/is-the-future-of-neural-networks-sparse-an-introduction-1-n-d03923ecbd70) [posts](https://medium.com/huggingface/sparse-neural-networks-2-n-gpu-performance-b8... |
| Transformer-based Encoder-Decoder Models | patrickvonplaten | October 10, 2020 | encoder-decoder | research, nlp | https://huggingface.co/blog/encoder-decoder | # Transformers-based Encoder-Decoder Models <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Encoder_Decoder_Model.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # **Transformer-based Encoder-Decoder ... |
| Hyperparameter Search with Transformers and Ray Tune | ray-project | November 2, 2020 | ray-tune | open-source-collab, nlp | https://huggingface.co/blog/ray-tune | # Hyperparameter Search with Transformers and Ray Tune ##### A guest blog post by Richard Liaw from the Anyscale team With cutting edge research implementations, thousands of trained models easily accessible, the Hugging Face [transformers](https://github.com/huggingface/transformers) library has become critical to... |
| Porting fairseq wmt19 translation system to transformers | stas | November 3, 2020 | porting-fsmt | open-source-collab, nlp | https://huggingface.co/blog/porting-fsmt | # Porting fairseq wmt19 translation system to transformers ##### A guest blog post by Stas Bekma... |
| Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models | patrickvonplaten | November 09, 2020 | warm-starting-encoder-decoder | guide, nlp | https://huggingface.co/blog/warm-starting-encoder-decoder | # Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models <a target="_blan... |
| How we sped up transformer inference 100x for 🤗 API customers | Narsil | January 18, 2021 | accelerated-inference | analysis, nlp | https://huggingface.co/blog/accelerated-inference | # How we sped up transformer inference 100x for 🤗 API customers 🤗 Transformers has become... |
| Fit More and Train Faster With ZeRO via DeepSpeed and FairScale | stas | January 19, 2021 | zero-deepspeed-fairscale | guide | https://huggingface.co/blog/zero-deepspeed-fairscale | # Fit More and Train Faster With ZeRO via DeepSpeed and FairScale ##### A guest blog post by Hug... |
End of preview.
Hugging Face Blog Content: each row is one post from the Hugging Face blog, with its title, author, date, local slug, tags, URL, and full markdown content.
- Downloads last month: 19
- Size of downloaded dataset files: 5.07 MB
- Size of the auto-converted Parquet files: 2.53 MB
- Number of rows: 312
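
To work with all 312 rows rather than the preview above, the dataset can be loaded with the 🤗 `datasets` library. The sketch below is illustrative only: the Hub repo ID `your-namespace/hf-blog-content` is a placeholder (this page does not show the dataset's actual path), and the `train` split name is an assumption.

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute the dataset's real path on the Hub.
ds = load_dataset("your-namespace/hf-blog-content", split="train")

# All seven columns are strings: title, author, date, local, tags, URL, content.
print(ds)              # expected: Dataset({features: [...], num_rows: 312})
print(ds[0]["title"])  # first preview row: "How to train a new language model..."

# `tags` is a comma-separated string (e.g. "guide, nlp"), so split it
# before testing membership rather than matching raw substrings.
guides = ds.filter(lambda row: "guide" in [t.strip() for t in row["tags"].split(",")])
print(len(guides), "posts tagged 'guide'")
```

Splitting `tags` before matching avoids substring false positives that a plain `"guide" in row["tags"]` test could produce if a longer tag happened to contain that word.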