{"title":"Introduction - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/1?fw=pt","markdown":"## [](#introduction)Introduction\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\n## [](#welcome-to-the-course)Welcome to the 🤗 Course!\n\nThis course will teach you about natural language processing (NLP) using libraries from the [Hugging Face](https://huggingface.co/) ecosystem — [🤗 Transformers](https://github.com/huggingface/transformers), [🤗 Datasets](https://github.com/huggingface/datasets), [🤗 Tokenizers](https://github.com/huggingface/tokenizers), and [🤗 Accelerate](https://github.com/huggingface/accelerate) — as well as the [Hugging Face Hub](https://huggingface.co/models). It’s completely free and without ads.\n\n## [](#what-to-expect)What to expect?\n\nHere is a brief overview of the course:\n\n![Brief overview of the chapters of the course.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/summary.svg) ![Brief overview of the chapters of the course.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/summary-dark.svg)\n\n- Chapters 1 to 4 provide an introduction to the main concepts of the 🤗 Transformers library. By the end of this part of the course, you will be familiar with how Transformer models work and will know how to use a model from the [Hugging Face Hub](https://huggingface.co/models), fine-tune it on a dataset, and share your results on the Hub!\n- Chapters 5 to 8 teach the basics of 🤗 Datasets and 🤗 Tokenizers before diving into classic NLP tasks. By the end of this part, you will be able to tackle the most common NLP problems by yourself.\n- Chapters 9 to 12 go beyond NLP, and explore how Transformer models can be used to tackle tasks in speech processing and computer vision. Along the way, you’ll learn how to build and share demos of your models, and optimize them for production environments. 
By the end of this part, you will be ready to apply 🤗 Transformers to (almost) any machine learning problem!\n\nThis course:\n\n- Requires a good knowledge of Python\n- Is better taken after an introductory deep learning course, such as [fast.ai’s](https://www.fast.ai/) [Practical Deep Learning for Coders](https://course.fast.ai/) or one of the programs developed by [DeepLearning.AI](https://www.deeplearning.ai/)\n- Does not expect prior [PyTorch](https://pytorch.org/) or [TensorFlow](https://www.tensorflow.org/) knowledge, though some familiarity with either of those will help\n\nAfter you’ve completed this course, we recommend checking out DeepLearning.AI’s [Natural Language Processing Specialization](https://www.coursera.org/specializations/natural-language-processing?utm_source=deeplearning-ai&utm_medium=institutions&utm_campaign=20211011-nlp-2-hugging_face-page-nlp-refresh), which covers a wide range of traditional NLP models like naive Bayes and LSTMs that are well worth knowing about!\n\n## [](#who-are-we)Who are we?\n\nAbout the authors:\n\n[**Abubakar Abid**](https://huggingface.co/abidlabs) completed his PhD at Stanford in applied machine learning. During his PhD, he founded [Gradio](https://github.com/gradio-app/gradio), an open-source Python library that has been used to build over 600,000 machine learning demos. Gradio was acquired by Hugging Face, which is where Abubakar now serves as a machine learning team lead.\n\n[**Matthew Carrigan**](https://huggingface.co/Rocketknight1) is a Machine Learning Engineer at Hugging Face. He lives in Dublin, Ireland and previously worked as an ML engineer at Parse.ly and before that as a post-doctoral researcher at Trinity College Dublin. He does not believe we’re going to get to AGI by scaling existing architectures, but has high hopes for robot immortality regardless.\n\n[**Lysandre Debut**](https://huggingface.co/lysandre) is a Machine Learning Engineer at Hugging Face and has been working on the 🤗 Transformers library since the very early development stages. His aim is to make NLP accessible for everyone by developing tools with a very simple API.\n\n[**Sylvain Gugger**](https://huggingface.co/sgugger) is a Research Engineer at Hugging Face and one of the core maintainers of the 🤗 Transformers library. Previously he was a Research Scientist at fast.ai, and he co-wrote _[Deep Learning for Coders with fastai and PyTorch](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/)_ with Jeremy Howard. The main focus of his research is on making deep learning more accessible, by designing and improving techniques that allow models to train fast on limited resources.\n\n[**Dawood Khan**](https://huggingface.co/dawoodkhan82) is a Machine Learning Engineer at Hugging Face. He’s from NYC and graduated from New York University studying Computer Science. After working as an iOS Engineer for a few years, Dawood quit to start Gradio with his fellow co-founders. Gradio was eventually acquired by Hugging Face.\n\n[**Merve Noyan**](https://huggingface.co/merve) is a developer advocate at Hugging Face, working on developing tools and building content around them to democratize machine learning for everyone.\n\n[**Lucile Saulnier**](https://huggingface.co/SaulLu) is a machine learning engineer at Hugging Face, developing and supporting the use of open source tools. 
She is also actively involved in many research projects in the field of Natural Language Processing such as collaborative training and BigScience.\n\n[**Lewis Tunstall**](https://huggingface.co/lewtun) is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of the O’Reilly book [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098136789/).\n\n[**Leandro von Werra**](https://huggingface.co/lvwerra) is a machine learning engineer in the open-source team at Hugging Face and also a co-author of the O’Reilly book [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098136789/). He has several years of industry experience bringing NLP projects to production by working across the whole machine learning stack.\n\n## [](#faq)FAQ\n\nHere are some answers to frequently asked questions:\n\n- **Does taking this course lead to a certification?** Currently we do not have any certification for this course. However, we are working on a certification program for the Hugging Face ecosystem — stay tuned!\n \n- **How much time should I spend on this course?** Each chapter in this course is designed to be completed in 1 week, with approximately 6-8 hours of work per week. However, you can take as much time as you need to complete the course.\n \n- **Where can I ask a question if I have one?** If you have a question about any section of the course, just click on the “_Ask a question_” banner at the top of the page to be automatically redirected to the right section of the [Hugging Face forums](https://discuss.huggingface.co/):\n \n\n![Link to the Hugging Face forums](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/forum-button.png)\n\nNote that a list of [project ideas](https://discuss.huggingface.co/c/course/course-event/25) is also available on the forums if you wish to practice more once you have completed the course.\n\n- **Where can I get the code for the course?** For each section, click on the banner at the top of the page to run the code in either Google Colab or Amazon SageMaker Studio Lab:\n\n![Link to the Hugging Face course notebooks](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/notebook-buttons.png)\n\nThe Jupyter notebooks containing all the code from the course are hosted on the [`huggingface/notebooks`](https://github.com/huggingface/notebooks) repo. If you wish to generate them locally, check out the instructions in the [`course`](https://github.com/huggingface/course#-jupyter-notebooks) repo on GitHub.\n\n- **How can I contribute to the course?** There are many ways to contribute to the course! If you find a typo or a bug, please open an issue on the [`course`](https://github.com/huggingface/course) repo. If you would like to help translate the course into your native language, check out the instructions [here](https://github.com/huggingface/course#translating-the-course-into-your-language).\n \n- **What were the choices made for each translation?** Each translation has a glossary and `TRANSLATING.txt` file that details the choices that were made for machine learning jargon etc. You can find an example for German [here](https://github.com/huggingface/course/blob/main/chapters/de/TRANSLATING.txt).\n \n\n- **Can I reuse this course?** Of course!
The course is released under the permissive [Apache 2 license](https://www.apache.org/licenses/LICENSE-2.0.html). This means that you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. If you would like to cite the course, please use the following BibTeX:\n\n```\n@misc{huggingfacecourse,\n author = {Hugging Face},\n title = {The Hugging Face Course, 2022},\n howpublished = \"\\url{https://huggingface.co/course}\",\n year = {2022},\n note = \"[Online; accessed <today>]\"\n}\n```\n\n## [](#lets-go)Let's Go\n\nAre you ready to roll? In this chapter, you will learn:\n\n- How to use the `pipeline()` function to solve NLP tasks such as text generation and classification\n- About the Transformer architecture\n- How to distinguish between encoder, decoder, and encoder-decoder architectures and use cases","html":"
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:02.673Z"} {"title":"Transformers, what can they do? - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/3?fw=pt","markdown":"## [](#transformers-what-can-they-do)Transformers, what can they do?\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter1/section3.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter1/section3.ipynb)\n\nIn this section, we will look at what Transformer models can do and use our first tool from the 🤗 Transformers library: the `pipeline()` function.\n\n👀 See that _Open in Colab_ button on the top right? Click on it to open a Google Colab notebook with all the code samples of this section. This button will be present in any section containing code examples.\n\nIf you want to run the examples locally, we recommend taking a look at the [setup](/course/chapter0).\n\n## [](#transformers-are-everywhere)Transformers are everywhere!\n\nTransformer models are used to solve all kinds of NLP tasks, like the ones mentioned in the previous section. Here are some of the companies and organizations using Hugging Face and Transformer models, who also contribute back to the community by sharing their models:\n\n![Companies using Hugging Face](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/companies.PNG)\n\nThe [🤗 Transformers library](https://github.com/huggingface/transformers) provides the functionality to create and use those shared models. The [Model Hub](https://huggingface.co/models) contains thousands of pretrained models that anyone can download and use. You can also upload your own models to the Hub!\n\n⚠️ The Hugging Face Hub is not limited to Transformer models. Anyone can share any kind of models or datasets they want! 
[Create a huggingface.co](https://huggingface.co/join) account to benefit from all available features!\n\nBefore diving into how Transformer models work under the hood, let’s look at a few examples of how they can be used to solve some interesting NLP problems.\n\n## [](#working-with-pipelines)Working with pipelines\n\nThe most basic object in the 🤗 Transformers library is the `pipeline()` function. It connects a model with its necessary preprocessing and postprocessing steps, allowing us to directly input any text and get an intelligible answer:\n\n```\nfrom transformers import pipeline\n\nclassifier = pipeline(\"sentiment-analysis\")\nclassifier(\"I've been waiting for a HuggingFace course my whole life.\")\n```\n\n```\n[{'label': 'POSITIVE', 'score': 0.9598047137260437}]\n```\n\nWe can even pass several sentences!\n\n```\nclassifier(\n [\"I've been waiting for a HuggingFace course my whole life.\", \"I hate this so much!\"]\n)\n```\n\n```\n[{'label': 'POSITIVE', 'score': 0.9598047137260437},\n {'label': 'NEGATIVE', 'score': 0.9994558095932007}]\n```\n\nBy default, this pipeline selects a particular pretrained model that has been fine-tuned for sentiment analysis in English. The model is downloaded and cached when you create the `classifier` object. If you rerun the command, the cached model will be used instead and there is no need to download the model again.\n\nThere are three main steps involved when you pass some text to a pipeline:\n\n1. The text is preprocessed into a format the model can understand.\n2. The preprocessed inputs are passed to the model.\n3. The predictions of the model are post-processed, so you can make sense of them.
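If you are curious what those three steps look like in code, here is a rough sketch of what the sentiment-analysis pipeline does under the hood (a simplified illustration, not the exact implementation). The checkpoint name below is the default that the pipeline downloaded at the time of writing, so treat it as an assumption; Chapter 2 of the course revisits these steps in detail.\n\n```\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\nimport torch\n\n# Assumed checkpoint: the default behind pipeline(\"sentiment-analysis\") at the time of writing\ncheckpoint = \"distilbert-base-uncased-finetuned-sst-2-english\"\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\nmodel = AutoModelForSequenceClassification.from_pretrained(checkpoint)\n\n# 1. Preprocess the raw text into tensors the model understands\ninputs = tokenizer(\"I've been waiting for a HuggingFace course my whole life.\", return_tensors=\"pt\")\n\n# 2. Pass the preprocessed inputs to the model\nwith torch.no_grad():\n    outputs = model(**inputs)\n\n# 3. Post-process the raw logits into a human-readable prediction\nprobabilities = torch.softmax(outputs.logits, dim=-1)\nlabel_id = probabilities.argmax(dim=-1).item()\nprint(model.config.id2label[label_id], probabilities[0, label_id].item())\n```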
Some of the currently [available pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) are:\n\n- `feature-extraction` (get the vector representation of a text)\n- `fill-mask`\n- `ner` (named entity recognition)\n- `question-answering`\n- `sentiment-analysis`\n- `summarization`\n- `text-generation`\n- `translation`\n- `zero-shot-classification`\n\nLet’s have a look at a few of these!\n\n## [](#zero-shot-classification)Zero-shot classification\n\nWe’ll start by tackling a more challenging task where we need to classify texts that haven’t been labelled. This is a common scenario in real-world projects because annotating text is usually time-consuming and requires domain expertise. For this use case, the `zero-shot-classification` pipeline is very powerful: it allows you to specify which labels to use for the classification, so you don’t have to rely on the labels of the pretrained model. You’ve already seen how the model can classify a sentence as positive or negative using those two labels — but it can also classify the text using any other set of labels you like.\n\n```\nfrom transformers import pipeline\n\nclassifier = pipeline(\"zero-shot-classification\")\nclassifier(\n \"This is a course about the Transformers library\",\n candidate_labels=[\"education\", \"politics\", \"business\"],\n)\n```\n\n```\n{'sequence': 'This is a course about the Transformers library',\n 'labels': ['education', 'business', 'politics'],\n 'scores': [0.8445963859558105, 0.111976258456707, 0.043427448719739914]}\n```\n\nThis pipeline is called _zero-shot_ because you don’t need to fine-tune the model on your data to use it. It can directly return probability scores for any list of labels you want!\n\n✏️ **Try it out!** Play around with your own sequences and labels and see how the model behaves.\n\n## [](#text-generation)Text generation\n\nNow let’s see how to use a pipeline to generate some text. The main idea here is that you provide a prompt and the model will auto-complete it by generating the remaining text. This is similar to the predictive text feature that is found on many phones. Text generation involves randomness, so it’s normal if you don’t get the same results as shown below.\n\n```\nfrom transformers import pipeline\n\ngenerator = pipeline(\"text-generation\")\ngenerator(\"In this course, we will teach you how to\")\n```\n\n```\n[{'generated_text': 'In this course, we will teach you how to understand and use '\n 'data flow and data interchange when handling user data. We '\n 'will be working with one or more of the most commonly used '\n 'data flows — data flows of various types, as seen by the '\n 'HTTP'}]\n```\n\nYou can control how many different sequences are generated with the argument `num_return_sequences` and the total length of the output text with the argument `max_length`.\n\n✏️ **Try it out!** Use the `num_return_sequences` and `max_length` arguments to generate two sentences of 15 words each.\n\n## [](#using-any-model-from-the-hub-in-a-pipeline)Using any model from the Hub in a pipeline\n\nThe previous examples used the default model for the task at hand, but you can also choose a particular model from the Hub to use in a pipeline for a specific task — say, text generation. Go to the [Model Hub](https://huggingface.co/models) and click on the corresponding tag on the left to display only the supported models for that task. You should get to a page like [this one](https://huggingface.co/models?pipeline_tag=text-generation).\n\nLet’s try the [`distilgpt2`](https://huggingface.co/distilgpt2) model! Here’s how to load it in the same pipeline as before:\n\n```\nfrom transformers import pipeline\n\ngenerator = pipeline(\"text-generation\", model=\"distilgpt2\")\ngenerator(\n \"In this course, we will teach you how to\",\n max_length=30,\n num_return_sequences=2,\n)\n```\n\n```\n[{'generated_text': 'In this course, we will teach you how to manipulate the world and '\n 'move your mental and physical capabilities to your advantage.'},\n {'generated_text': 'In this course, we will teach you how to become an expert and '\n 'practice realtime, and with a hands on experience on both real '\n 'time and real'}]\n```\n\nYou can refine your search for a model by clicking on the language tags, and pick a model that will generate text in another language. The Model Hub even contains checkpoints for multilingual models that support several languages.\n\nOnce you select a model by clicking on it, you’ll see that there is a widget enabling you to try it directly online. This way you can quickly test the model’s capabilities before downloading it.\n\n✏️ **Try it out!** Use the filters to find a text generation model for another language. Feel free to play with the widget and use it in a pipeline!\n\n### [](#the-inference-api)The Inference API\n\nAll the models can be tested directly through your browser using the Inference API, which is available on the Hugging Face [website](https://huggingface.co/).
You can play with the model directly on this page by inputting custom text and watching the model process the input data.\n\nThe Inference API that powers the widget is also available as a paid product, which comes in handy if you need it for your workflows. See the [pricing page](https://huggingface.co/pricing) for more details.\n\n## [](#mask-filling)Mask filling\n\nThe next pipeline you’ll try is `fill-mask`. The idea of this task is to fill in the blanks in a given text:\n\n```\nfrom transformers import pipeline\n\nunmasker = pipeline(\"fill-mask\")\nunmasker(\"This course will teach you all about <mask> models.\", top_k=2)\n```\n\n```\n[{'sequence': 'This course will teach you all about mathematical models.',\n 'score': 0.19619831442832947,\n 'token': 30412,\n 'token_str': ' mathematical'},\n {'sequence': 'This course will teach you all about computational models.',\n 'score': 0.04052725434303284,\n 'token': 38163,\n 'token_str': ' computational'}]\n```\n\nThe `top_k` argument controls how many possibilities you want to be displayed. Note that here the model fills in the special `<mask>` word, which is often referred to as a _mask token_. Other mask-filling models might have different mask tokens, so it’s always good to verify the proper mask word when exploring other models. One way to check it is by looking at the mask word used in the widget.
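Another option, if you prefer checking in code, is to load the model’s tokenizer and inspect its `mask_token` attribute. The snippet below is only a sketch; it assumes the `distilroberta-base` checkpoint (at the time of writing, the default model behind the `fill-mask` pipeline) is available on the Hub.\n\n```\nfrom transformers import AutoTokenizer\n\n# Load the tokenizer of the checkpoint you want to inspect\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\")\n\n# Prints the mask word this checkpoint expects, e.g. '<mask>' for RoBERTa-style models\nprint(tokenizer.mask_token)\n```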
✏️ **Try it out!** Search for the `bert-base-cased` model on the Hub and identify its mask word in the Inference API widget. What does this model predict for the sentence in our `pipeline` example above?\n\n## [](#named-entity-recognition)Named entity recognition\n\nNamed entity recognition (NER) is a task where the model has to find which parts of the input text correspond to entities such as persons, locations, or organizations. Let’s look at an example:\n\n```\nfrom transformers import pipeline\n\nner = pipeline(\"ner\", grouped_entities=True)\nner(\"My name is Sylvain and I work at Hugging Face in Brooklyn.\")\n```\n\n```\n[{'entity_group': 'PER', 'score': 0.99816, 'word': 'Sylvain', 'start': 11, 'end': 18}, \n {'entity_group': 'ORG', 'score': 0.97960, 'word': 'Hugging Face', 'start': 33, 'end': 45}, \n {'entity_group': 'LOC', 'score': 0.99321, 'word': 'Brooklyn', 'start': 49, 'end': 57}\n]\n```\n\nHere the model correctly identified that Sylvain is a person (PER), Hugging Face an organization (ORG), and Brooklyn a location (LOC).\n\nWe pass the option `grouped_entities=True` in the pipeline creation function to tell the pipeline to regroup together the parts of the sentence that correspond to the same entity: here the model correctly grouped “Hugging” and “Face” as a single organization, even though the name consists of multiple words. In fact, as we will see in the next chapter, the preprocessing even splits some words into smaller parts. For instance, `Sylvain` is split into four pieces: `S`, `##yl`, `##va`, and `##in`. In the post-processing step, the pipeline successfully regrouped those pieces.\n\n✏️ **Try it out!** Search the Model Hub for a model able to do part-of-speech tagging (usually abbreviated as POS) in English. What does this model predict for the sentence in the example above?\n\n## [](#question-answering)Question answering\n\nThe `question-answering` pipeline answers questions using information from a given context:\n\n```\nfrom transformers import pipeline\n\nquestion_answerer = pipeline(\"question-answering\")\nquestion_answerer(\n question=\"Where do I work?\",\n context=\"My name is Sylvain and I work at Hugging Face in Brooklyn\",\n)\n```\n\n```\n{'score': 0.6385916471481323, 'start': 33, 'end': 45, 'answer': 'Hugging Face'}\n```\n\nNote that this pipeline works by extracting information from the provided context; it does not generate the answer.\n\n## [](#summarization)Summarization\n\nSummarization is the task of reducing a text into a shorter text while keeping all (or most) of the important aspects referenced in the text. Here’s an example:\n\n```\nfrom transformers import pipeline\n\nsummarizer = pipeline(\"summarization\")\nsummarizer(\n \"\"\"\n America has changed dramatically during recent years. Not only has the number of \n graduates in traditional engineering disciplines such as mechanical, civil, \n electrical, chemical, and aeronautical engineering declined, but in most of \n the premier American universities engineering curricula now concentrate on \n and encourage largely the study of engineering science. As a result, there \n are declining offerings in engineering subjects dealing with infrastructure, \n the environment, and related issues, and greater concentration on high \n technology subjects, largely supporting increasingly complex scientific \n developments. While the latter is important, it should not be at the expense \n of more traditional engineering.\n\n Rapidly developing economies such as China and India, as well as other \n industrial countries in Europe and Asia, continue to encourage and advance \n the teaching of engineering. Both China and India, respectively, graduate \n six and eight times as many traditional engineers as does the United States. \n Other industrial countries at minimum maintain their output, while America \n suffers an increasingly serious decline in the number of engineering graduates \n and a lack of well-educated engineers.\n\"\"\"\n)\n```\n\n```\n[{'summary_text': ' America has changed dramatically during recent years . The '\n 'number of engineering graduates in the U.S. has declined in '\n 'traditional engineering disciplines such as mechanical, civil '\n ', electrical, chemical, and aeronautical engineering . Rapidly '\n 'developing economies such as China and India, as well as other '\n 'industrial countries in Europe and Asia, continue to encourage '\n 'and advance engineering .'}]\n```\n\nLike with text generation, you can specify a `max_length` or a `min_length` for the result.\n\n## [](#translation)Translation\n\nFor translation, you can use a default model if you provide a language pair in the task name (such as `\"translation_en_to_fr\"`), but the easiest way is to pick the model you want to use on the [Model Hub](https://huggingface.co/models). Here we’ll try translating from French to English:\n\n```\nfrom transformers import pipeline\n\ntranslator = pipeline(\"translation\", model=\"Helsinki-NLP/opus-mt-fr-en\")\ntranslator(\"Ce cours est produit par Hugging Face.\")\n```\n\n```\n[{'translation_text': 'This course is produced by Hugging Face.'}]\n```
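If you would rather rely on a default model, you can also put the language pair directly in the task name, as mentioned above. This is just a sketch: which checkpoint actually gets downloaded for `translation_en_to_fr` is an implementation detail of the library and may change over time.\n\n```\nfrom transformers import pipeline\n\n# Let the library pick its default checkpoint for English-to-French translation\ntranslator = pipeline(\"translation_en_to_fr\")\ntranslator(\"This course is produced by Hugging Face.\")\n# Expected output: something along the lines of 'Ce cours est produit par Hugging Face.'\n```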
Like with text generation and summarization, you can specify a `max_length` or a `min_length` for the result.\n\n✏️ **Try it out!** Search for translation models in other languages and try to translate the previous sentence into a few different languages.\n\nThe pipelines shown so far are mostly for demonstrative purposes. They were programmed for specific tasks and cannot perform variations of them. In the next chapter, you’ll learn what’s inside a `pipeline()` function and how to customize its behavior.","html":"
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:03.574Z"} {"title":"Natural Language Processing - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/2?fw=pt","markdown":"NLP Course documentation\n\nNatural Language Processing\n\n3\\. Fine-tuning a pretrained model\n\n4\\. Sharing models and tokenizers\n\n5\\. The 🤗 Datasets library\n\n6\\. The 🤗 Tokenizers library\n\n9\\. Building and sharing demos new\n\n## [](#natural-language-processing)Natural Language Processing\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nBefore jumping into Transformer models, let’s do a quick overview of what natural language processing is and why we care about it.\n\n## [](#what-is-nlp)What is NLP?\n\nNLP is a field of linguistics and machine learning focused on understanding everything related to human language. The aim of NLP tasks is not only to understand single words individually, but to be able to understand the context of those words.\n\nThe following is a list of common NLP tasks, with some examples of each:\n\n- **Classifying whole sentences**: Getting the sentiment of a review, detecting if an email is spam, determining if a sentence is grammatically correct or whether two sentences are logically related or not\n- **Classifying each word in a sentence**: Identifying the grammatical components of a sentence (noun, verb, adjective), or the named entities (person, location, organization)\n- **Generating text content**: Completing a prompt with auto-generated text, filling in the blanks in a text with masked words\n- **Extracting an answer from a text**: Given a question and a context, extracting the answer to the question based on the information provided in the context\n- **Generating a new sentence from an input text**: Translating a text into another language, summarizing a text\n\nNLP isn’t limited to written text though. 
It also tackles complex challenges in speech recognition and computer vision, such as generating a transcript of an audio sample or a description of an image.\n\n## [](#why-is-it-challenging)Why is it challenging?\n\nComputers don’t process information in the same way as humans. For example, when we read the sentence “I am hungry,” we can easily understand its meaning. Similarly, given two sentences such as “I am hungry” and “I am sad,” we’re able to easily determine how similar they are. For machine learning (ML) models, such tasks are more difficult. The text needs to be processed in a way that enables the model to learn from it. And because language is complex, we need to think carefully about how this processing must be done. There has been a lot of research done on how to represent text, and we will look at some methods in the next chapter.","html":"
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:03.635Z"} {"title":"Encoder models - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/5?fw=pt","markdown":"## [](#encoder-models)Encoder models\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nEncoder models use only the encoder of a Transformer model. At each stage, the attention layers can access all the words in the initial sentence. These models are often characterized as having “bi-directional” attention, and are often called _auto-encoding models_.\n\nThe pretraining of these models usually revolves around somehow corrupting a given sentence (for instance, by masking random words in it) and tasking the model with finding or reconstructing the initial sentence.\n\nEncoder models are best suited for tasks requiring an understanding of the full sentence, such as sentence classification, named entity recognition (and more generally word classification), and extractive question answering.\n\nRepresentatives of this family of models include:\n\n- [ALBERT](https://huggingface.co/transformers/model_doc/albert.html)\n- [BERT](https://huggingface.co/transformers/model_doc/bert.html)\n- [DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)\n- [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)\n- [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tEncoder models - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
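For illustration, here is a minimal sketch of using one of these encoder checkpoints to produce contextual representations of a sentence; `bert-base-uncased` is just an example choice, and the same pattern applies to the other models listed above.

```
import torch
from transformers import AutoTokenizer, AutoModel

# "bert-base-uncased" is used purely as an example encoder checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Encoder models see the whole sentence at once.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden vector per input token, each computed with bi-directional attention
# over the full sentence.
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```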
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:04.874Z"} {"title":"How do Transformers work? - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/4?fw=pt","markdown":"## [](#how-do-transformers-work)How do Transformers work?\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nIn this section, we will take a high-level look at the architecture of Transformer models.\n\n## [](#a-bit-of-transformer-history)A bit of Transformer history\n\nHere are some reference points in the (short) history of Transformer models:\n\n![A brief chronology of Transformers models.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_chrono.svg) ![A brief chronology of Transformers models.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_chrono-dark.svg)\n\nThe [Transformer architecture](https://arxiv.org/abs/1706.03762) was introduced in June 2017. The focus of the original research was on translation tasks. 
This was followed by the introduction of several influential models, including:\n\n- **June 2018**: [GPT](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), the first pretrained Transformer model, used for fine-tuning on various NLP tasks and obtained state-of-the-art results\n \n- **October 2018**: [BERT](https://arxiv.org/abs/1810.04805), another large pretrained model, this one designed to produce better summaries of sentences (more on this in the next chapter!)\n \n- **February 2019**: [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), an improved (and bigger) version of GPT that was not immediately publicly released due to ethical concerns\n \n- **October 2019**: [DistilBERT](https://arxiv.org/abs/1910.01108), a distilled version of BERT that is 60% faster, 40% lighter in memory, and still retains 97% of BERT’s performance\n \n- **October 2019**: [BART](https://arxiv.org/abs/1910.13461) and [T5](https://arxiv.org/abs/1910.10683), two large pretrained models using the same architecture as the original Transformer model (the first to do so)\n \n- **May 2020**, [GPT-3](https://arxiv.org/abs/2005.14165), an even bigger version of GPT-2 that is able to perform well on a variety of tasks without the need for fine-tuning (called _zero-shot learning_)\n \n\nThis list is far from comprehensive, and is just meant to highlight a few of the different kinds of Transformer models. Broadly, they can be grouped into three categories:\n\n- GPT-like (also called _auto-regressive_ Transformer models)\n- BERT-like (also called _auto-encoding_ Transformer models)\n- BART/T5-like (also called _sequence-to-sequence_ Transformer models)\n\nWe will dive into these families in more depth later on.\n\n## [](#transformers-are-language-models)Transformers are language models\n\nAll the Transformer models mentioned above (GPT, BERT, BART, T5, etc.) have been trained as _language models_. This means they have been trained on large amounts of raw text in a self-supervised fashion. Self-supervised learning is a type of training in which the objective is automatically computed from the inputs of the model. That means that humans are not needed to label the data!\n\nThis type of model develops a statistical understanding of the language it has been trained on, but it’s not very useful for specific practical tasks. Because of this, the general pretrained model then goes through a process called _transfer learning_. During this process, the model is fine-tuned in a supervised way — that is, using human-annotated labels — on a given task.\n\nAn example of a task is predicting the next word in a sentence having read the _n_ previous words. 
This is called _causal language modeling_ because the output depends on the past and present inputs, but not the future ones.\n\n![Example of causal language modeling in which the next word from a sentence is predicted.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/causal_modeling.svg) ![Example of causal language modeling in which the next word from a sentence is predicted.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/causal_modeling-dark.svg)\n\nAnother example is _masked language modeling_, in which the model predicts a masked word in the sentence.\n\n![Example of masked language modeling in which a masked word from a sentence is predicted.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/masked_modeling.svg) ![Example of masked language modeling in which a masked word from a sentence is predicted.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/masked_modeling-dark.svg)\n\n## [](#transformers-are-big-models)Transformers are big models\n\nApart from a few outliers (like DistilBERT), the general strategy to achieve better performance is by increasing the models’ sizes as well as the amount of data they are pretrained on.\n\n![Number of parameters of recent Transformers models](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/model_parameters.png)\n\nUnfortunately, training a model, especially a large one, requires a large amount of data. This becomes very costly in terms of time and compute resources. It even translates to environmental impact, as can be seen in the following graph.\n\n![The carbon footprint of a large language model.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/carbon_footprint.svg) ![The carbon footprint of a large language model.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/carbon_footprint-dark.svg)\n\nAnd this is showing a project for a (very big) model led by a team consciously trying to reduce the environmental impact of pretraining. The footprint of running lots of trials to get the best hyperparameters would be even higher.\n\nImagine if each time a research team, a student organization, or a company wanted to train a model, it did so from scratch. This would lead to huge, unnecessary global costs!\n\nThis is why sharing language models is paramount: sharing the trained weights and building on top of already trained weights reduces the overall compute cost and carbon footprint of the community.\n\nBy the way, you can evaluate the carbon footprint of your models’ training through several tools. For example [ML CO2 Impact](https://mlco2.github.io/impact/) or [Code Carbon](https://codecarbon.io/) which is integrated in 🤗 Transformers. 
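As a rough sketch of what such tracking can look like in practice, the snippet below wraps a placeholder training loop with Code Carbon's emissions tracker; `train_model()` stands in for your own training code, and the exact API may vary between versions of the package.

```
# pip install codecarbon
from codecarbon import EmissionsTracker


def train_model():
    # Placeholder standing in for your own training loop.
    pass


tracker = EmissionsTracker()
tracker.start()

train_model()

# stop() returns an estimate of the emissions produced while the tracker was running.
emissions = tracker.stop()
print(f"Estimated emissions: {emissions} kg CO2eq")
```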
To learn more about this, you can read this [blog post](https://huggingface.co/blog/carbon-emissions-on-the-hub) which will show you how to generate an `emissions.csv` file with an estimate of the footprint of your training, as well as the [documentation](https://huggingface.co/docs/hub/model-cards-co2) of 🤗 Transformers addressing this topic.\n\n## [](#transfer-learning)Transfer Learning\n\n_Pretraining_ is the act of training a model from scratch: the weights are randomly initialized, and the training starts without any prior knowledge.\n\n![The pretraining of a language model is costly in both time and money.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/pretraining.svg) ![The pretraining of a language model is costly in both time and money.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/pretraining-dark.svg)\n\nThis pretraining is usually done on very large amounts of data. Therefore, it requires a very large corpus of data, and training can take up to several weeks.\n\n_Fine-tuning_, on the other hand, is the training done **after** a model has been pretrained. To perform fine-tuning, you first acquire a pretrained language model, then perform additional training with a dataset specific to your task. Wait — why not simply train directly for the final task? There are a couple of reasons:\n\n- The pretrained model was already trained on a dataset that has some similarities with the fine-tuning dataset. The fine-tuning process is thus able to take advantage of knowledge acquired by the initial model during pretraining (for instance, with NLP problems, the pretrained model will have some kind of statistical understanding of the language you are using for your task).\n- Since the pretrained model was already trained on lots of data, the fine-tuning requires way less data to get decent results.\n- For the same reason, the amount of time and resources needed to get good results are much lower.\n\nFor example, one could leverage a pretrained model trained on the English language and then fine-tune it on an arXiv corpus, resulting in a science/research-based model. The fine-tuning will only require a limited amount of data: the knowledge the pretrained model has acquired is “transferred,” hence the term _transfer learning_.\n\n![The fine-tuning of a language model is cheaper than pretraining in both time and money.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/finetuning.svg) ![The fine-tuning of a language model is cheaper than pretraining in both time and money.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/finetuning-dark.svg)\n\nFine-tuning a model therefore has lower time, data, financial, and environmental costs. It is also quicker and easier to iterate over different fine-tuning schemes, as the training is less constraining than a full pretraining.\n\nThis process will also achieve better results than training from scratch (unless you have lots of data), which is why you should always try to leverage a pretrained model — one as close as possible to the task you have at hand — and fine-tune it.\n\n## [](#general-architecture)General architecture\n\nIn this section, we’ll go over the general architecture of the Transformer model. 
Don’t worry if you don’t understand some of the concepts; there are detailed sections later covering each of the components.\n\n## [](#introduction)Introduction\n\nThe model is primarily composed of two blocks:\n\n- **Encoder (left)**: The encoder receives an input and builds a representation of it (its features). This means that the model is optimized to acquire understanding from the input.\n- **Decoder (right)**: The decoder uses the encoder’s representation (features) along with other inputs to generate a target sequence. This means that the model is optimized for generating outputs.\n\n![Architecture of a Transformers models](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_blocks.svg) ![Architecture of a Transformers models](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_blocks-dark.svg)\n\nEach of these parts can be used independently, depending on the task:\n\n- **Encoder-only models**: Good for tasks that require understanding of the input, such as sentence classification and named entity recognition.\n- **Decoder-only models**: Good for generative tasks such as text generation.\n- **Encoder-decoder models** or **sequence-to-sequence models**: Good for generative tasks that require an input, such as translation or summarization.\n\nWe will dive into those architectures independently in later sections.\n\n## [](#attention-layers)Attention layers\n\nA key feature of Transformer models is that they are built with special layers called _attention layers_. In fact, the title of the paper introducing the Transformer architecture was [“Attention Is All You Need”](https://arxiv.org/abs/1706.03762)! We will explore the details of attention layers later in the course; for now, all you need to know is that this layer will tell the model to pay specific attention to certain words in the sentence you passed it (and more or less ignore the others) when dealing with the representation of each word.\n\nTo put this into context, consider the task of translating text from English to French. Given the input “You like this course”, a translation model will need to also attend to the adjacent word “You” to get the proper translation for the word “like”, because in French the verb “like” is conjugated differently depending on the subject. The rest of the sentence, however, is not useful for the translation of that word. In the same vein, when translating “this” the model will also need to pay attention to the word “course”, because “this” translates differently depending on whether the associated noun is masculine or feminine. Again, the other words in the sentence will not matter for the translation of “this”. With more complex sentences (and more complex grammar rules), the model would need to pay special attention to words that might appear farther away in the sentence to properly translate each word.\n\nThe same concept applies to any task associated with natural language: a word by itself has a meaning, but that meaning is deeply affected by the context, which can be any other word (or words) before or after the word being studied.\n\nNow that you have an idea of what attention layers are all about, let’s take a closer look at the Transformer architecture.\n\n## [](#the-original-architecture)The original architecture\n\nThe Transformer architecture was originally designed for translation. 
During training, the encoder receives inputs (sentences) in a certain language, while the decoder receives the same sentences in the desired target language. In the encoder, the attention layers can use all the words in a sentence (since, as we just saw, the translation of a given word can be dependent on what is after as well as before it in the sentence). The decoder, however, works sequentially and can only pay attention to the words in the sentence that it has already translated (so, only the words before the word currently being generated). For example, when we have predicted the first three words of the translated target, we give them to the decoder which then uses all the inputs of the encoder to try to predict the fourth word.\n\nTo speed things up during training (when the model has access to target sentences), the decoder is fed the whole target, but it is not allowed to use future words (if it had access to the word at position 2 when trying to predict the word at position 2, the problem would not be very hard!). For instance, when trying to predict the fourth word, the attention layer will only have access to the words in positions 1 to 3.\n\nThe original Transformer architecture looked like this, with the encoder on the left and the decoder on the right:\n\n![Architecture of a Transformers models](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers.svg) ![Architecture of a Transformers models](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers-dark.svg)\n\nNote that the first attention layer in a decoder block pays attention to all (past) inputs to the decoder, but the second attention layer uses the output of the encoder. It can thus access the whole input sentence to best predict the current word. This is very useful as different languages can have grammatical rules that put the words in different orders, or some context provided later in the sentence may be helpful to determine the best translation of a given word.\n\nThe _attention mask_ can also be used in the encoder/decoder to prevent the model from paying attention to some special words — for instance, the special padding word used to make all the inputs the same length when batching together sentences.\n\n## [](#architecture-vs-checkpoints)Architectures vs. checkpoints\n\nAs we dive into Transformer models in this course, you’ll see mentions of _architectures_ and _checkpoints_ as well as _models_. These terms all have slightly different meanings:\n\n- **Architecture**: This is the skeleton of the model — the definition of each layer and each operation that happens within the model.\n- **Checkpoints**: These are the weights that will be loaded in a given architecture.\n- **Model**: This is an umbrella term that isn’t as precise as “architecture” or “checkpoint”: it can mean both. This course will specify _architecture_ or _checkpoint_ when it matters to reduce ambiguity.\n\nFor example, BERT is an architecture while `bert-base-cased`, a set of weights trained by the Google team for the first release of BERT, is a checkpoint. However, one can say “the BERT model” and “the `bert-base-cased` model.”","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tHow do Transformers work? - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
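To make this distinction concrete, here is a minimal sketch contrasting the two: instantiating the BERT architecture with random weights versus loading the `bert-base-cased` checkpoint into that same architecture.

```
from transformers import BertConfig, BertModel

# Architecture only: the BERT skeleton, with randomly initialized weights.
config = BertConfig()
random_model = BertModel(config)

# Architecture + checkpoint: the same skeleton, loaded with the bert-base-cased weights.
pretrained_model = BertModel.from_pretrained("bert-base-cased")
```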
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:04.982Z"} {"title":"Decoder models - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt","markdown":"## [](#decoder-models)Decoder models\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nDecoder models use only the decoder of a Transformer model. At each stage, for a given word the attention layers can only access the words positioned before it in the sentence. These models are often called _auto-regressive models_.\n\nThe pretraining of decoder models usually revolves around predicting the next word in the sentence.\n\nThese models are best suited for tasks involving text generation.\n\nRepresentatives of this family of models include:\n\n- [CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)\n- [GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)\n- [GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)\n- [Transformer XL](https://huggingface.co/transformers/model_doc/transfo-xl.html)","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tDecoder models - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
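As a minimal sketch of how these models are typically used, the text-generation pipeline below continues a prompt with an auto-regressive checkpoint; `gpt2` is just an example choice among the models listed above, and the generated text will vary from run to run.

```
from transformers import pipeline

# "gpt2" is one example of the auto-regressive checkpoints listed above.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one token at a time, attending only to what came before.
print(generator("In this course, we will teach you how to", max_length=30, num_return_sequences=1))
```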
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:05.751Z"} {"title":"Sequence-to-sequence models[sequence-to-sequence-models] - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/7?fw=pt","markdown":"## [](#sequencetosequence-modelssequencetosequencemodels)Sequence-to-sequence models\\[sequence-to-sequence-models\\]\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nEncoder-decoder models (also called _sequence-to-sequence models_) use both parts of the Transformer architecture. At each stage, the attention layers of the encoder can access all the words in the initial sentence, whereas the attention layers of the decoder can only access the words positioned before a given word in the input.\n\nThe pretraining of these models can be done using the objectives of encoder or decoder models, but usually involves something a bit more complex. For instance, [T5](https://huggingface.co/t5-base) is pretrained by replacing random spans of text (that can contain several words) with a single mask special word, and the objective is then to predict the text that this mask word replaces.\n\nSequence-to-sequence models are best suited for tasks revolving around generating new sentences depending on a given input, such as summarization, translation, or generative question answering.\n\nRepresentatives of this family of models include:\n\n- [BART](https://huggingface.co/transformers/model_doc/bart.html)\n- [mBART](https://huggingface.co/transformers/model_doc/mbart.html)\n- [Marian](https://huggingface.co/transformers/model_doc/marian.html)\n- [T5](https://huggingface.co/transformers/model_doc/t5.html)","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tSequence-to-sequence models[sequence-to-sequence-models] - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
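For illustration, here is a minimal sketch of two common ways to use such checkpoints, via the summarization and translation pipelines; the checkpoint names are example choices, and the outputs will depend on the exact model versions you download.

```
from transformers import pipeline

# Summarization with a T5 checkpoint ("t5-small" is only an example choice).
summarizer = pipeline("summarization", model="t5-small")
print(summarizer("Encoder-decoder models use both parts of the Transformer architecture. "
                 "They are well suited to tasks that generate new sentences from an input, "
                 "such as summarization and translation."))

# Translation with a Marian checkpoint (French to English, as an example).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
print(translator("Ce cours est produit par Hugging Face."))
```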
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:05.887Z"} {"title":"Bias and limitations - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/8?fw=pt","markdown":"3\\. Fine-tuning a pretrained model\n\n4\\. Sharing models and tokenizers\n\n5\\. The 🤗 Datasets library\n\n6\\. The 🤗 Tokenizers library\n\n9\\. Building and sharing demos new\n\n## [](#bias-and-limitations)Bias and limitations\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter1/section8.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter1/section8.ipynb)\n\nIf your intent is to use a pretrained model or a fine-tuned version in production, please be aware that, while these models are powerful tools, they come with limitations. The biggest of these is that, to enable pretraining on large amounts of data, researchers often scrape all the content they can find, taking the best as well as the worst of what is available on the internet.\n\nTo give a quick illustration, let’s go back the example of a `fill-mask` pipeline with the BERT model:\n\n```\nfrom transformers import pipeline\n\nunmasker = pipeline(\"fill-mask\", model=\"bert-base-uncased\")\nresult = unmasker(\"This man works as a [MASK].\")\nprint([r[\"token_str\"] for r in result])\n\nresult = unmasker(\"This woman works as a [MASK].\")\nprint([r[\"token_str\"] for r in result])```\n\n```\n['lawyer', 'carpenter', 'doctor', 'waiter', 'mechanic']\n['nurse', 'waitress', 'teacher', 'maid', 'prostitute']```\n\nWhen asked to fill in the missing word in these two sentences, the model gives only one gender-free answer (waiter/waitress). 
The others are work occupations usually associated with one specific gender — and yes, prostitute ended up in the top 5 possibilities the model associates with “woman” and “work.” This happens even though BERT is one of the rare Transformer models not built by scraping data from all over the internet, but rather using apparently neutral data (it’s trained on the [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [BookCorpus](https://huggingface.co/datasets/bookcorpus) datasets).\n\nWhen you use these tools, you therefore need to keep in the back of your mind that the original model you are using could very easily generate sexist, racist, or homophobic content. Fine-tuning the model on your data won’t make this intrinsic bias disappear.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tBias and limitations - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:06.452Z"} {"title":"Summary - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/9?fw=pt","markdown":"## [](#summary)Summary\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nIn this chapter, you saw how to approach different NLP tasks using the high-level `pipeline()` function from 🤗 Transformers. You also saw how to search for and use models in the Hub, as well as how to use the Inference API to test the models directly in your browser.\n\nWe discussed how Transformer models work at a high level, and talked about the importance of transfer learning and fine-tuning. A key aspect is that you can use the full architecture or only the encoder or decoder, depending on what kind of task you aim to solve. The following table summarizes this:\n\n| Model | Examples | Tasks |\n| --- | --- | --- |\n| Encoder | ALBERT, BERT, DistilBERT, ELECTRA, RoBERTa | Sentence classification, named entity recognition, extractive question answering |\n| Decoder | CTRL, GPT, GPT-2, Transformer XL | Text generation |\n| Encoder-decoder | BART, T5, Marian, mBART | Summarization, translation, generative question answering |","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tSummary - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
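As one more minimal sketch tying the table to code, extractive question answering (an encoder-style task from the first row) can be run through the question-answering pipeline; no model is specified here, so 🤗 Transformers falls back to its default checkpoint for the task.

```
from transformers import pipeline

# Extractive question answering: the answer is a span copied from the provided context.
question_answerer = pipeline("question-answering")
result = question_answerer(
    question="Which tasks are encoder models suited for?",
    context="Encoder models are best suited for sentence classification, "
            "named entity recognition, and extractive question answering.",
)
print(result)  # includes the extracted answer span and a confidence score
```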
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:06.604Z"} {"title":"End-of-chapter quiz - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter1/10?fw=pt","markdown":"3\\. Fine-tuning a pretrained model\n\n4\\. Sharing models and tokenizers\n\n5\\. The 🤗 Datasets library\n\n6\\. The 🤗 Tokenizers library\n\n9\\. Building and sharing demos new\n\n## [](#end-of-chapter-quiz)End-of-chapter quiz\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-1-questions)\n\nThis chapter covered a lot of ground! Don’t worry if you didn’t grasp all the details; the next chapters will help you understand how things work under the hood.\n\nFirst, though, let’s test what you learned in this chapter!\n\n### [](#1.-explore-the-hub-and-look-for-the-roberta-large-mnli-checkpoint.-what-task-does-it-perform?)1\\. Explore the Hub and look for the `roberta-large-mnli` checkpoint. What task does it perform?\n\n### [](#2.-what-will-the-following-code-return?)2\\. What will the following code return?\n\n```\nfrom transformers import pipeline\n\nner = pipeline(\"ner\", grouped_entities=True)\nner(\"My name is Sylvain and I work at Hugging Face in Brooklyn.\")```\n\n### [](#3.-what-should-replace-…-in-this-code-sample?)3\\. What should replace … in this code sample?\n\n```\nfrom transformers import pipeline\n\nfiller = pipeline(\"fill-mask\", model=\"bert-base-cased\")\nresult = filler(\"...\")```\n\n### [](#4.-why-will-this-code-fail?)4\\. Why will this code fail?\n\n```\nfrom transformers import pipeline\n\nclassifier = pipeline(\"zero-shot-classification\")\nresult = classifier(\"This is a course about the Transformers library\")```\n\n### [](#5.-what-does-“transfer-learning”-mean?)5\\. What does “transfer learning” mean?\n\n### [](#6.-true-or-false?-a-language-model-usually-does-not-need-labels-for-its-pretraining.)6\\. True or false? A language model usually does not need labels for its pretraining.\n\n### [](#7.-select-the-sentence-that-best-describes-the-terms-“model”,-“architecture”,-and-“weights”.)7\\. 
Select the sentence that best describes the terms “model”, “architecture”, and “weights”.\n\n### [](#8.-which-of-these-types-of-models-would-you-use-for-completing-prompts-with-generated-text?)8\\. Which of these types of models would you use for completing prompts with generated text?\n\n### [](#9.-which-of-those-types-of-models-would-you-use-for-summarizing-texts?)9\\. Which of those types of models would you use for summarizing texts?\n\n### [](#10.-which-of-these-types-of-models-would-you-use-for-classifying-text-inputs-according-to-certain-labels?)10\\. Which of these types of models would you use for classifying text inputs according to certain labels?\n\n### [](#11.-what-possible-source-can-the-bias-observed-in-a-model-have?)11\\. What possible source can the bias observed in a model have?","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tEnd-of-chapter quiz - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
End-of-chapter quiz

\"Ask

This chapter covered a lot of ground! Don’t worry if you didn’t grasp all the details; the next chapters will help you understand how things work under the hood.

First, though, let’s test what you learned in this chapter!

1. Explore the Hub and look for the roberta-large-mnli checkpoint. What task does it perform?

2. What will the following code return?

```
from transformers import pipeline

ner = pipeline("ner", grouped_entities=True)
ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
```

3. What should replace … in this code sample?

```
from transformers import pipeline

filler = pipeline("fill-mask", model="bert-base-cased")
result = filler("...")
```

4. Why will this code fail?

```
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier("This is a course about the Transformers library")
```

5. What does “transfer learning” mean?

6. True or false? A language model usually does not need labels for its pretraining.

7. Select the sentence that best describes the terms “model”, “architecture”, and “weights”.

8. Which of these types of models would you use for completing prompts with generated text?

9. Which of those types of models would you use for summarizing texts?

10. Which of these types of models would you use for classifying text inputs according to certain labels?

11. What possible source can the bias observed in a model have?

{"title":"Introduction - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter0/1?fw=pt"}

Introduction

Welcome to the Hugging Face course! This introduction will guide you through setting up a working environment. If you’re just starting the course, we recommend you first take a look at Chapter 1, then come back and set up your environment so you can try the code yourself.

All the libraries that we’ll be using in this course are available as Python packages, so here we’ll show you how to set up a Python environment and install the specific libraries you’ll need.

We’ll cover two ways of setting up your working environment, using a Colab notebook or a Python virtual environment. Feel free to choose the one that resonates with you the most. For beginners, we strongly recommend that you get started by using a Colab notebook.

Note that we will not be covering the Windows system. If you’re running on Windows, we recommend following along using a Colab notebook. If you’re using a Linux distribution or macOS, you can use either approach described here.

Most of the course relies on you having a Hugging Face account. We recommend creating one now: create an account.

Using a Google Colab notebook

Using a Colab notebook is the simplest possible setup; boot up a notebook in your browser and get straight to coding!

If you’re not familiar with Colab, we recommend you start by following the introduction. Colab allows you to use some accelerating hardware, like GPUs or TPUs, and it is free for smaller workloads.

Once you’re comfortable moving around in Colab, create a new notebook and get started with the setup:

\"An

The next step is to install the libraries that we’ll be using in this course. We’ll use pip for the installation, which is the package manager for Python. In notebooks, you can run system commands by preceding them with the ! character, so you can install the 🤗 Transformers library as follows:

```
!pip install transformers
```

You can make sure the package was correctly installed by importing it within your Python runtime:

```
import transformers
```

[Image: a gif showing the result of the two commands above: installation and import]

This installs a very light version of 🤗 Transformers. In particular, no specific machine learning frameworks (like PyTorch or TensorFlow) are installed. Since we’ll be using a lot of different features of the library, we recommend installing the development version, which comes with all the required dependencies for pretty much any imaginable use case:

```
!pip install transformers[sentencepiece]
```

This will take a bit of time, but then you’ll be ready to go for the rest of the course!

Using a Python virtual environment

If you prefer to use a Python virtual environment, the first step is to install Python on your system. We recommend following this guide to get started.

Once you have Python installed, you should be able to run Python commands in your terminal. You can start by running the following command to ensure that it is correctly installed before proceeding to the next steps: python --version. This should print out the Python version now available on your system.

When running a Python command in your terminal, such as python --version, you should think of the program running your command as the “main” Python on your system. We recommend keeping this main installation free of any packages, and using it to create separate environments for each application you work on — this way, each application can have its own dependencies and packages, and you won’t need to worry about potential compatibility issues with other applications.

In Python this is done with virtual environments, which are self-contained directory trees that each contain a Python installation with a particular Python version alongside all the packages the application needs. Creating such a virtual environment can be done with a number of different tools, but we’ll use the official Python package for that purpose, which is called venv.

First, create the directory you’d like your application to live in — for example, you might want to make a new directory called transformers-course at the root of your home directory:

```
mkdir ~/transformers-course
cd ~/transformers-course
```

From inside this directory, create a virtual environment using the Python venv module:

```
python -m venv .env
```

You should now have a directory called .env in your otherwise empty folder:

```
ls -a
```

```
.      ..    .env
```

You can jump in and out of your virtual environment with the activate and deactivate scripts:

```
# Activate the virtual environment
source .env/bin/activate

# Deactivate the virtual environment
source .env/bin/deactivate
```

You can make sure that the environment is activated by running the which python command: if it points to the virtual environment, then you have successfully activated it!

```
which python
```

```
/home/<user>/transformers-course/.env/bin/python
```

Installing dependencies

As in the previous section on using Google Colab instances, you’ll now need to install the packages required to continue. Again, you can install the development version of 🤗 Transformers using the pip package manager:

pip install \"transformers[sentencepiece]\"

You’re now all set up and ready to go!
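
If you want a quick sanity check before moving on, a minimal sketch (assuming the installation above succeeded) is to import the library and print its version from Python:

```
import transformers

# The exact version number will depend on when you installed the library.
print(transformers.__version__)
```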

{"title":"Introduction - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter2/1?fw=pt"}

Introduction

\"Ask

As you saw in Chapter 1, Transformer models are usually very large. With millions to tens of billions of parameters, training and deploying these models is a complicated undertaking. Furthermore, with new models being released on a near-daily basis and each having its own implementation, trying them all out is no easy task.

The 🤗 Transformers library was created to solve this problem. Its goal is to provide a single API through which any Transformer model can be loaded, trained, and saved. The library’s main features are:

  • Ease of use: Downloading, loading, and using a state-of-the-art NLP model for inference can be done in just two lines of code (see the sketch right after this list).
  • Flexibility: At their core, all models are simple PyTorch nn.Module or TensorFlow tf.keras.Model classes and can be handled like any other models in their respective machine learning (ML) frameworks.
  • Simplicity: Hardly any abstractions are made across the library. The “All in one file” is a core concept: a model’s forward pass is entirely defined in a single file, so that the code itself is understandable and hackable.
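
As an illustration of the “Ease of use” point above, here is a minimal sketch of that two-line pattern using the pipeline() function (the checkpoint is downloaded automatically on first use; the example sentence is ours):

```
from transformers import pipeline

# Two lines: create a pipeline, then run inference with it.
classifier = pipeline("sentiment-analysis")
print(classifier("I love how little boilerplate this takes!"))
```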

This last feature makes 🤗 Transformers quite different from other ML libraries. The models are not built on modules that are shared across files; instead, each model has its own layers. In addition to making the models more approachable and understandable, this allows you to easily experiment on one model without affecting others.

This chapter will begin with an end-to-end example where we use a model and a tokenizer together to replicate the pipeline() function introduced in Chapter 1. Next, we’ll discuss the model API: we’ll dive into the model and configuration classes, and show you how to load a model and how it processes numerical inputs to output predictions.

Then we’ll look at the tokenizer API, which is the other main component of the pipeline() function. Tokenizers take care of the first and last processing steps, handling the conversion from text to numerical inputs for the neural network, and the conversion back to text when it is needed. Finally, we’ll show you how to handle sending multiple sentences through a model in a prepared batch, then wrap it all up with a closer look at the high-level tokenizer() function.

⚠️ In order to benefit from all features available with the Model Hub and 🤗 Transformers, we recommend creating an account.

{"title":"Models - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter2/3?fw=pt"}

Models

\"Ask \"Open \"Open

In this section we’ll take a closer look at creating and using a model. We’ll use the AutoModel class, which is handy when you want to instantiate any model from a checkpoint.

The AutoModel class and all of its relatives are actually simple wrappers over the wide variety of models available in the library. It’s a clever wrapper, as it can automatically guess the appropriate model architecture for your checkpoint and then instantiate a model with that architecture.

However, if you know the type of model you want to use, you can use the class that defines its architecture directly. Let’s take a look at how this works with a BERT model.

Creating a Transformer

The first thing we’ll need to do to initialize a BERT model is load a configuration object:

```
from transformers import BertConfig, BertModel

# Building the config
config = BertConfig()

# Building the model from the config
model = BertModel(config)
```

The configuration contains many attributes that are used to build the model:

```
print(config)
```

```
BertConfig {
  [...]
  "hidden_size": 768,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  [...]
}
```

While you haven’t seen what all of these attributes do yet, you should recognize some of them: the hidden_size attribute defines the size of the hidden_states vector, and num_hidden_layers defines the number of layers the Transformer model has.
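
As a small sketch of how these attributes shape the model you build, the same BertConfig can be instantiated with non-default values (the attribute names are the ones shown above; the values here are arbitrary and only for illustration):

```
from transformers import BertConfig, BertModel

# A deliberately smaller BERT: 6 layers, hidden size 384 (divisible by the 6 attention heads).
config = BertConfig(
    num_hidden_layers=6,
    hidden_size=384,
    num_attention_heads=6,
    intermediate_size=1536,
)
model = BertModel(config)
print(config.num_hidden_layers, config.hidden_size)
```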

Different loading methods

Creating a model from the default configuration initializes it with random values:

```
from transformers import BertConfig, BertModel

config = BertConfig()
model = BertModel(config)

# Model is randomly initialized!
```

The model can be used in this state, but it will output gibberish; it needs to be trained first. We could train the model from scratch on the task at hand, but as you saw in Chapter 1, this would require a long time and a lot of data, and it would have a non-negligible environmental impact. To avoid unnecessary and duplicated effort, it’s imperative to be able to share and reuse models that have already been trained.

Loading a Transformer model that is already trained is simple — we can do this using the from_pretrained() method:

```
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-cased")
```

As you saw earlier, we could replace BertModel with the equivalent AutoModel class. We’ll do this from now on as this produces checkpoint-agnostic code; if your code works for one checkpoint, it should work seamlessly with another. This applies even if the architecture is different, as long as the checkpoint was trained for a similar task (for example, a sentiment analysis task).
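
For reference, a checkpoint-agnostic version of the loading code above would look like the following sketch; AutoModel reads the checkpoint’s configuration and picks the matching architecture:

```
from transformers import AutoModel

# The same call works for checkpoints with other architectures, too.
model = AutoModel.from_pretrained("bert-base-cased")
```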

In the code sample above we didn’t use BertConfig, and instead loaded a pretrained model via the bert-base-cased identifier. This is a model checkpoint that was trained by the authors of BERT themselves; you can find more details about it in its model card.

This model is now initialized with all the weights of the checkpoint. It can be used directly for inference on the tasks it was trained on, and it can also be fine-tuned on a new task. By training with pretrained weights rather than from scratch, we can quickly achieve good results.

The weights have been downloaded and cached (so future calls to the from_pretrained() method won’t re-download them) in the cache folder, which defaults to ~/.cache/huggingface/transformers. You can customize your cache folder by setting the HF_HOME environment variable.
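
If you would rather set the cache location from Python than in your shell configuration, one possible approach (a sketch; the path below is only an example) is to set the environment variable before importing 🤗 Transformers:

```
import os

# Set before importing transformers so the library picks it up; the path is an example.
os.environ["HF_HOME"] = "/path/to/my/hf-cache"

from transformers import BertModel

model = BertModel.from_pretrained("bert-base-cased")
```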

The identifier used to load the model can be the identifier of any model on the Model Hub, as long as it is compatible with the BERT architecture. The entire list of available BERT checkpoints can be found at https://huggingface.co/models?filter=bert.

Saving methods

Saving a model is as easy as loading one — we use the save_pretrained() method, which is analogous to the from_pretrained() method:

model.save_pretrained(\"directory_on_my_computer\")

This saves two files to your disk:

```
ls directory_on_my_computer

config.json pytorch_model.bin
```

If you take a look at the config.json file, you’ll recognize the attributes necessary to build the model architecture. This file also contains some metadata, such as where the checkpoint originated and what 🤗 Transformers version you were using when you last saved the checkpoint.

The pytorch_model.bin file is known as the state dictionary; it contains all your model’s weights. The two files go hand in hand; the configuration is necessary to know your model’s architecture, while the model weights are your model’s parameters.
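
Because the configuration describes the architecture and the state dictionary holds the weights, the saved directory can be loaded back with from_pretrained() exactly like a Hub checkpoint. A quick sketch, assuming the save_pretrained() call above has been run:

```
from transformers import BertModel

# Point from_pretrained() at the local directory instead of a Hub identifier.
model = BertModel.from_pretrained("directory_on_my_computer")
```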

Using a Transformer model for inference

Now that you know how to load and save a model, let’s try using it to make some predictions. Transformer models can only process numbers — numbers that the tokenizer generates. But before we discuss tokenizers, let’s explore what inputs the model accepts.

Tokenizers can take care of casting the inputs to the appropriate framework’s tensors, but to help you understand what’s going on, we’ll take a quick look at what must be done before sending the inputs to the model.

Let’s say we have a couple of sequences:

sequences = [\"Hello!\", \"Cool.\", \"Nice!\"]

The tokenizer converts these to vocabulary indices which are typically called input IDs. Each sequence is now a list of numbers! The resulting output is:

```
encoded_sequences = [
    [101, 7592, 999, 102],
    [101, 4658, 1012, 102],
    [101, 3835, 999, 102],
]
```

This is a list of encoded sequences: a list of lists. Tensors only accept rectangular shapes (think matrices). This “array” is already of rectangular shape, so converting it to a tensor is easy:

```
import torch

model_inputs = torch.tensor(encoded_sequences)
```

Using the tensors as inputs to the model

Making use of the tensors with the model is extremely simple — we just call the model with the inputs:

```
output = model(model_inputs)
```

While the model accepts a lot of different arguments, only the input IDs are necessary. We’ll explain what the other arguments do and when they are required later, but first we need to take a closer look at the tokenizers that build the inputs that a Transformer model can understand.
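
To make the shape of that output concrete, a short sketch (assuming the bert-base-cased model and the model_inputs tensor from above): the last hidden state has one row per sequence, one position per token, and hidden_size values per position.

```
output = model(model_inputs)

# 3 sequences of 4 tokens each, with bert-base-cased's hidden size of 768:
# torch.Size([3, 4, 768])
print(output.last_hidden_state.shape)
```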

{"title":"Behind the pipeline - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter2/2?fw=pt"}

Behind the pipeline

\"Ask \"Open \"Open
This is the first section where the content is slightly different depending on whether you use PyTorch or TensorFlow. Toggle the switch on top of the title to select the platform you prefer!

Let’s start with a complete example, taking a look at what happened behind the scenes when we executed the following code in Chapter 1:

```
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
classifier(
    [
        "I've been waiting for a HuggingFace course my whole life.",
        "I hate this so much!",
    ]
)
```

and obtained:

```
[{'label': 'POSITIVE', 'score': 0.9598047137260437},
 {'label': 'NEGATIVE', 'score': 0.9994558095932007}]
```

As we saw in Chapter 1, this pipeline groups together three steps: preprocessing, passing the inputs through the model, and postprocessing:

\"The \"The

Let’s quickly go over each of these.

Preprocessing with a tokenizer

Like other neural networks, Transformer models can’t process raw text directly, so the first step of our pipeline is to convert the text inputs into numbers that the model can make sense of. To do this we use a tokenizer, which will be responsible for:

  • Splitting the input into words, subwords, or symbols (like punctuation) that are called tokens
  • Mapping each token to an integer
  • Adding additional inputs that may be useful to the model

All this preprocessing needs to be done in exactly the same way as when the model was pretrained, so we first need to download that information from the Model Hub. To do this, we use the AutoTokenizer class and its from_pretrained() method. Using the checkpoint name of our model, it will automatically fetch the data associated with the model’s tokenizer and cache it (so it’s only downloaded the first time you run the code below).

Since the default checkpoint of the sentiment-analysis pipeline is distilbert-base-uncased-finetuned-sst-2-english (you can see its model card at https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english), we run the following:

```
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

Once we have the tokenizer, we can directly pass our sentences to it and we’ll get back a dictionary that’s ready to feed to our model! The only thing left to do is to convert the list of input IDs to tensors.

You can use 🤗 Transformers without having to worry about which ML framework is used as a backend; it might be PyTorch or TensorFlow, or Flax for some models. However, Transformer models only accept tensors as input. If this is your first time hearing about tensors, you can think of them as NumPy arrays instead. A NumPy array can be a scalar (0D), a vector (1D), a matrix (2D), or have more dimensions. It’s effectively a tensor; other ML frameworks’ tensors behave similarly, and are usually as simple to instantiate as NumPy arrays.
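
If tensors are new to you, a tiny sketch of the analogy (assuming NumPy and PyTorch are available, as they are in the setups described earlier):

```
import numpy as np
import torch

array = np.array([[101, 7592, 999, 102], [101, 4658, 1012, 102]])  # a 2D NumPy array (a matrix)
tensor = torch.tensor(array)  # the PyTorch tensor is built just as easily

print(array.shape, tensor.shape)  # (2, 4) and torch.Size([2, 4])
```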

To specify the type of tensors we want to get back (PyTorch, TensorFlow, or plain NumPy), we use the return_tensors argument:

```
raw_inputs = [
    "I've been waiting for a HuggingFace course my whole life.",
    "I hate this so much!",
]
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors="pt")
print(inputs)
```

Don’t worry about padding and truncation just yet; we’ll explain those later. The main things to remember here are that you can pass one sentence or a list of sentences, as well as specifying the type of tensors you want to get back (if no type is passed, you will get a list of lists as a result).
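
For instance, here is a sketch of what you get without return_tensors, using the same raw_inputs as above: the values come back as plain Python lists, which you would then have to convert to tensors yourself.

```
plain = tokenizer(raw_inputs, padding=True, truncation=True)

# Without return_tensors, input_ids is a list of lists rather than a tensor.
print(type(plain["input_ids"]))
print(plain["input_ids"][1])
```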

Here’s what the results look like as PyTorch tensors:

```
{
    'input_ids': tensor([
        [  101,  1045,  1005,  2310,  2042,  3403,  2005,  1037, 17662, 12172,  2607,  2026,  2878,  2166,  1012,   102],
        [  101,  1045,  5223,  2023,  2061,  2172,   999,   102,     0,     0,     0,     0,     0,     0,     0,     0]
    ]),
    'attention_mask': tensor([
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
    ])
}
```

The output itself is a dictionary containing two keys, input_ids and attention_mask. input_ids contains two rows of integers (one for each sentence) that are the unique identifiers of the tokens in each sentence. We’ll explain what the attention_mask is later in this chapter.

Going through the model

We can download our pretrained model the same way we did with our tokenizer. 🤗 Transformers provides an AutoModel class which also has a from_pretrained() method:

```
from transformers import AutoModel

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModel.from_pretrained(checkpoint)
```

In this code snippet, we have downloaded the same checkpoint we used in our pipeline before (it should actually have been cached already) and instantiated a model with it.

This architecture contains only the base Transformer module: given some inputs, it outputs what we’ll call hidden states, also known as features. For each model input, we’ll retrieve a high-dimensional vector representing the contextual understanding of that input by the Transformer model.

If this doesn’t make sense, don’t worry about it. We’ll explain it all later.

While these hidden states can be useful on their own, they’re usually inputs to another part of the model, known as the head. In Chapter 1, the different tasks could have been performed with the same architecture, but each of these tasks will have a different head associated with it.

A high-dimensional vector?

The vector output by the Transformer module is usually large. It generally has three dimensions:

  • Batch size: The number of sequences processed at a time (2 in our example).
  • Sequence length: The length of the numerical representation of the sequence (16 in our example).
  • Hidden size: The vector dimension of each model input.

It is said to be “high dimensional” because of the last value. The hidden size can be very large (768 is common for smaller models, and in larger models this can reach 3072 or more).

We can see this if we feed the inputs we preprocessed to our model:

```python
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```
```
torch.Size([2, 16, 768])
```

Note that the outputs of 🤗 Transformers models behave like namedtuples or dictionaries. You can access the elements by attribute (as we did), by key (outputs["last_hidden_state"]), or even by index if you know exactly where the thing you are looking for is (outputs[0]).
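If you want to convince yourself that these access patterns are equivalent, a small check along these lines should work (a sketch, assuming the outputs object from the code above):

```python
import torch

# All three ways of accessing the hidden states point to the same tensor
same_by_key = torch.equal(outputs.last_hidden_state, outputs["last_hidden_state"])
same_by_index = torch.equal(outputs.last_hidden_state, outputs[0])
print(same_by_key, same_by_index)  # True True
```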

Model heads: Making sense out of numbers

The model heads take the high-dimensional vectors of hidden states as input and project them onto a different dimension. They are usually composed of one or a few linear layers:

\"A \"A

The output of the Transformer model is sent directly to the model head to be processed.

In this diagram, the model is represented by its embeddings layer and the subsequent layers. The embeddings layer converts each input ID in the tokenized input into a vector that represents the associated token. The subsequent layers manipulate those vectors using the attention mechanism to produce the final representation of the sentences.
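To make the idea of a head more concrete, here is a minimal sketch of what a sequence classification head could look like. This is an illustration only: the real DistilBERT head also includes a pre-classifier layer and dropout, and the exact layout and names vary from model to model.

```python
import torch.nn as nn


class ToySequenceClassificationHead(nn.Module):
    """A toy head: projects hidden states onto one score per label."""

    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states):
        # Use the hidden state of the first token as a summary of the whole sequence
        first_token = hidden_states[:, 0]      # shape: (batch_size, hidden_size)
        return self.classifier(first_token)    # shape: (batch_size, num_labels)
```

With a hidden size of 768 and two labels, a head like this turns a (2, 16, 768) tensor of hidden states into a (2, 2) tensor of scores, which is exactly the shape change we will observe below.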

There are many different architectures available in 🤗 Transformers, with each one designed around tackling a specific task. Here is a non-exhaustive list:

  • *Model (retrieve the hidden states)
  • *ForCausalLM
  • *ForMaskedLM
  • *ForMultipleChoice
  • *ForQuestionAnswering
  • *ForSequenceClassification
  • *ForTokenClassification
  • and others 🤗

For our example, we will need a model with a sequence classification head (to be able to classify the sentences as positive or negative). So, we won’t actually use the AutoModel class, but AutoModelForSequenceClassification:

```python
from transformers import AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(**inputs)
```

Now if we look at the shape of our outputs, the dimensionality will be much lower: the model head takes as input the high-dimensional vectors we saw before, and outputs vectors containing two values (one per label):

print(outputs.logits.shape)
torch.Size([2, 2])

Since we have just two sentences and two labels, the result we get from our model is of shape 2 x 2.

Postprocessing the output

The values we get as output from our model don’t necessarily make sense by themselves. Let’s take a look:

```python
print(outputs.logits)
```
```
tensor([[-1.5607,  1.6123],
        [ 4.1692, -3.3464]], grad_fn=<AddmmBackward>)
```

Our model predicted [-1.5607, 1.6123] for the first sentence and [ 4.1692, -3.3464] for the second one. Those are not probabilities but logits, the raw, unnormalized scores output by the last layer of the model. To be converted to probabilities, they need to go through a SoftMax layer (all 🤗 Transformers models output logits, because the loss function used for training generally fuses the last activation function, such as SoftMax, with the actual loss function, such as cross entropy):

```python
import torch

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions)
```
```
tensor([[4.0195e-02, 9.5980e-01],
        [9.9946e-01, 5.4418e-04]], grad_fn=<SoftmaxBackward>)
```

Now we can see that the model predicted [0.0402, 0.9598] for the first sentence and [0.9995, 0.0005] for the second one. These are recognizable probability scores.

To get the labels corresponding to each position, we can inspect the id2label attribute of the model config (more on this in the next section):

model.config.id2label
{0: 'NEGATIVE', 1: 'POSITIVE'}
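To pair each probability with its label programmatically, a small loop like the following does the trick (a sketch, reusing the predictions and model objects from above):

```python
import torch

for i, probs in enumerate(predictions):
    label_id = int(torch.argmax(probs))
    print(f"Sentence {i}: {model.config.id2label[label_id]} ({probs[label_id].item():.4f})")
# Sentence 0: POSITIVE (0.9598)
# Sentence 1: NEGATIVE (0.9995)
```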

Now we can conclude that the model predicted the following:

  • First sentence: NEGATIVE: 0.0402, POSITIVE: 0.9598
  • Second sentence: NEGATIVE: 0.9995, POSITIVE: 0.0005

We have successfully reproduced the three steps of the pipeline: preprocessing with tokenizers, passing the inputs through the model, and postprocessing! Now let’s take some time to dive deeper into each of those steps.

✏️ Try it out! Choose two (or more) texts of your own and run them through the sentiment-analysis pipeline. Then replicate the steps you saw here yourself and check that you obtain the same results!

Tokenizers

\"Ask \"Open \"Open

Tokenizers are one of the core components of the NLP pipeline. They serve one purpose: to translate text into data that can be processed by the model. Models can only process numbers, so tokenizers need to convert our text inputs to numerical data. In this section, we’ll explore exactly what happens in the tokenization pipeline.

In NLP tasks, the data that is generally processed is raw text. Here’s an example of such text:

Jim Henson was a puppeteer

However, models can only process numbers, so we need to find a way to convert the raw text to numbers. That’s what the tokenizers do, and there are a lot of ways to go about this. The goal is to find the most meaningful representation — that is, the one that makes the most sense to the model — and, if possible, the smallest representation.

Let’s take a look at some examples of tokenization algorithms, and try to answer some of the questions you may have about tokenization.

Word-based

The first type of tokenizer that comes to mind is word-based. It’s generally very easy to set up and use with only a few rules, and it often yields decent results. For example, in the image below, the goal is to split the raw text into words and find a numerical representation for each of them:

\"An \"An

There are different ways to split the text. For example, we could use whitespace to tokenize the text into words by applying Python’s split() function:

```python
tokenized_text = "Jim Henson was a puppeteer".split()
print(tokenized_text)
```
```
['Jim', 'Henson', 'was', 'a', 'puppeteer']
```

There are also variations of word tokenizers that have extra rules for punctuation. With this kind of tokenizer, we can end up with some pretty large “vocabularies,” where a vocabulary is defined by the total number of independent tokens that we have in our corpus.

Each word gets assigned an ID, starting from 0 and going up to the size of the vocabulary. The model uses these IDs to identify each word.

If we want to completely cover a language with a word-based tokenizer, we’ll need to have an identifier for each word in the language, which will generate a huge amount of tokens. For example, there are over 500,000 words in the English language, so to build a map from each word to an input ID we’d need to keep track of that many IDs. Furthermore, words like “dog” are represented differently from words like “dogs”, and the model will initially have no way of knowing that “dog” and “dogs” are similar: it will identify the two words as unrelated. The same applies to other similar words, like “run” and “running”, which the model will not see as being similar initially.

Finally, we need a custom token to represent words that are not in our vocabulary. This is known as the “unknown” token, often represented as “[UNK]” or “<unk>”. It’s generally a bad sign if you see that the tokenizer is producing a lot of these tokens, as it wasn’t able to retrieve a sensible representation of a word and you’re losing information along the way. The goal when crafting the vocabulary is to do it in such a way that the tokenizer tokenizes as few words as possible into the unknown token.
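To make this concrete, here is a toy word-based vocabulary with an unknown-token fallback. This is just an illustration, not something from 🤗 Transformers:

```python
# Toy vocabulary mapping words to IDs, with [UNK] for anything unseen
vocab = {"[UNK]": 0, "Jim": 1, "Henson": 2, "was": 3, "a": 4, "puppeteer": 5}

def word_encode(text):
    return [vocab.get(word, vocab["[UNK]"]) for word in text.split()]

print(word_encode("Jim Henson was a famous puppeteer"))
# [1, 2, 3, 4, 0, 5]  -- "famous" is not in the vocabulary, so it becomes [UNK]
```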

One way to reduce the amount of unknown tokens is to go one level deeper, using a character-based tokenizer.

Character-based

Character-based tokenizers split the text into characters, rather than words. This has two primary benefits:

  • The vocabulary is much smaller.
  • There are far fewer out-of-vocabulary (unknown) tokens, since every word can be built from characters.

But here too some questions arise concerning spaces and punctuation:

\"An \"An

This approach isn’t perfect either. Since the representation is now based on characters rather than words, one could argue that, intuitively, it’s less meaningful: each character doesn’t mean a lot on its own, whereas that is the case with words. However, this again differs according to the language; in Chinese, for example, each character carries more information than a character in a Latin language.

Another thing to consider is that we’ll end up with a very large number of tokens to be processed by our model: whereas a word would only be a single token with a word-based tokenizer, it can easily turn into 10 or more tokens when converted into characters.
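As a rough illustration of that token-count blow-up (a toy example, not how a real character-based tokenizer is implemented):

```python
word = "tokenization"
char_tokens = list(word)
print(char_tokens)       # ['t', 'o', 'k', 'e', 'n', 'i', 'z', 'a', 't', 'i', 'o', 'n']
print(len(char_tokens))  # 12 tokens for what a word-based tokenizer would treat as 1
```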

To get the best of both worlds, we can use a third technique that combines the two approaches: subword tokenization.

Subword tokenization

Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller subwords, but rare words should be decomposed into meaningful subwords.

For instance, “annoyingly” might be considered a rare word and could be decomposed into “annoying” and “ly”. These are both likely to appear more frequently as standalone subwords, while at the same time the meaning of “annoyingly” is kept by the composite meaning of “annoying” and “ly”.

Here is an example showing how a subword tokenization algorithm would tokenize the sequence “Let’s do tokenization!“:

\"A \"A

These subwords end up providing a lot of semantic meaning: for instance, in the example above “tokenization” was split into “token” and “ization”, two tokens that have a semantic meaning while being space-efficient (only two tokens are needed to represent a long word). This allows us to have relatively good coverage with small vocabularies, and close to no unknown tokens.

This approach is especially useful in agglutinative languages such as Turkish, where you can form (almost) arbitrarily long complex words by stringing together subwords.

And more!

Unsurprisingly, there are many more techniques out there. To name a few:

  • Byte-level BPE, as used in GPT-2
  • WordPiece, as used in BERT
  • SentencePiece or Unigram, as used in several multilingual models

You should now have sufficient knowledge of how tokenizers work to get started with the API.

Loading and saving

Loading and saving tokenizers is as simple as it is with models. Actually, it’s based on the same two methods: from_pretrained() and save_pretrained(). These methods will load or save the algorithm used by the tokenizer (a bit like the architecture of the model) as well as its vocabulary (a bit like the weights of the model).

Loading the BERT tokenizer trained with the same checkpoint as BERT is done the same way as loading the model, except we use the BertTokenizer class:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
```

Similar to AutoModel, the AutoTokenizer class will grab the proper tokenizer class in the library based on the checkpoint name, and can be used directly with any checkpoint:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```

We can now use the tokenizer as shown in the previous section:

tokenizer(\"Using a Transformer network is simple\")
{'input_ids': [101, 7993, 170, 11303, 1200, 2443, 1110, 3014, 102],\n 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0],\n 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}

Saving a tokenizer is identical to saving a model:

```python
tokenizer.save_pretrained("directory_on_my_computer")
```
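The directory created by save_pretrained() can later be loaded back with from_pretrained(); the directory name below is just the example used above:

```python
# Reload the tokenizer from the local directory we just saved to
reloaded_tokenizer = AutoTokenizer.from_pretrained("directory_on_my_computer")
```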

We’ll talk more about token_type_ids in Chapter 3, and we’ll explain the attention_mask key a little later. First, let’s see how the input_ids are generated. To do this, we’ll need to look at the intermediate methods of the tokenizer.

Encoding

Translating text to numbers is known as encoding. Encoding is done in a two-step process: the tokenization, followed by the conversion to input IDs.

As we’ve seen, the first step is to split the text into words (or parts of words, punctuation symbols, etc.), usually called tokens. There are multiple rules that can govern that process, which is why we need to instantiate the tokenizer using the name of the model, to make sure we use the same rules that were used when the model was pretrained.

The second step is to convert those tokens into numbers, so we can build a tensor out of them and feed them to the model. To do this, the tokenizer has a vocabulary, which is the part we download when we instantiate it with the from_pretrained() method. Again, we need to use the same vocabulary used when the model was pretrained.

To get a better understanding of the two steps, we’ll explore them separately. Note that we will use some methods that perform parts of the tokenization pipeline separately to show you the intermediate results of those steps, but in practice, you should call the tokenizer directly on your inputs (as shown in section 2).

Tokenization

The tokenization process is done by the tokenize() method of the tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sequence = "Using a Transformer network is simple"
tokens = tokenizer.tokenize(sequence)

print(tokens)
```

The output of this method is a list of strings, or tokens:

['Using', 'a', 'transform', '##er', 'network', 'is', 'simple']

This tokenizer is a subword tokenizer: it splits the words until it obtains tokens that can be represented by its vocabulary. That’s the case here with transformer, which is split into two tokens: transform and ##er.

From tokens to input IDs

The conversion to input IDs is handled by the convert_tokens_to_ids() tokenizer method:

```python
ids = tokenizer.convert_tokens_to_ids(tokens)

print(ids)
```
```
[7993, 170, 11303, 1200, 2443, 1110, 3014]
```

These outputs, once converted to the appropriate framework tensor, can then be used as inputs to a model as seen earlier in this chapter.
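For example, here is a minimal way to do that conversion with PyTorch. The extra brackets add a batch dimension, which, as we’ll see in the next section, the model expects:

```python
import torch

# Wrap the IDs in a list to create a batch of one sequence, then build a tensor
input_tensor = torch.tensor([ids])
print(input_tensor.shape)  # torch.Size([1, 7])
```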

✏️ Try it out! Replicate the two last steps (tokenization and conversion to input IDs) on the input sentences we used in section 2 (“I’ve been waiting for a HuggingFace course my whole life.” and “I hate this so much!”). Check that you get the same input IDs we got earlier!

Decoding

Decoding is going the other way around: from vocabulary indices, we want to get a string. This can be done with the decode() method as follows:

```python
decoded_string = tokenizer.decode([7993, 170, 11303, 1200, 2443, 1110, 3014])
print(decoded_string)
```
```
'Using a Transformer network is simple'
```

Note that the decode method not only converts the indices back to tokens, but also groups together the tokens that were part of the same words to produce a readable sentence. This behavior will be extremely useful when we use models that predict new text (either text generated from a prompt, or for sequence-to-sequence problems like translation or summarization).
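For instance, decoding just the two IDs that made up “transformer” glues the “##er” piece back onto “transform”:

```python
# The "##er" continuation token is merged back onto "transform" when decoding
print(tokenizer.decode([11303, 1200]))
# 'transformer'
```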

By now you should understand the atomic operations a tokenizer can handle: tokenization, conversion to IDs, and converting IDs back to a string. However, we’ve only scratched the surface. In the following section, we’ll take our approach to its limits and take a look at how to overcome them.

Handling multiple sequences

\"Ask \"Open \"Open

In the previous section, we explored the simplest of use cases: doing inference on a single sequence of a small length. However, some questions emerge already:

  • How do we handle multiple sequences?
  • How do we handle multiple sequences of different lengths?
  • Are vocabulary indices the only inputs that allow a model to work well?
  • Is there such a thing as too long a sequence?

Let’s see what kinds of problems these questions pose, and how we can solve them using the 🤗 Transformers API.

Models expect a batch of inputs

In the previous exercise you saw how sequences get translated into lists of numbers. Let’s convert this list of numbers to a tensor and send it to the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.tensor(ids)
# This line will fail.
model(input_ids)
```
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Oh no! Why did this fail? We followed the steps from the pipeline in section 2.

The problem is that we sent a single sequence to the model, whereas 🤗 Transformers models expect multiple sentences by default. Here we tried to do everything the tokenizer did behind the scenes when we applied it to a sequence. But if you look closely, you’ll see that the tokenizer didn’t just convert the list of input IDs into a tensor, it added a dimension on top of it:

```python
tokenized_inputs = tokenizer(sequence, return_tensors="pt")
print(tokenized_inputs["input_ids"])
```
```
tensor([[  101,  1045,  1005,  2310,  2042,  3403,  2005,  1037, 17662, 12172,
          2607,  2026,  2878,  2166,  1012,   102]])
```

Let’s try again and add a new dimension:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)

input_ids = torch.tensor([ids])
print("Input IDs:", input_ids)

output = model(input_ids)
print("Logits:", output.logits)
```

We print the input IDs as well as the resulting logits — here’s the output:

```
Input IDs: [[ 1045,  1005,  2310,  2042,  3403,  2005,  1037, 17662, 12172,  2607, 2026,  2878,  2166,  1012]]
Logits: [[-2.7276,  2.8789]]
```

Batching is the act of sending multiple sentences through the model, all at once. If you only have one sentence, you can just build a batch with a single sequence:

batched_ids = [ids, ids]

This is a batch of two identical sequences!

✏️ Try it out! Convert this batched_ids list into a tensor and pass it through your model. Check that you obtain the same logits as before (but twice)!

Batching allows the model to work when you feed it multiple sentences. Using multiple sequences is just as simple as building a batch with a single sequence. There’s a second issue, though. When you’re trying to batch together two (or more) sentences, they might be of different lengths. If you’ve ever worked with tensors before, you know that they need to be of rectangular shape, so you won’t be able to convert the list of input IDs into a tensor directly. To work around this problem, we usually pad the inputs.

Padding the inputs

The following list of lists cannot be converted to a tensor:

```python
batched_ids = [
    [200, 200, 200],
    [200, 200]
]
```

In order to work around this, we’ll use padding to make our tensors have a rectangular shape. Padding makes sure all our sentences have the same length by adding a special word called the padding token to the sentences with fewer values. For example, if you have 10 sentences with 10 words and 1 sentence with 20 words, padding will ensure all the sentences have 20 words. In our example, the resulting tensor looks like this:

```python
padding_id = 100

batched_ids = [
    [200, 200, 200],
    [200, 200, padding_id],
]
```

The padding token ID can be found in tokenizer.pad_token_id. Let’s use it and send our two sentences through the model individually and batched together:

```python
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence1_ids = [[200, 200, 200]]
sequence2_ids = [[200, 200]]
batched_ids = [
    [200, 200, 200],
    [200, 200, tokenizer.pad_token_id],
]

print(model(torch.tensor(sequence1_ids)).logits)
print(model(torch.tensor(sequence2_ids)).logits)
print(model(torch.tensor(batched_ids)).logits)
```
```
tensor([[ 1.5694, -1.3895]], grad_fn=<AddmmBackward>)
tensor([[ 0.5803, -0.4125]], grad_fn=<AddmmBackward>)
tensor([[ 1.5694, -1.3895],
        [ 1.3373, -1.2163]], grad_fn=<AddmmBackward>)
```

There’s something wrong with the logits in our batched predictions: the second row should be the same as the logits for the second sentence, but we’ve got completely different values!

This is because the key feature of Transformer models is attention layers that contextualize each token. These will take into account the padding tokens since they attend to all of the tokens of a sequence. To get the same result when passing individual sentences of different lengths through the model or when passing a batch with the same sentences and padding applied, we need to tell those attention layers to ignore the padding tokens. This is done by using an attention mask.

Attention masks

Attention masks are tensors with the exact same shape as the input IDs tensor, filled with 0s and 1s: 1s indicate the corresponding tokens should be attended to, and 0s indicate the corresponding tokens should not be attended to (i.e., they should be ignored by the attention layers of the model).

Let’s complete the previous example with an attention mask:

```python
batched_ids = [
    [200, 200, 200],
    [200, 200, tokenizer.pad_token_id],
]

attention_mask = [
    [1, 1, 1],
    [1, 1, 0],
]

outputs = model(torch.tensor(batched_ids), attention_mask=torch.tensor(attention_mask))
print(outputs.logits)
```
```
tensor([[ 1.5694, -1.3895],
        [ 0.5803, -0.4125]], grad_fn=<AddmmBackward>)
```

Now we get the same logits for the second sentence in the batch.

Notice how the last value of the second sequence is a padding ID, which corresponds to a 0 in the attention mask.
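In practice you rarely build this mask by hand: when you let the tokenizer do the padding, it produces the matching attention mask for you. A quick sketch, reusing the tokenizer loaded from the checkpoint above:

```python
batch = tokenizer(
    ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"],
    padding=True,
    return_tensors="pt",
)
# The zeros in the mask line up with the padding tokens added to the shorter sentence
print(batch["attention_mask"])
```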

✏️ Try it out! Apply the tokenization manually on the two sentences used in section 2 (“I’ve been waiting for a HuggingFace course my whole life.” and “I hate this so much!”). Pass them through the model and check that you get the same logits as in section 2. Now batch them together using the padding token, then create the proper attention mask. Check that you obtain the same results when going through the model!

Longer sequences

With Transformer models, there is a limit to the lengths of the sequences we can pass the models. Most models handle sequences of up to 512 or 1024 tokens, and will crash when asked to process longer sequences. There are two solutions to this problem:

  • Use a model with a longer supported sequence length.
  • Truncate your sequences.

Models have different supported sequence lengths, and some specialize in handling very long sequences. Longformer is one example, and another is LED. If you’re working on a task that requires very long sequences, we recommend you take a look at those models.

Otherwise, we recommend you truncate your sequences by specifying the max_sequence_length parameter:

sequence = sequence[:max_sequence_length]
Putting it all together

\"Ask \"Open \"Open

In the last few sections, we’ve been trying our best to do most of the work by hand. We’ve explored how tokenizers work and looked at tokenization, conversion to input IDs, padding, truncation, and attention masks.

However, as we saw in section 2, the 🤗 Transformers API can handle all of this for us with a high-level function that we’ll dive into here. When you call your tokenizer directly on the sentence, you get back inputs that are ready to pass through your model:

from transformers import AutoTokenizer\n\ncheckpoint = \"distilbert-base-uncased-finetuned-sst-2-english\"\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\n\nsequence = \"I've been waiting for a HuggingFace course my whole life.\"\n\nmodel_inputs = tokenizer(sequence)

Here, the model_inputs variable contains everything that’s necessary for a model to operate well. For DistilBERT, that includes the input IDs as well as the attention mask. Other models that accept additional inputs will also have those output by the tokenizer object.

As we’ll see in some examples below, this method is very powerful. First, it can tokenize a single sequence:

sequence = \"I've been waiting for a HuggingFace course my whole life.\"\n\nmodel_inputs = tokenizer(sequence)

It also handles multiple sequences at a time, with no change in the API:

sequences = [\"I've been waiting for a HuggingFace course my whole life.\", \"So have I!\"]\n\nmodel_inputs = tokenizer(sequences)

It can pad according to several objectives:

# Will pad the sequences up to the maximum sequence length\nmodel_inputs = tokenizer(sequences, padding=\"longest\")\n\n# Will pad the sequences up to the model max length\n# (512 for BERT or DistilBERT)\nmodel_inputs = tokenizer(sequences, padding=\"max_length\")\n\n# Will pad the sequences up to the specified max length\nmodel_inputs = tokenizer(sequences, padding=\"max_length\", max_length=8)

It can also truncate sequences:

sequences = [\"I've been waiting for a HuggingFace course my whole life.\", \"So have I!\"]\n\n# Will truncate the sequences that are longer than the model max length\n# (512 for BERT or DistilBERT)\nmodel_inputs = tokenizer(sequences, truncation=True)\n\n# Will truncate the sequences that are longer than the specified max length\nmodel_inputs = tokenizer(sequences, max_length=8, truncation=True)

The tokenizer object can handle the conversion to specific framework tensors, which can then be directly sent to the model. For example, in the following code sample we are prompting the tokenizer to return tensors from the different frameworks — \"pt\" returns PyTorch tensors, \"tf\" returns TensorFlow tensors, and \"np\" returns NumPy arrays:

sequences = [\"I've been waiting for a HuggingFace course my whole life.\", \"So have I!\"]\n\n# Returns PyTorch tensors\nmodel_inputs = tokenizer(sequences, padding=True, return_tensors=\"pt\")\n\n# Returns TensorFlow tensors\nmodel_inputs = tokenizer(sequences, padding=True, return_tensors=\"tf\")\n\n# Returns NumPy arrays\nmodel_inputs = tokenizer(sequences, padding=True, return_tensors=\"np\")

Special tokens

If we take a look at the input IDs returned by the tokenizer, we will see they are a tiny bit different from what we had earlier:

sequence = \"I've been waiting for a HuggingFace course my whole life.\"\n\nmodel_inputs = tokenizer(sequence)\nprint(model_inputs[\"input_ids\"])\n\ntokens = tokenizer.tokenize(sequence)\nids = tokenizer.convert_tokens_to_ids(tokens)\nprint(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]\n[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]

One token ID was added at the beginning, and one at the end. Let’s decode the two sequences of IDs above to see what this is about:

print(tokenizer.decode(model_inputs[\"input_ids\"]))\nprint(tokenizer.decode(ids))
\"[CLS] i've been waiting for a huggingface course my whole life. [SEP]\"\n\"i've been waiting for a huggingface course my whole life.\"

The tokenizer added the special word [CLS] at the beginning and the special word [SEP] at the end. This is because the model was pretrained with those, so to get the same results for inference we need to add them as well. Note that some models don’t add special words, or add different ones; models may also add these special words only at the beginning, or only at the end. In any case, the tokenizer knows which ones are expected and will deal with this for you.

Wrapping up: From tokenizer to model

Now that we’ve seen all the individual steps the tokenizer object uses when applied on texts, let’s see one final time how it can handle multiple sequences (padding!), very long sequences (truncation!), and multiple types of tensors with its main API:

import torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\ncheckpoint = \"distilbert-base-uncased-finetuned-sst-2-english\"\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\nmodel = AutoModelForSequenceClassification.from_pretrained(checkpoint)\nsequences = [\"I've been waiting for a HuggingFace course my whole life.\", \"So have I!\"]\n\ntokens = tokenizer(sequences, padding=True, truncation=True, return_tensors=\"pt\")\noutput = model(**tokens)
\n\t\t\t\t
\n\t\t
\n\t
\n\t
## [](#basic-usage-completed)Basic usage completed!

[Ask a Question](https://discuss.huggingface.co/t/chapter-2-questions)

Great job following the course up to here! To recap, in this chapter you:

- Learned the basic building blocks of a Transformer model.
- Learned what makes up a tokenization pipeline.
- Saw how to use a Transformer model in practice.
- Learned how to leverage a tokenizer to convert text to tensors that are understandable by the model.
- Set up a tokenizer and a model together to get from text to predictions.
- Learned the limitations of input IDs, and learned about attention masks.
- Played around with versatile and configurable tokenizer methods.

From now on, you should be able to freely navigate the 🤗 Transformers docs: the vocabulary will sound familiar, and you’ve already seen the methods that you’ll use the majority of the time.
## [](#end-of-chapter-quiz)End-of-chapter quiz

[Ask a Question](https://discuss.huggingface.co/t/chapter-2-questions)

### 1. What is the order of the language modeling pipeline?

### 2. How many dimensions does the tensor output by the base Transformer model have, and what are they?

### 3. Which of the following is an example of subword tokenization?

### 4. What is a model head?

### 5. What is an AutoModel?

### 6. What are the techniques to be aware of when batching sequences of different lengths together?

### 7. What is the point of applying a SoftMax function to the logits output by a sequence classification model?

### 8. What method is most of the tokenizer API centered around?

### 9. What does the `result` variable contain in this code sample?

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
result = tokenizer.tokenize("Hello!")
```
### 10. Is there something wrong with the following code?

```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("gpt2")

encoded = tokenizer("Hey!", return_tensors="pt")
result = model(**encoded)
```
## [](#introduction)Introduction

[Ask a Question](https://discuss.huggingface.co/t/chapter-3-questions)

In [Chapter 2](/course/chapter2) we explored how to use tokenizers and pretrained models to make predictions. But what if you want to fine-tune a pretrained model for your own dataset? That’s the topic of this chapter! You will learn:

- How to prepare a large dataset from the Hub
- How to use the high-level `Trainer` API to fine-tune a model
- How to use a custom training loop
- How to leverage the 🤗 Accelerate library to easily run that custom training loop on any distributed setup

In order to upload your trained checkpoints to the Hugging Face Hub, you will need a huggingface.co account: [create an account](https://huggingface.co/join).
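If you already have an account, one convenient way to authenticate before pushing checkpoints (a sketch; it assumes the `huggingface_hub` library installed alongside 🤗 Transformers) is:

```
# From a notebook: opens a prompt asking for your access token
from huggingface_hub import notebook_login

notebook_login()
```

From a terminal, `huggingface-cli login` does the same thing.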
## [](#processing-the-data)Processing the data

[Ask a Question](https://discuss.huggingface.co/t/chapter-3-questions) [Open In Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter3/section2_pt.ipynb) [Open In Studio Lab](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter3/section2_pt.ipynb)

Continuing with the example from the [previous chapter](/course/chapter2), here is how we would train a sequence classifier on one batch in PyTorch:

```
import torch
from transformers import AdamW, AutoTokenizer, AutoModelForSequenceClassification

# Same as before
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = [
    "I've been waiting for a HuggingFace course my whole life.",
    "This course is amazing!",
]
batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")

# This is new
batch["labels"] = torch.tensor([1, 1])

optimizer = AdamW(model.parameters())
loss = model(**batch).loss
loss.backward()
optimizer.step()
```

Of course, just training the model on two sentences is not going to yield very good results. To get better results, you will need to prepare a bigger dataset.

In this section we will use as an example the MRPC (Microsoft Research Paraphrase Corpus) dataset, introduced in a [paper](https://www.aclweb.org/anthology/I05-5002.pdf) by William B. Dolan and Chris Brockett. The dataset consists of 5,801 pairs of sentences, with a label indicating if they are paraphrases or not (i.e., if both sentences mean the same thing). We’ve selected it for this chapter because it’s a small dataset, so it’s easy to experiment with training on it.
### [](#loading-a-dataset-from-the-hub)Loading a dataset from the Hub

The Hub doesn’t just contain models; it also has multiple datasets in lots of different languages. You can browse the datasets [here](https://huggingface.co/datasets), and we recommend you try to load and process a new dataset once you have gone through this section (see the general documentation [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub)). But for now, let’s focus on the MRPC dataset! This is one of the 10 datasets composing the [GLUE benchmark](https://gluebenchmark.com/), which is an academic benchmark that is used to measure the performance of ML models across 10 different text classification tasks.

The 🤗 Datasets library provides a very simple command to download and cache a dataset on the Hub. We can download the MRPC dataset like this:

```
from datasets import load_dataset

raw_datasets = load_dataset("glue", "mrpc")
raw_datasets
```

```
DatasetDict({
    train: Dataset({
        features: ['sentence1', 'sentence2', 'label', 'idx'],
        num_rows: 3668
    })
    validation: Dataset({
        features: ['sentence1', 'sentence2', 'label', 'idx'],
        num_rows: 408
    })
    test: Dataset({
        features: ['sentence1', 'sentence2', 'label', 'idx'],
        num_rows: 1725
    })
})
```

As you can see, we get a `DatasetDict` object which contains the training set, the validation set, and the test set. Each of those contains several columns (`sentence1`, `sentence2`, `label`, and `idx`) and a variable number of rows, which are the number of elements in each set (so, there are 3,668 pairs of sentences in the training set, 408 in the validation set, and 1,725 in the test set).

This command downloads and caches the dataset, by default in _~/.cache/huggingface/datasets_. Recall from Chapter 2 that you can customize your cache folder by setting the `HF_HOME` environment variable.

We can access each pair of sentences in our `raw_datasets` object by indexing, like with a dictionary:

```
raw_train_dataset = raw_datasets["train"]
raw_train_dataset[0]
```

```
{'idx': 0,
 'label': 1,
 'sentence1': 'Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .',
 'sentence2': 'Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'}
```

We can see the labels are already integers, so we won’t have to do any preprocessing there. To know which integer corresponds to which label, we can inspect the `features` of our `raw_train_dataset`. This will tell us the type of each column:

```
raw_train_dataset.features
```

```
{'sentence1': Value(dtype='string', id=None),
 'sentence2': Value(dtype='string', id=None),
 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None),
 'idx': Value(dtype='int32', id=None)}
```

Behind the scenes, `label` is of type `ClassLabel`, and the mapping of integers to label names is stored in its `names` attribute. `0` corresponds to `not_equivalent`, and `1` corresponds to `equivalent`.

✏️ **Try it out!** Look at element 15 of the training set and element 87 of the validation set. What are their labels?
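One way to check your answers programmatically (a sketch, not the course’s solution) is to use the `ClassLabel.int2str()` method to map an integer label back to its name:

```
label_feature = raw_train_dataset.features["label"]

# int2str() maps a class index back to its human-readable name
print(label_feature.int2str(raw_train_dataset[15]["label"]))
print(label_feature.int2str(raw_datasets["validation"][87]["label"]))
```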
### [](#preprocessing-a-dataset)Preprocessing a dataset

To preprocess the dataset, we need to convert the text to numbers the model can make sense of. As you saw in the [previous chapter](/course/chapter2), this is done with a tokenizer. We can feed the tokenizer one sentence or a list of sentences, so we can directly tokenize all the first sentences and all the second sentences of each pair like this:

```
from transformers import AutoTokenizer

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenized_sentences_1 = tokenizer(raw_datasets["train"]["sentence1"])
tokenized_sentences_2 = tokenizer(raw_datasets["train"]["sentence2"])
```

However, we can’t just pass two sequences to the model and get a prediction of whether the two sentences are paraphrases or not. We need to handle the two sequences as a pair, and apply the appropriate preprocessing. Fortunately, the tokenizer can also take a pair of sequences and prepare it the way our BERT model expects:

```
inputs = tokenizer("This is the first sentence.", "This is the second one.")
inputs
```

```
{
  'input_ids': [101, 2023, 2003, 1996, 2034, 6251, 1012, 102, 2023, 2003, 1996, 2117, 2028, 1012, 102],
  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
  'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
}
```

We discussed the `input_ids` and `attention_mask` keys in [Chapter 2](/course/chapter2), but we put off talking about `token_type_ids`. In this example, this is what tells the model which part of the input is the first sentence and which is the second sentence.

✏️ **Try it out!** Take element 15 of the training set and tokenize the two sentences separately and as a pair. What’s the difference between the two results?

If we decode the IDs inside `input_ids` back to words:

```
tokenizer.convert_ids_to_tokens(inputs["input_ids"])
```

we will get:

```
['[CLS]', 'this', 'is', 'the', 'first', 'sentence', '.', '[SEP]', 'this', 'is', 'the', 'second', 'one', '.', '[SEP]']
```

So we see the model expects the inputs to be of the form `[CLS] sentence1 [SEP] sentence2 [SEP]` when there are two sentences. Aligning this with the `token_type_ids` gives us:

```
['[CLS]', 'this', 'is', 'the', 'first', 'sentence', '.', '[SEP]', 'this', 'is', 'the', 'second', 'one', '.', '[SEP]']
[      0,      0,    0,     0,       0,          0,   0,       0,      1,    1,     1,        1,     1,   1,       1]
```

As you can see, the parts of the input corresponding to `[CLS] sentence1 [SEP]` all have a token type ID of `0`, while the other parts, corresponding to `sentence2 [SEP]`, all have a token type ID of `1`.

Note that if you select a different checkpoint, you won’t necessarily have the `token_type_ids` in your tokenized inputs (for instance, they’re not returned if you use a DistilBERT model). They are only returned when the model will know what to do with them, because it has seen them during its pretraining.

Here, BERT is pretrained with token type IDs, and on top of the masked language modeling objective we talked about in [Chapter 1](/course/chapter1), it has an additional objective called _next sentence prediction_. The goal with this task is to model the relationship between pairs of sentences.

With next sentence prediction, the model is provided pairs of sentences (with randomly masked tokens) and asked to predict whether the second sentence follows the first. To make the task non-trivial, half of the time the sentences follow each other in the original document they were extracted from, and the other half of the time the two sentences come from two different documents.
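Returning to the practical point about checkpoints: here is a small sketch (not from the course text) showing that a DistilBERT tokenizer indeed returns no `token_type_ids`, unlike its BERT counterpart:

```
from transformers import AutoTokenizer

bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
distilbert_tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

pair = ("This is the first sentence.", "This is the second one.")
print(bert_tokenizer(*pair).keys())        # includes token_type_ids
print(distilbert_tokenizer(*pair).keys())  # no token_type_ids
```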
In general, you don’t need to worry about whether or not there are `token_type_ids` in your tokenized inputs: as long as you use the same checkpoint for the tokenizer and the model, everything will be fine as the tokenizer knows what to provide to its model.

Now that we have seen how our tokenizer can deal with one pair of sentences, we can use it to tokenize our whole dataset: like in the [previous chapter](/course/chapter2), we can feed the tokenizer a list of pairs of sentences by giving it the list of first sentences, then the list of second sentences. This is also compatible with the padding and truncation options we saw in [Chapter 2](/course/chapter2). So, one way to preprocess the training dataset is:

```
tokenized_dataset = tokenizer(
    raw_datasets["train"]["sentence1"],
    raw_datasets["train"]["sentence2"],
    padding=True,
    truncation=True,
)
```

This works well, but it has the disadvantage of returning a dictionary (with our keys, `input_ids`, `attention_mask`, and `token_type_ids`, and values that are lists of lists). It will also only work if you have enough RAM to store your whole dataset during the tokenization (whereas the datasets from the 🤗 Datasets library are [Apache Arrow](https://arrow.apache.org/) files stored on the disk, so you only keep the samples you ask for loaded in memory).

To keep the data as a dataset, we will use the [`Dataset.map()`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) method. This also allows us some extra flexibility, if we need more preprocessing done than just tokenization. The `map()` method works by applying a function on each element of the dataset, so let’s define a function that tokenizes our inputs:

```
def tokenize_function(example):
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
```

This function takes a dictionary (like the items of our dataset) and returns a new dictionary with the keys `input_ids`, `attention_mask`, and `token_type_ids`. Note that it also works if the `example` dictionary contains several samples (each key as a list of sentences) since the `tokenizer` works on lists of pairs of sentences, as seen before. This will allow us to use the option `batched=True` in our call to `map()`, which will greatly speed up the tokenization. The `tokenizer` is backed by a tokenizer written in Rust from the [🤗 Tokenizers](https://github.com/huggingface/tokenizers) library. This tokenizer can be very fast, but only if we give it lots of inputs at once.

Note that we’ve left the `padding` argument out in our tokenization function for now. This is because padding all the samples to the maximum length is not efficient: it’s better to pad the samples when we’re building a batch, as then we only need to pad to the maximum length in that batch, and not the maximum length in the entire dataset. This can save a lot of time and processing power when the inputs have very variable lengths!

Here is how we apply the tokenization function on all our datasets at once. We’re using `batched=True` in our call to `map` so the function is applied to multiple elements of our dataset at once, and not on each element separately. This allows for faster preprocessing.
```
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets
```

The way the 🤗 Datasets library applies this processing is by adding new fields to the datasets, one for each key in the dictionary returned by the preprocessing function:

```
DatasetDict({
    train: Dataset({
        features: ['attention_mask', 'idx', 'input_ids', 'label', 'sentence1', 'sentence2', 'token_type_ids'],
        num_rows: 3668
    })
    validation: Dataset({
        features: ['attention_mask', 'idx', 'input_ids', 'label', 'sentence1', 'sentence2', 'token_type_ids'],
        num_rows: 408
    })
    test: Dataset({
        features: ['attention_mask', 'idx', 'input_ids', 'label', 'sentence1', 'sentence2', 'token_type_ids'],
        num_rows: 1725
    })
})
```

You can even use multiprocessing when applying your preprocessing function with `map()` by passing along a `num_proc` argument. We didn’t do this here because the 🤗 Tokenizers library already uses multiple threads to tokenize our samples faster, but if you are not using a fast tokenizer backed by this library, this could speed up your preprocessing.

Our `tokenize_function` returns a dictionary with the keys `input_ids`, `attention_mask`, and `token_type_ids`, so those three fields are added to all splits of our dataset. Note that we could also have changed existing fields if our preprocessing function returned a new value for an existing key in the dataset to which we applied `map()`.

The last thing we will need to do is pad all the examples to the length of the longest element when we batch elements together — a technique we refer to as _dynamic padding_.

### [](#dynamic-padding)Dynamic padding

The function that is responsible for putting together samples inside a batch is called a _collate function_. It’s an argument you can pass when you build a `DataLoader`, the default being a function that will just convert your samples to PyTorch tensors and concatenate them (recursively if your elements are lists, tuples, or dictionaries). This won’t be possible in our case since the inputs we have won’t all be of the same size. We have deliberately postponed the padding, to only apply it as necessary on each batch and avoid having over-long inputs with a lot of padding. This will speed up training by quite a bit, but note that if you’re training on a TPU it can cause problems — TPUs prefer fixed shapes, even when that requires extra padding.

To do this in practice, we have to define a collate function that will apply the correct amount of padding to the items of the dataset we want to batch together. Fortunately, the 🤗 Transformers library provides us with such a function via `DataCollatorWithPadding`. It takes a tokenizer when you instantiate it (to know which padding token to use, and whether the model expects padding to be on the left or on the right of the inputs) and will do everything you need:

```
from transformers import DataCollatorWithPadding

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```
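For context, this collator is ultimately meant to be passed to a PyTorch `DataLoader` as its `collate_fn`. A minimal sketch of what that looks like (not part of the course text; it assumes the `tokenized_datasets` from above and drops the string columns, since tensors can’t hold strings):

```
from torch.utils.data import DataLoader

train_dataset = tokenized_datasets["train"].remove_columns(["idx", "sentence1", "sentence2"])
train_dataloader = DataLoader(
    train_dataset, batch_size=8, shuffle=True, collate_fn=data_collator
)

# Each batch is padded to the length of its own longest sample
batch = next(iter(train_dataloader))
print({k: v.shape for k, v in batch.items()})
```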
To test this new toy, let’s grab a few samples from our training set that we would like to batch together. Here, we remove the columns `idx`, `sentence1`, and `sentence2` as they won’t be needed and contain strings (and we can’t create tensors with strings) and have a look at the lengths of each entry in the batch:

```
samples = tokenized_datasets["train"][:8]
samples = {k: v for k, v in samples.items() if k not in ["idx", "sentence1", "sentence2"]}
[len(x) for x in samples["input_ids"]]
```

```
[50, 59, 47, 67, 59, 50, 62, 32]
```

No surprise, we get samples of varying length, from 32 to 67. Dynamic padding means the samples in this batch should all be padded to a length of 67, the maximum length inside the batch. Without dynamic padding, all of the samples would have to be padded to the maximum length in the whole dataset, or the maximum length the model can accept. Let’s double-check that our `data_collator` is dynamically padding the batch properly:

```
batch = data_collator(samples)
{k: v.shape for k, v in batch.items()}
```

```
{'attention_mask': torch.Size([8, 67]),
 'input_ids': torch.Size([8, 67]),
 'token_type_ids': torch.Size([8, 67]),
 'labels': torch.Size([8])}
```

Looking good! Now that we’ve gone from raw text to batches our model can deal with, we’re ready to fine-tune it!

✏️ **Try it out!** Replicate the preprocessing on the GLUE SST-2 dataset. It’s a little bit different since it’s composed of single sentences instead of pairs, but the rest of what we did should look the same. For a harder challenge, try to write a preprocessing function that works on any of the GLUE tasks.
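As one possible starting point for that exercise (a sketch rather than the official solution; it assumes that the GLUE SST-2 dataset on the Hub stores its text in a single `sentence` column):

```
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

raw_sst2 = load_dataset("glue", "sst2")
sst2_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")


def sst2_tokenize_function(example):
    # Single sentences this time, so only one text argument
    return sst2_tokenizer(example["sentence"], truncation=True)


tokenized_sst2 = raw_sst2.map(sst2_tokenize_function, batched=True)
sst2_data_collator = DataCollatorWithPadding(tokenizer=sst2_tokenizer)
```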
## [](#fine-tuning-a-model-with-the-trainer-api)Fine-tuning a model with the Trainer API

[Ask a Question](https://discuss.huggingface.co/t/chapter-3-questions) [Open In Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter3/section3.ipynb) [Open In Studio Lab](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter3/section3.ipynb)

🤗 Transformers provides a `Trainer` class to help you fine-tune any of the pretrained models it provides on your dataset. Once you’ve done all the data preprocessing work in the last section, you have just a few steps left to define the `Trainer`. The hardest part is likely to be preparing the environment to run `Trainer.train()`, as it will run very slowly on a CPU. If you don’t have a GPU set up, you can get access to free GPUs or TPUs on [Google Colab](https://colab.research.google.com/).
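A quick way to check whether PyTorch can actually see a GPU (a sketch, not part of the original text):

```
import torch

# True on a machine (or Colab runtime) with a CUDA GPU; the Trainer will
# use it automatically when available
print(torch.cuda.is_available())
```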
The code examples below assume you have already executed the examples in the previous section. Here is a short summary recapping what you need:

```
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

raw_datasets = load_dataset("glue", "mrpc")
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)


def tokenize_function(example):
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)


tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```

### [](#training)Training

The first step before we can define our `Trainer` is to define a `TrainingArguments` class that will contain all the hyperparameters the `Trainer` will use for training and evaluation. The only argument you have to provide is a directory where the trained model will be saved, as well as the checkpoints along the way. For all the rest, you can leave the defaults, which should work pretty well for a basic fine-tuning.

```
from transformers import TrainingArguments

training_args = TrainingArguments("test-trainer")
```

💡 If you want to automatically upload your model to the Hub during training, pass along `push_to_hub=True` in the `TrainingArguments`. We will learn more about this in [Chapter 4](/course/chapter4/3).

The second step is to define our model. As in the [previous chapter](/course/chapter2), we will use the `AutoModelForSequenceClassification` class, with two labels:

```
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
```

You will notice that unlike in [Chapter 2](/course/chapter2), you get a warning after instantiating this pretrained model. This is because BERT has not been pretrained on classifying pairs of sentences, so the head of the pretrained model has been discarded and a new head suitable for sequence classification has been added instead. The warnings indicate that some weights were not used (the ones corresponding to the dropped pretraining head) and that some others were randomly initialized (the ones for the new head). It concludes by encouraging you to train the model, which is exactly what we are going to do now.

Once we have our model, we can define a `Trainer` by passing it all the objects constructed up to now — the `model`, the `training_args`, the training and validation datasets, our `data_collator`, and our `tokenizer`:

```
from transformers import Trainer

trainer = Trainer(
    model,
    training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
```

Note that when you pass the `tokenizer` as we did here, the default `data_collator` used by the `Trainer` will be a `DataCollatorWithPadding` as defined previously, so you can skip the line `data_collator=data_collator` in this call. It was still important to show you this part of the processing in section 2!
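If you do want to deviate from the defaults mentioned earlier, `TrainingArguments` accepts the usual hyperparameters as keyword arguments. The values below are purely illustrative, not the course’s choices:

```
from transformers import TrainingArguments

training_args = TrainingArguments(
    "test-trainer",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
)
```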
We didn’t tell the `Trainer` to evaluate during training by setting `evaluation_strategy` to either `\"steps\"` (evaluate every `eval_steps`) or `\"epoch\"` (evaluate at the end of each epoch).\n2. We didn’t provide the `Trainer` with a `compute_metrics()` function to calculate a metric during said evaluation (otherwise the evaluation would just have printed the loss, which is not a very intuitive number).\n\n### [](#evaluation)Evaluation\n\nLet’s see how we can build a useful `compute_metrics()` function and use it the next time we train. The function must take an `EvalPrediction` object (which is a named tuple with a `predictions` field and a `label_ids` field) and will return a dictionary mapping strings to floats (the strings being the names of the metrics returned, and the floats their values). To get some predictions from our model, we can use the `Trainer.predict()` command:\n\n```\npredictions = trainer.predict(tokenized_datasets[\"validation\"])\nprint(predictions.predictions.shape, predictions.label_ids.shape)```\n\n```\n(408, 2) (408,)```\n\nThe output of the `predict()` method is another named tuple with three fields: `predictions`, `label_ids`, and `metrics`. The `metrics` field will just contain the loss on the dataset passed, as well as some time metrics (how long it took to predict, in total and on average). Once we complete our `compute_metrics()` function and pass it to the `Trainer`, that field will also contain the metrics returned by `compute_metrics()`.\n\nAs you can see, `predictions` is a two-dimensional array with shape 408 x 2 (408 being the number of elements in the dataset we used). Those are the logits for each element of the dataset we passed to `predict()` (as you saw in the [previous chapter](/course/chapter2), all Transformer models return logits). To transform them into predictions that we can compare to our labels, we need to take the index with the maximum value on the second axis:\n\n```\nimport numpy as np\n\npreds = np.argmax(predictions.predictions, axis=-1)```\n\nWe can now compare those `preds` to the labels. To build our `compute_metrics()` function, we will rely on the metrics from the 🤗 [Evaluate](https://github.com/huggingface/evaluate/) library. We can load the metrics associated with the MRPC dataset as easily as we loaded the dataset, this time with the `evaluate.load()` function. The object returned has a `compute()` method we can use to do the metric calculation:\n\n```\nimport evaluate\n\nmetric = evaluate.load(\"glue\", \"mrpc\")\nmetric.compute(predictions=preds, references=predictions.label_ids)```\n\n```\n{'accuracy': 0.8578431372549019, 'f1': 0.8996539792387542}```\n\nThe exact results you get may vary, as the random initialization of the model head might change the metrics it achieved. Here, we can see our model has an accuracy of 85.78% on the validation set and an F1 score of 89.97. Those are the two metrics used to evaluate results on the MRPC dataset for the GLUE benchmark. The table in the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf) reported an F1 score of 88.9 for the base model. 
Since we are fine-tuning the `bert-base-uncased` checkpoint with the default `Trainer` hyperparameters rather than the paper’s exact setup, ending up slightly above that number is not surprising.\n\nWrapping everything together, we get our `compute_metrics()` function:\n\n```\ndef compute_metrics(eval_preds):\n    metric = evaluate.load(\"glue\", \"mrpc\")\n    logits, labels = eval_preds\n    predictions = np.argmax(logits, axis=-1)\n    return metric.compute(predictions=predictions, references=labels)```\n\nAnd to see it used in action to report metrics at the end of each epoch, here is how we define a new `Trainer` with this `compute_metrics()` function:\n\n```\ntraining_args = TrainingArguments(\"test-trainer\", evaluation_strategy=\"epoch\")\nmodel = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)\n\ntrainer = Trainer(\n    model,\n    training_args,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"validation\"],\n    data_collator=data_collator,\n    tokenizer=tokenizer,\n    compute_metrics=compute_metrics,\n)```\n\nNote that we create a new `TrainingArguments` with its `evaluation_strategy` set to `\"epoch\"` and a new model — otherwise, we would just be continuing the training of the model we have already trained. To launch a new training run, we execute:\n\n```\ntrainer.train()```\n\nThis time, it will report the validation loss and metrics at the end of each epoch on top of the training loss. Again, the exact accuracy/F1 score you reach might be a bit different from what we found, because of the random head initialization of the model, but it should be in the same ballpark.\n\nThe `Trainer` will work out of the box on multiple GPUs or TPUs and provides lots of options, like mixed-precision training (use `fp16=True` in your training arguments). We will go over everything it supports in Chapter 10.\n\nThis concludes the introduction to fine-tuning using the `Trainer` API. An example of doing this for most common NLP tasks will be given in [Chapter 7](/course/chapter7), but for now let’s look at how to do the same thing in pure PyTorch.\n\n✏️ **Try it out!** Fine-tune a model on the GLUE SST-2 dataset, using the data processing you did in section 2.","html":"
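The defaults are fine for a first run, but `TrainingArguments` is also where the usual hyperparameters live if you want to tweak them later. As a purely illustrative sketch (the values below are arbitrary choices, not tuned settings from this chapter), it could look like this:\n\n```\nfrom transformers import TrainingArguments\n\ntraining_args = TrainingArguments(\n    \"test-trainer\",\n    evaluation_strategy=\"epoch\",  # evaluate at the end of each epoch\n    learning_rate=2e-5,  # initial learning rate for the optimizer\n    per_device_train_batch_size=16,  # training batch size per GPU/CPU\n    num_train_epochs=3,  # total number of training epochs\n    weight_decay=0.01,  # weight decay applied by the optimizer\n)```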
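If all you need after training are the validation metrics rather than the raw predictions, the `Trainer` also has an `evaluate()` method that runs the evaluation loop and returns the values produced by `compute_metrics()`. Here is a minimal sketch, assuming the `trainer` defined above has already been trained (the exact keys and numbers will depend on your run, and the save directory name is just an example):\n\n```\n# Run evaluation on the validation set and print the metrics dictionary\nmetrics = trainer.evaluate(tokenized_datasets[\"validation\"])\nprint(metrics)  # e.g. {'eval_loss': ..., 'eval_accuracy': ..., 'eval_f1': ..., ...}\n\n# Save the fine-tuned model to a local directory (the name is an arbitrary choice)\ntrainer.save_model(\"test-trainer/final\")```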
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:12.502Z"} {"title":"A full training - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter3/4?fw=pt","markdown":"## [](#a-full-training)A full training\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-3-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter3/section4.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter3/section4.ipynb)\n\nNow we’ll see how to achieve the same results as we did in the last section without using the `Trainer` class. Again, we assume you have done the data processing in section 2. Here is a short summary covering everything you will need:\n\n```\nfrom datasets import load_dataset\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\n\nraw_datasets = load_dataset(\"glue\", \"mrpc\")\ncheckpoint = \"bert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\n\n\ndef tokenize_function(example):\n return tokenizer(example[\"sentence1\"], example[\"sentence2\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)```\n\n### [](#prepare-for-training)Prepare for training\n\nBefore actually writing our training loop, we will need to define a few objects. The first ones are the dataloaders we will use to iterate over batches. But before we can define those dataloaders, we need to apply a bit of postprocessing to our `tokenized_datasets`, to take care of some things that the `Trainer` did for us automatically. 
Specifically, we need to:\n\n- Remove the columns corresponding to values the model does not expect (like the `sentence1` and `sentence2` columns).\n- Rename the column `label` to `labels` (because the model expects the argument to be named `labels`).\n- Set the format of the datasets so they return PyTorch tensors instead of lists.\n\nOur `tokenized_datasets` has one method for each of those steps:\n\n```\ntokenized_datasets = tokenized_datasets.remove_columns([\"sentence1\", \"sentence2\", \"idx\"])\ntokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\ntokenized_datasets.set_format(\"torch\")\ntokenized_datasets[\"train\"].column_names```\n\nWe can then check that the result only has columns that our model will accept:\n\n```\n[\"attention_mask\", \"input_ids\", \"labels\", \"token_type_ids\"]```\n\nNow that this is done, we can easily define our dataloaders:\n\n```\nfrom torch.utils.data import DataLoader\n\ntrain_dataloader = DataLoader(\n    tokenized_datasets[\"train\"], shuffle=True, batch_size=8, collate_fn=data_collator\n)\neval_dataloader = DataLoader(\n    tokenized_datasets[\"validation\"], batch_size=8, collate_fn=data_collator\n)```\n\nTo quickly check there is no mistake in the data processing, we can inspect a batch like this:\n\n```\nfor batch in train_dataloader:\n    break\n{k: v.shape for k, v in batch.items()}```\n\n```\n{'attention_mask': torch.Size([8, 65]),\n 'input_ids': torch.Size([8, 65]),\n 'labels': torch.Size([8]),\n 'token_type_ids': torch.Size([8, 65])}```\n\nNote that the actual shapes will probably be slightly different for you since we set `shuffle=True` for the training dataloader and we are padding to the maximum length inside the batch.\n\nNow that we’re completely finished with data preprocessing (a satisfying yet elusive goal for any ML practitioner), let’s turn to the model. We instantiate it exactly as we did in the previous section:\n\n```\nfrom transformers import AutoModelForSequenceClassification\n\nmodel = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)```\n\nTo make sure that everything will go smoothly during training, we pass our batch to this model:\n\n```\noutputs = model(**batch)\nprint(outputs.loss, outputs.logits.shape)```\n\n```\ntensor(0.5441, grad_fn=<NllLossBackward>) torch.Size([8, 2])```\n\nAll 🤗 Transformers models will return the loss when `labels` are provided, and we also get the logits (two for each input in our batch, so a tensor of size 8 x 2).\n\nWe’re almost ready to write our training loop! We’re just missing two things: an optimizer and a learning rate scheduler. Since we are trying to replicate what the `Trainer` was doing by hand, we will use the same defaults. The optimizer used by the `Trainer` is `AdamW`, which is the same as Adam, but with a twist for weight decay regularization (see [“Decoupled Weight Decay Regularization”](https://arxiv.org/abs/1711.05101) by Ilya Loshchilov and Frank Hutter):\n\n```\nfrom transformers import AdamW\n\noptimizer = AdamW(model.parameters(), lr=5e-5)```\n\nFinally, the learning rate scheduler used by default is just a linear decay from the maximum value (5e-5) to 0. To properly define it, we need to know the number of training steps we will take, which is the number of epochs we want to run multiplied by the number of training batches (which is the length of our training dataloader). 
The `Trainer` uses three epochs by default, so we will follow that:\n\n```\nfrom transformers import get_scheduler\n\nnum_epochs = 3\nnum_training_steps = num_epochs * len(train_dataloader)\nlr_scheduler = get_scheduler(\n    \"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=0,\n    num_training_steps=num_training_steps,\n)\nprint(num_training_steps)```\n\n```\n1377```\n\n### [](#the-training-loop)The training loop\n\nOne last thing: we will want to use the GPU if we have access to one (on a CPU, training might take several hours instead of a couple of minutes). To do this, we define a `device` we will put our model and our batches on:\n\n```\nimport torch\n\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\nmodel.to(device)\ndevice```\n\n```\ndevice(type='cuda')```\n\nWe are now ready to train! To get some sense of when training will be finished, we add a progress bar over our number of training steps, using the `tqdm` library:\n\n```\nfrom tqdm.auto import tqdm\n\nprogress_bar = tqdm(range(num_training_steps))\n\nmodel.train()\nfor epoch in range(num_epochs):\n    for batch in train_dataloader:\n        batch = {k: v.to(device) for k, v in batch.items()}\n        outputs = model(**batch)\n        loss = outputs.loss\n        loss.backward()\n\n        optimizer.step()\n        lr_scheduler.step()\n        optimizer.zero_grad()\n        progress_bar.update(1)```\n\nYou can see that the core of the training loop looks a lot like the one in the introduction. We didn’t ask for any reporting, so this training loop will not tell us anything about how the model fares. We need to add an evaluation loop for that.\n\n### [](#the-evaluation-loop)The evaluation loop\n\nAs we did earlier, we will use a metric provided by the 🤗 Evaluate library. We’ve already seen the `metric.compute()` method, but metrics can actually accumulate batches for us as we go over the prediction loop with the method `add_batch()`. Once we have accumulated all the batches, we can get the final result with `metric.compute()`. Here’s how to implement all of this in an evaluation loop:\n\n```\nimport evaluate\n\nmetric = evaluate.load(\"glue\", \"mrpc\")\nmodel.eval()\nfor batch in eval_dataloader:\n    batch = {k: v.to(device) for k, v in batch.items()}\n    with torch.no_grad():\n        outputs = model(**batch)\n\n    logits = outputs.logits\n    predictions = torch.argmax(logits, dim=-1)\n    metric.add_batch(predictions=predictions, references=batch[\"labels\"])\n\nmetric.compute()```\n\n```\n{'accuracy': 0.8431372549019608, 'f1': 0.8907849829351535}```\n\nAgain, your results will be slightly different because of the randomness in the model head initialization and the data shuffling, but they should be in the same ballpark.\n\n✏️ **Try it out!** Modify the previous training loop to fine-tune your model on the SST-2 dataset.\n\n### [](#supercharge-your-training-loop-with-accelerate)Supercharge your training loop with 🤗 Accelerate\n\nThe training loop we defined earlier works fine on a single CPU or GPU. But using the [🤗 Accelerate](https://github.com/huggingface/accelerate) library, with just a few adjustments we can enable distributed training on multiple GPUs or TPUs. 
Starting from the creation of the training and validation dataloaders, here is what our manual training loop looks like:\n\n```\nfrom transformers import AdamW, AutoModelForSequenceClassification, get_scheduler\n\nmodel = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)\noptimizer = AdamW(model.parameters(), lr=3e-5)\n\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\nmodel.to(device)\n\nnum_epochs = 3\nnum_training_steps = num_epochs * len(train_dataloader)\nlr_scheduler = get_scheduler(\n    \"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=0,\n    num_training_steps=num_training_steps,\n)\n\nprogress_bar = tqdm(range(num_training_steps))\n\nmodel.train()\nfor epoch in range(num_epochs):\n    for batch in train_dataloader:\n        batch = {k: v.to(device) for k, v in batch.items()}\n        outputs = model(**batch)\n        loss = outputs.loss\n        loss.backward()\n\n        optimizer.step()\n        lr_scheduler.step()\n        optimizer.zero_grad()\n        progress_bar.update(1)```\n\nAnd here are the changes:\n\n```\n+ from accelerate import Accelerator\n  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler\n\n+ accelerator = Accelerator()\n\n  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)\n  optimizer = AdamW(model.parameters(), lr=3e-5)\n\n- device = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\n- model.to(device)\n\n+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(\n+     train_dataloader, eval_dataloader, model, optimizer\n+ )\n\n  num_epochs = 3\n  num_training_steps = num_epochs * len(train_dataloader)\n  lr_scheduler = get_scheduler(\n      \"linear\",\n      optimizer=optimizer,\n      num_warmup_steps=0,\n      num_training_steps=num_training_steps\n  )\n\n  progress_bar = tqdm(range(num_training_steps))\n\n  model.train()\n  for epoch in range(num_epochs):\n      for batch in train_dataloader:\n-         batch = {k: v.to(device) for k, v in batch.items()}\n          outputs = model(**batch)\n          loss = outputs.loss\n-         loss.backward()\n+         accelerator.backward(loss)\n\n          optimizer.step()\n          lr_scheduler.step()\n          optimizer.zero_grad()\n          progress_bar.update(1)```\n\nThe first line to add is the import line. The second line instantiates an `Accelerator` object that will look at the environment and initialize the proper distributed setup. 🤗 Accelerate handles the device placement for you, so you can remove the lines that put the model on the device (or, if you prefer, change them to use `accelerator.device` instead of `device`).\n\nThen the main bulk of the work is done in the line that sends the dataloaders, the model, and the optimizer to `accelerator.prepare()`. This will wrap those objects in the proper container to make sure your distributed training works as intended. 
The remaining changes to make are removing the line that puts the batch on the `device` (again, if you want to keep this you can just change it to use `accelerator.device`) and replacing `loss.backward()` with `accelerator.backward(loss)`.\n\n⚠️ In order to benefit from the speed-up offered by Cloud TPUs, we recommend padding your samples to a fixed length with the `padding=\"max_length\"` and `max_length` arguments of the tokenizer.\n\nIf you’d like to copy and paste it to play around, here’s what the complete training loop looks like with 🤗 Accelerate:\n\n```\nfrom accelerate import Accelerator\nfrom transformers import AdamW, AutoModelForSequenceClassification, get_scheduler\n\naccelerator = Accelerator()\n\nmodel = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)\noptimizer = AdamW(model.parameters(), lr=3e-5)\n\ntrain_dl, eval_dl, model, optimizer = accelerator.prepare(\n    train_dataloader, eval_dataloader, model, optimizer\n)\n\nnum_epochs = 3\nnum_training_steps = num_epochs * len(train_dl)\nlr_scheduler = get_scheduler(\n    \"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=0,\n    num_training_steps=num_training_steps,\n)\n\nprogress_bar = tqdm(range(num_training_steps))\n\nmodel.train()\nfor epoch in range(num_epochs):\n    for batch in train_dl:\n        outputs = model(**batch)\n        loss = outputs.loss\n        accelerator.backward(loss)\n\n        optimizer.step()\n        lr_scheduler.step()\n        optimizer.zero_grad()\n        progress_bar.update(1)```\n\nPutting this in a `train.py` script will make that script runnable on any kind of distributed setup. To try it out in your distributed setup, run the command:\n\n```\naccelerate config```\n\nwhich will prompt you to answer a few questions and dump your answers in a configuration file used by this command:\n\n```\naccelerate launch train.py```\n\nwhich will launch the distributed training.\n\nIf you want to try this in a Notebook (for instance, to test it with TPUs on Colab), just paste the code in a `training_function()` and run a last cell with:\n\n```\nfrom accelerate import notebook_launcher\n\nnotebook_launcher(training_function)```\n\nYou can find more examples in the [🤗 Accelerate repo](https://github.com/huggingface/accelerate/tree/main/examples).","html":"
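To make the tip about Cloud TPUs above concrete, here is one way the `tokenize_function` from the preprocessing summary could be adapted so that every sample ends up with the same length (the `max_length` value of 128 is an arbitrary choice for MRPC, not a requirement):\n\n```\ndef tokenize_function(example):\n    # Pad everything to a fixed length (arbitrary choice here) so the TPU sees static shapes\n    return tokenizer(\n        example[\"sentence1\"],\n        example[\"sentence2\"],\n        truncation=True,\n        padding=\"max_length\",\n        max_length=128,\n    )```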
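The evaluation loop can be adapted in the same spirit: once the dataloaders have gone through `accelerator.prepare()`, each process only sees its own shard of the validation set, so predictions and labels have to be gathered across processes before being added to the metric. Here is a rough sketch building on the complete 🤗 Accelerate loop above (with several processes, the last batch may contain duplicated samples added for padding, so the final numbers can differ very slightly from a single-process run):\n\n```\nimport evaluate\nimport torch\n\nmetric = evaluate.load(\"glue\", \"mrpc\")\nmodel.eval()\nfor batch in eval_dl:\n    with torch.no_grad():\n        outputs = model(**batch)\n    predictions = torch.argmax(outputs.logits, dim=-1)\n    # Gather predictions and labels from every process before updating the metric\n    predictions, references = accelerator.gather((predictions, batch[\"labels\"]))\n    metric.add_batch(predictions=predictions, references=references)\n\nprint(metric.compute())```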
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:12.683Z"} {"title":"Fine-tuning, Check! - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter3/5?fw=pt","markdown":"![Hugging Face's logo](/front/assets/huggingface_logo-noborder.svg)\n\nJoin the Hugging Face community\n\nand get access to the augmented documentation experience\n\nCollaborate on models, datasets and Spaces\n\nFaster examples with accelerated inference\n\nSwitch between documentation themes\n\n[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#fine-tuning-check)Fine-tuning, Check!\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-3-questions)\n\nThat was fun! In the first two chapters you learned about models and tokenizers, and now you know how to fine-tune them for your own data. To recap, in this chapter you:\n\n- Learned about datasets in the [Hub](https://huggingface.co/datasets)\n- Learned how to load and preprocess datasets, including using dynamic padding and collators\n- Implemented your own fine-tuning and evaluation of a model\n- Implemented a lower-level training loop\n- Used 🤗 Accelerate to easily adapt your training loop so it works for multiple GPUs or TPUs","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tFine-tuning, Check! - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:13.154Z"} {"title":"End-of-chapter quiz - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter3/6?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#end-of-chapter-quiz)End-of-chapter quiz\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-3-questions)\n\nTest what you learned in this chapter!\n\n### 1\\. The `emotion` dataset contains Twitter messages labeled with emotions. Search for it in the [Hub](https://huggingface.co/datasets), and read the dataset card. Which of these is not one of its basic emotions?\n\n### [](#2.-search-for-the-ar_sarcasm-dataset-in-the-hub.-which-task-does-it-support?)2\\. Search for the `ar_sarcasm` dataset in the [Hub](https://huggingface.co/datasets). Which task does it support?\n\n### [](#3.-how-does-the-bert-model-expect-a-pair-of-sentences-to-be-processed?)3\\. How does the BERT model expect a pair of sentences to be processed?\n\n### [](#4.-what-are-the-benefits-of-the-dataset.map()-method?)4\\. What are the benefits of the `Dataset.map()` method?\n\n### [](#5.-what-does-dynamic-padding-mean?)5\\. What does dynamic padding mean?\n\n### [](#6.-what-is-the-purpose-of-a-collate-function?)6\\. What is the purpose of a collate function?\n\n### [](#7.-what-happens-when-you-instantiate-one-of-the-automodelforxxx-classes-with-a-pretrained-language-model-(such-as-bert-base-uncased)-that-corresponds-to-a-different-task-than-the-one-for-which-it-was-trained?)7\\. What happens when you instantiate one of the `AutoModelForXxx` classes with a pretrained language model (such as `bert-base-uncased`) that corresponds to a different task than the one for which it was trained?\n\n### [](#8.-what’s-the-purpose-of-trainingarguments?)8\\. What’s the purpose of `TrainingArguments`?\n\n### [](#9.-why-should-you-use-the-🤗-accelerate-library?)9\\. 
Why should you use the 🤗 Accelerate library?","html":"
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:13.348Z"} {"title":"The Hugging Face Hub - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter4/1?fw=pt","markdown":"3\\. Fine-tuning a pretrained model\n\n4\\. Sharing models and tokenizers\n\n5\\. The 🤗 Datasets library\n\n6\\. The 🤗 Tokenizers library\n\n9\\. Building and sharing demos new\n\n## [](#the-hugging-face-hub)The Hugging Face Hub\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-4-questions)\n\nThe [Hugging Face Hub](https://huggingface.co/) –- our main website –- is a central platform that enables anyone to discover, use, and contribute new state-of-the-art models and datasets. It hosts a wide variety of models, with more than 10,000 publicly available. We’ll focus on the models in this chapter, and take a look at the datasets in Chapter 5.\n\nThe models in the Hub are not limited to 🤗 Transformers or even NLP. There are models from [Flair](https://github.com/flairNLP/flair) and [AllenNLP](https://github.com/allenai/allennlp) for NLP, [Asteroid](https://github.com/asteroid-team/asteroid) and [pyannote](https://github.com/pyannote/pyannote-audio) for speech, and [timm](https://github.com/rwightman/pytorch-image-models) for vision, to name a few.\n\nEach of these models is hosted as a Git repository, which allows versioning and reproducibility. Sharing a model on the Hub means opening it up to the community and making it accessible to anyone looking to easily use it, in turn eliminating their need to train a model on their own and simplifying sharing and usage.\n\nAdditionally, sharing a model on the Hub automatically deploys a hosted Inference API for that model. Anyone in the community is free to test it out directly on the model’s page, with custom inputs and appropriate widgets.\n\nThe best part is that sharing and using any public model on the Hub is completely free! 
[Paid plans](https://huggingface.co/pricing) also exist if you wish to share models privately.\n\nThe video below shows how to navigate the Hub.\n\nHaving a huggingface.co account is required to follow along with this part, as we’ll be creating and managing repositories on the Hugging Face Hub: [create an account](https://huggingface.co/join).","html":"
\n\t\n\t
\n\t\n\t\n\t
\n\n\t

NLP Course documentation

The Hugging Face Hub

1,182
\n\t\t
\n\t\t\t
\"Hugging\n\t\t
Join the Hugging Face community
\n\t\t

and get access to the augmented documentation experience\n\t\t

\n\t\t\n\t\t
\n\t\t\t

to get started

\n\t\t\t\t

The Hugging Face Hub

\"Ask

The Hugging Face Hub –- our main website –- is a central platform that enables anyone to discover, use, and contribute new state-of-the-art models and datasets. It hosts a wide variety of models, with more than 10,000 publicly available. We’ll focus on the models in this chapter, and take a look at the datasets in Chapter 5.

The models in the Hub are not limited to 🤗 Transformers or even NLP. There are models from Flair and AllenNLP for NLP, Asteroid and pyannote for speech, and timm for vision, to name a few.

Each of these models is hosted as a Git repository, which allows versioning and reproducibility. Sharing a model on the Hub means opening it up to the community and making it accessible to anyone looking to easily use it, in turn eliminating their need to train a model on their own and simplifying sharing and usage.

Additionally, sharing a model on the Hub automatically deploys a hosted Inference API for that model. Anyone in the community is free to test it out directly on the model’s page, with custom inputs and appropriate widgets.
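
If you would rather call the hosted Inference API programmatically than through the widget, a request can be as small as the sketch below; the model identifier and token are placeholders you would replace with your own, and the `requests` package is assumed to be installed:

```
import requests

API_URL = "https://api-inference.huggingface.co/models/<model-id>"
headers = {"Authorization": "Bearer <YOUR_TOKEN>"}  # a User Access Token from your account settings

# Send an input to the hosted Inference API and print the raw JSON response
response = requests.post(API_URL, headers=headers, json={"inputs": "Hello, Hub!"})
print(response.json())
```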

The best part is that sharing and using any public model on the Hub is completely free! [Paid plans](https://huggingface.co/pricing) also exist if you wish to share models privately.

The video below shows how to navigate the Hub.

Having a huggingface.co account is required to follow along with this part of the course, as we’ll be creating and managing repositories on the Hugging Face Hub: [create an account](https://huggingface.co/join).

## Using pretrained models


The Model Hub makes selecting the appropriate model simple, so that using it in any downstream library can be done in a few lines of code. Let’s take a look at how to actually use one of these models, and how to contribute back to the community.

Let’s say we’re looking for a French-based model that can perform mask filling.

![Selecting the Camembert model.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/camembert.gif)

We select the camembert-base checkpoint to try it out. The identifier camembert-base is all we need to start using it! As you’ve seen in previous chapters, we can instantiate it using the pipeline() function:

```
from transformers import pipeline

camembert_fill_mask = pipeline("fill-mask", model="camembert-base")
results = camembert_fill_mask("Le camembert est <mask> :)")
```
```
[
  {'sequence': 'Le camembert est délicieux :)', 'score': 0.49091005325317383, 'token': 7200, 'token_str': 'délicieux'},
  {'sequence': 'Le camembert est excellent :)', 'score': 0.1055697426199913, 'token': 2183, 'token_str': 'excellent'},
  {'sequence': 'Le camembert est succulent :)', 'score': 0.03453313186764717, 'token': 26202, 'token_str': 'succulent'},
  {'sequence': 'Le camembert est meilleur :)', 'score': 0.0330314114689827, 'token': 528, 'token_str': 'meilleur'},
  {'sequence': 'Le camembert est parfait :)', 'score': 0.03007650189101696, 'token': 1654, 'token_str': 'parfait'}
]
```

As you can see, loading a model within a pipeline is extremely simple. The only thing you need to watch out for is that the chosen checkpoint is suitable for the task it’s going to be used for. For example, here we are loading the camembert-base checkpoint in the fill-mask pipeline, which is completely fine. But if we were to load this checkpoint in the text-classification pipeline, the results would not make any sense because the head of camembert-base is not suitable for this task! We recommend using the task selector in the Hugging Face Hub interface in order to select the appropriate checkpoints:

![The task selector on the web interface.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/tasks.png)
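
To see why this matters, here is a small cautionary sketch: the code below runs, but because `camembert-base` ships no trained classification head, 🤗 Transformers initializes one with random weights (and warns about it), so the predicted labels carry no information:

```
from transformers import pipeline

# Loading a fill-mask checkpoint into a classification pipeline:
# a new classification head is randomly initialized, so the scores are meaningless.
classifier = pipeline("text-classification", model="camembert-base")
print(classifier("Le camembert est délicieux :)"))
```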

You can also instantiate the checkpoint using the model architecture directly:

```
from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")
```

However, we recommend using the Auto* classes instead, as these are by design architecture-agnostic. While the previous code sample limits users to checkpoints loadable in the CamemBERT architecture, using the Auto* classes makes switching checkpoints simple:

```
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForMaskedLM.from_pretrained("camembert-base")
```
When using a pretrained model, make sure to check how it was trained, on which datasets, its limits, and its biases. All of this information should be indicated on its model card.
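
Much of this information is also exposed as Hub metadata that you can inspect programmatically. A small sketch, assuming the `huggingface_hub` library is installed:

```
from huggingface_hub import model_info

# Fetch the Hub metadata for a checkpoint: its declared task, tags, and more
info = model_info("camembert-base")
print(info.pipeline_tag)  # the task the checkpoint is meant for, e.g. fill-mask
print(info.tags)          # language, license, datasets, ... as declared in its model card
```
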
## Sharing pretrained models


In the steps below, we’ll take a look at the easiest ways to share pretrained models to the 🤗 Hub. There are tools and utilities available that make it simple to share and update models directly on the Hub, which we will explore below.

We encourage all users that train models to contribute by sharing them with the community — sharing models, even when trained on very specific datasets, will help others, saving them time and compute resources and providing access to useful trained artifacts. In turn, you can benefit from the work that others have done!

There are three ways to go about creating new model repositories:

  • Using the push_to_hub API
  • Using the huggingface_hub Python library
  • Using the web interface

Once you’ve created a repository, you can upload files to it via git and git-lfs. We’ll walk you through creating model repositories and uploading files to them in the following sections.

## Using the `push_to_hub` API

The simplest way to upload files to the Hub is by leveraging the push_to_hub API.

Before going further, you’ll need to generate an authentication token so that the huggingface_hub API knows who you are and what namespaces you have write access to. Make sure you are in an environment where you have transformers installed (see Setup). If you are in a notebook, you can use the following function to login:

```
from huggingface_hub import notebook_login

notebook_login()
```

In a terminal, you can run:

```
huggingface-cli login
```

In both cases, you should be prompted for your username and password, which are the same ones you use to log in to the Hub. If you do not have a Hub profile yet, you should create one [here](https://huggingface.co/join).

Great! You now have your authentication token stored in your cache folder. Let’s create some repositories!
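
If you want to double-check which account the cached token belongs to, the `huggingface_hub` library exposes a `whoami()` helper; here is a quick sketch:

```
from huggingface_hub import whoami

# Uses the token stored in your cache and returns information about your account
user = whoami()
print(user["name"])
```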

If you have played around with the Trainer API to train a model, the easiest way to upload it to the Hub is to set push_to_hub=True when you define your TrainingArguments:

```
from transformers import TrainingArguments

training_args = TrainingArguments(
    "bert-finetuned-mrpc", save_strategy="epoch", push_to_hub=True
)
```

When you call `trainer.train()`, the `Trainer` will then upload your model to the Hub each time it is saved (here every epoch) in a repository in your namespace. That repository will be named like the output directory you picked (here `bert-finetuned-mrpc`) but you can choose a different name with `hub_model_id = "a_different_name"`.

To upload your model to an organization you are a member of, just pass it with `hub_model_id = "my_organization/my_repo_name"`.
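
Putting these options together, a minimal sketch of a `TrainingArguments` setup that pushes to a custom repository under an organization might look like this (the repository and organization names are placeholders):

```
from transformers import TrainingArguments

training_args = TrainingArguments(
    "bert-finetuned-mrpc",                        # local output directory
    save_strategy="epoch",                        # a checkpoint is uploaded at every epoch
    push_to_hub=True,
    hub_model_id="my_organization/my_repo_name",  # target repository on the Hub
)
```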

Once your training is finished, you should do a final `trainer.push_to_hub()` to upload the last version of your model. It will also generate a model card with all the relevant metadata, reporting the hyperparameters used and the evaluation results! Here is an example of the content you might find in such a model card:

![An example of an auto-generated model card.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/model_card.png)

At a lower level, accessing the Model Hub can be done directly on models, tokenizers, and configuration objects via their push_to_hub() method. This method takes care of both the repository creation and pushing the model and tokenizer files directly to the repository. No manual handling is required, unlike with the API we’ll see below.

To get an idea of how it works, let’s first initialize a model and a tokenizer:

```
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = "camembert-base"

model = AutoModelForMaskedLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

You’re free to do whatever you want with these — add tokens to the tokenizer, train the model, fine-tune it. Once you’re happy with the resulting model, weights, and tokenizer, you can leverage the push_to_hub() method directly available on the model object:

```
model.push_to_hub("dummy-model")
```

This will create the new repository `dummy-model` in your profile, and populate it with your model files. Do the same with the tokenizer, so that all the files are now available in this repository:

```
tokenizer.push_to_hub("dummy-model")
```

If you belong to an organization, simply specify the organization argument to upload to that organization’s namespace:

```
tokenizer.push_to_hub("dummy-model", organization="huggingface")
```

If you wish to use a specific Hugging Face token, you’re free to specify it to the push_to_hub() method as well:

```
tokenizer.push_to_hub("dummy-model", organization="huggingface", use_auth_token="<TOKEN>")
```

Now head to the Model Hub to find your newly uploaded model: https://huggingface.co/user-or-organization/dummy-model.

Click on the “Files and versions” tab, and you should see the files visible in the following screenshot:

![Dummy model containing both the tokenizer and model files.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/push_to_hub_dummy_model.png)

✏️ Try it out! Take the model and tokenizer associated with the bert-base-cased checkpoint and upload them to a repo in your namespace using the push_to_hub() method. Double-check that the repo appears properly on your page before deleting it.

As you’ve seen, the push_to_hub() method accepts several arguments, making it possible to upload to a specific repository or organization namespace, or to use a different API token. We recommend you take a look at the method specification available directly in the 🤗 Transformers documentation to get an idea of what is possible.

The `push_to_hub()` method is backed by the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) Python package, which offers a direct API to the Hugging Face Hub. It’s integrated within 🤗 Transformers and several other machine learning libraries, like [`allennlp`](https://github.com/allenai/allennlp). Although we focus on the 🤗 Transformers integration in this chapter, integrating it into your own code or library is simple.

Jump to the last section to see how to upload files to your newly created repository!

## Using the `huggingface_hub` Python library

The `huggingface_hub` Python library is a package which offers a set of tools for the model and dataset hubs. It provides simple methods and classes for common tasks like getting information about repositories on the Hub and managing them, as well as simple APIs that work on top of git to manage those repositories’ content and to integrate the Hub into your projects and libraries.

Similarly to using the push_to_hub API, this will require you to have your API token saved in your cache. In order to do this, you will need to use the login command from the CLI, as mentioned in the previous section (again, make sure to prepend these commands with the ! character if running in Google Colab):

```
huggingface-cli login
```

The huggingface_hub package offers several methods and classes which are useful for our purpose. Firstly, there are a few methods to manage repository creation, deletion, and others:

```
from huggingface_hub import (
    # User management
    login,
    logout,
    whoami,

    # Repository creation and management
    create_repo,
    delete_repo,
    update_repo_visibility,

    # And some methods to retrieve/change information about the content
    list_models,
    list_datasets,
    list_metrics,
    list_repo_files,
    upload_file,
    delete_file,
)
```

Additionally, it offers the very powerful `Repository` class to manage a local repository. We will explore these methods and that class in the next few sections to understand how to leverage them.

The create_repo method can be used to create a new repository on the hub:

```
from huggingface_hub import create_repo

create_repo("dummy-model")
```

This will create the repository dummy-model in your namespace. If you like, you can specify which organization the repository should belong to using the organization argument:

```
from huggingface_hub import create_repo

create_repo("dummy-model", organization="huggingface")
```

This will create the `dummy-model` repository in the `huggingface` namespace, assuming you belong to that organization. Other arguments which may be useful are:

  • `private`, in order to specify if the repository should be visible to others or not.
  • `token`, if you would like to override the token stored in your cache with a given token.
  • `repo_type`, if you would like to create a `dataset` or a `space` instead of a model. Accepted values are `"dataset"` and `"space"`.
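
For instance, a minimal sketch combining these arguments to create a private dataset repository (the repository name is just an example):

```
from huggingface_hub import create_repo

# Create a private repository that holds a dataset rather than a model
create_repo("dummy-dataset", private=True, repo_type="dataset")
```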

Once the repository is created, we should add files to it! Jump to the next section to see the three ways this can be handled.

## Using the web interface

The web interface offers tools to manage repositories directly in the Hub. Using the interface, you can easily create repositories, add files (even large ones!), explore models, visualize diffs, and much more.

To create a new repository, visit [huggingface.co/new](https://huggingface.co/new):

![Page showcasing the model used for the creation of a new model repository.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/new_model.png)

First, specify the owner of the repository: this can be either you or any of the organizations you’re affiliated with. If you choose an organization, the model will be featured on the organization’s page and every member of the organization will have the ability to contribute to the repository.

Next, enter your model’s name. This will also be the name of the repository. Finally, you can specify whether you want your model to be public or private. Private models are hidden from public view.

After creating your model repository, you should see a page like this:

![An empty model page after creating a new repository.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/empty_model.png)

This is where your model will be hosted. To start populating it, you can add a README file directly from the web interface.

![The README file showing the Markdown capabilities.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/dummy_model.png)

The README file is in Markdown — feel free to go wild with it! The third part of this chapter is dedicated to building a model card. These are of prime importance in bringing value to your model, as they’re where you tell others what it can do.

If you look at the “Files and versions” tab, you’ll see that there aren’t many files there yet — just the README.md you just created and the .gitattributes file that keeps track of large files.

![The 'Files and versions' tab only shows the .gitattributes and README.md files.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/files.png)

We’ll take a look at how to add some new files next.

## Uploading the model files

The system to manage files on the Hugging Face Hub is based on git for regular files, and git-lfs (which stands for Git Large File Storage) for larger files.

In the next sections, we go over three different ways of uploading files to the Hub: with the `upload_file` function, with the `Repository` class, and with git commands directly.

### The `upload_file` approach

Using `upload_file` does not require git and git-lfs to be installed on your system. It pushes files directly to the 🤗 Hub using HTTP POST requests. A limitation of this approach is that it doesn’t handle files that are larger than 5GB in size. If your files are larger than 5GB, please follow the two other methods detailed below.

The API may be used as follows:

```
from huggingface_hub import upload_file

upload_file(
    "<path_to_file>/config.json",
    path_in_repo="config.json",
    repo_id="<namespace>/dummy-model",
)
```

This will upload the file `config.json` available at `<path_to_file>` to the root of the `dummy-model` repository as `config.json`. Other arguments which may be useful are:

  • `token`, if you would like to override the token stored in your cache with a given token.
  • `repo_type`, if you would like to upload to a `dataset` or a `space` instead of a model. Accepted values are `"dataset"` and `"space"`.
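
For example, here is a small sketch that uses `repo_type` to push the same file to a dataset repository instead; the path and namespace are placeholders, as above:

```
from huggingface_hub import upload_file

upload_file(
    "<path_to_file>/config.json",
    path_in_repo="config.json",
    repo_id="<namespace>/dummy-dataset",
    repo_type="dataset",  # upload to a dataset repository instead of a model repository
)
```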

### The `Repository` class

The Repository class manages a local repository in a git-like manner. It abstracts most of the pain points one may have with git to provide all features that we require.

Using this class requires having git and git-lfs installed, so make sure you have git-lfs installed (see here for installation instructions) and set up before you begin.

In order to start playing around with the repository we have just created, we can start by initialising it into a local folder by cloning the remote repository:

```
from huggingface_hub import Repository

repo = Repository("<path_to_dummy_folder>", clone_from="<namespace>/dummy-model")
```

This created the folder <path_to_dummy_folder> in our working directory. This folder only contains the .gitattributes file as that’s the only file created when instantiating the repository through create_repo.

From this point on, we may leverage several of the traditional git methods:

```
repo.git_pull()
repo.git_add()
repo.git_commit()
repo.git_push()
repo.git_tag()
```

And others! We recommend taking a look at the Repository documentation available here for an overview of all available methods.

At present, we have a model and a tokenizer that we would like to push to the Hub. Since we have successfully cloned the repository, we can save the files within that repository.

We first make sure that our local clone is up to date by pulling the latest changes:

```
repo.git_pull()
```

Once that is done, we save the model and tokenizer files:

```
model.save_pretrained("<path_to_dummy_folder>")
tokenizer.save_pretrained("<path_to_dummy_folder>")
```

The <path_to_dummy_folder> now contains all the model and tokenizer files. We follow the usual git workflow by adding files to the staging area, committing them and pushing them to the hub:

```
repo.git_add()
repo.git_commit("Add model and tokenizer files")
repo.git_push()
```

Congratulations! You just pushed your first files to the Hub.

### The git-based approach

This is the very barebones approach to uploading files: we’ll do so with git and git-lfs directly. Most of the difficulty is abstracted away by previous approaches, but there are a few caveats with the following method so we’ll follow a more complex use-case.

Using this approach requires having git and git-lfs installed, so make sure you have git-lfs installed and set up (see [git-lfs.github.com](https://git-lfs.github.com/) for installation instructions) before you begin.

First start by initializing git-lfs:

```
git lfs install
```
```
Updated git hooks.
Git LFS initialized.
```

Once that’s done, the first step is to clone your model repository:

```
git clone https://huggingface.co/<namespace>/<your-model-id>
```

My username is lysandre and I’ve used the model name dummy, so for me the command ends up looking like the following:

```
git clone https://huggingface.co/lysandre/dummy
```

I now have a folder named dummy in my working directory. I can cd into the folder and have a look at the contents:

```
cd dummy && ls
```
```
README.md
```

If you just created your repository using Hugging Face Hub’s create_repo method, this folder should only contain a hidden .gitattributes file. If you followed the instructions in the previous section to create a repository using the web interface, the folder should contain a single README.md file alongside the hidden .gitattributes file, as shown here.

Adding a regular-sized file, such as a configuration file, a vocabulary file, or basically any file under a few megabytes, is done exactly as one would do it in any git-based system. However, bigger files must be registered through git-lfs in order to push them to huggingface.co.

Let’s go back to Python for a bit to generate a model and tokenizer that we’d like to commit to our dummy repository:

```
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = "camembert-base"

model = AutoModelForMaskedLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Do whatever with the model, train it, fine-tune it...

model.save_pretrained("<path_to_dummy_folder>")
tokenizer.save_pretrained("<path_to_dummy_folder>")
```

Now that we’ve saved some model and tokenizer artifacts, let’s take another look at the dummy folder:

```
ls
```
```
config.json  pytorch_model.bin  README.md  sentencepiece.bpe.model  special_tokens_map.json  tokenizer_config.json  tokenizer.json
```

If you look at the file sizes (for example, with ls -lh), you should see that the model state dict file (pytorch_model.bin) is the only outlier, at more than 400 MB.

✏️ When creating the repository from the web interface, the *.gitattributes* file is automatically set up to consider files with certain extensions, such as *.bin* and *.h5*, as large files, and git-lfs will track them with no necessary setup on your side.

We can now go ahead and proceed like we would usually do with traditional Git repositories. We can add all the files to Git’s staging environment using the git add command:

```
git add .
```

We can then have a look at the files that are currently staged:

```
git status
```
```
On branch main
Your branch is up to date with 'origin/main'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
  modified:   .gitattributes
	new file:   config.json
	new file:   pytorch_model.bin
	new file:   sentencepiece.bpe.model
	new file:   special_tokens_map.json
	new file:   tokenizer.json
	new file:   tokenizer_config.json
```

Similarly, we can make sure that git-lfs is tracking the correct files by using its status command:

```
git lfs status
```
```
On branch main
Objects to be pushed to origin/main:


Objects to be committed:

	config.json (Git: bc20ff2)
	pytorch_model.bin (LFS: 35686c2)
	sentencepiece.bpe.model (LFS: 988bc5a)
	special_tokens_map.json (Git: cb23931)
	tokenizer.json (Git: 851ff3e)
	tokenizer_config.json (Git: f0f7783)

Objects not staged for commit:

```

We can see that all files have Git as a handler, except pytorch_model.bin and sentencepiece.bpe.model, which have LFS. Great!

Let’s proceed to the final steps, committing and pushing to the huggingface.co remote repository:

```
git commit -m "First model version"
```
```
[main b08aab1] First model version
 7 files changed, 29027 insertions(+)
 6 files changed, 36 insertions(+)
 create mode 100644 config.json
 create mode 100644 pytorch_model.bin
 create mode 100644 sentencepiece.bpe.model
 create mode 100644 special_tokens_map.json
 create mode 100644 tokenizer.json
 create mode 100644 tokenizer_config.json
```

Pushing can take a bit of time, depending on the speed of your internet connection and the size of your files:

```
git push
```
```
Uploading LFS objects: 100% (1/1), 433 MB | 1.3 MB/s, done.
Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 12 threads
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 288.27 KiB | 6.27 MiB/s, done.
Total 9 (delta 1), reused 0 (delta 0), pack-reused 0
To https://huggingface.co/lysandre/dummy
   891b41d..b08aab1  main -> main
```

If we take a look at the model repository when this is finished, we can see all the recently added files:

![The 'Files and versions' tab now contains all the recently uploaded files.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/full_model.png)

The UI allows you to explore the model files and commits and to see the diff introduced by each commit:

![The diff introduced by the recent commit.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter4/diffs.gif)
\n\t\t\t\t
\n\t\t
\n\t
\n\t
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:16.174Z"} {"title":"Building a model card - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter4/4?fw=pt","markdown":"## [](#building-a-model-card)Building a model card\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-4-questions)\n\nThe model card is a file which is arguably as important as the model and tokenizer files in a model repository. It is the central definition of the model, ensuring reusability by fellow community members and reproducibility of results, and providing a platform on which other members may build their artifacts.\n\nDocumenting the training and evaluation process helps others understand what to expect of a model — and providing sufficient information regarding the data that was used and the preprocessing and postprocessing that were done ensures that the limitations, biases, and contexts in which the model is and is not useful can be identified and understood.\n\nTherefore, creating a model card that clearly defines your model is a very important step. Here, we provide some tips that will help you with this. Creating the model card is done through the _README.md_ file you saw earlier, which is a Markdown file.\n\nThe “model card” concept originates from a research direction from Google, first shared in the paper [“Model Cards for Model Reporting”](https://arxiv.org/abs/1810.03993) by Margaret Mitchell et al. 
A lot of information contained here is based on that paper, and we recommend you take a look at it to understand why model cards are so important in a world that values reproducibility, reusability, and fairness.\n\nThe model card usually starts with a very brief, high-level overview of what the model is for, followed by additional details in the following sections:\n\n- Model description\n- Intended uses & limitations\n- How to use\n- Limitations and bias\n- Training data\n- Training procedure\n- Evaluation results\n\nLet’s take a look at what each of these sections should contain.\n\n### [](#model-description)Model description\n\nThe model description provides basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, the author, and general information about the model. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section.\n\n### [](#intended-uses-limitations)Intended uses & limitations\n\nHere you describe the use cases the model is intended for, including the languages, fields, and domains where it can be applied. This section of the model card can also document areas that are known to be out of scope for the model, or where it is likely to perform suboptimally.\n\n### [](#how-to-use)How to use\n\nThis section should include some examples of how to use the model. This can showcase usage of the `pipeline()` function, usage of the model and tokenizer classes, and any other code you think might be helpful.\n\n### [](#training-data)Training data\n\nThis part should indicate which dataset(s) the model was trained on. A brief description of the dataset(s) is also welcome.\n\n### [](#training-procedure)Training procedure\n\nIn this section you should describe all the relevant aspects of training that are useful from a reproducibility perspective. This includes any preprocessing and postprocessing that were done on the data, as well as details such as the number of epochs the model was trained for, the batch size, the learning rate, and so on.\n\n### [](#variable-and-metrics)Variable and metrics\n\nHere you should describe the metrics you use for evaluation, and the different factors you are mesuring. Mentioning which metric(s) were used, on which dataset and which dataset split, makes it easy to compare you model’s performance compared to that of other models. These should be informed by the previous sections, such as the intended users and use cases.\n\n### [](#evaluation-results)Evaluation results\n\nFinally, provide an indication of how well the model performs on the evaluation dataset. If the model uses a decision threshold, either provide the decision threshold used in the evaluation, or provide details on evaluation at different thresholds for the intended uses.\n\n## [](#example)Example\n\nCheck out the following for a few examples of well-crafted model cards:\n\n- [`bert-base-cased`](https://huggingface.co/bert-base-cased)\n- [`gpt2`](https://huggingface.co/gpt2)\n- [`distilbert`](https://huggingface.co/distilbert-base-uncased)\n\nMore examples from different organizations and companies are available [here](https://github.com/huggingface/model_card/blob/master/examples.md).\n\n## [](#note)Note\n\nModel cards are not a requirement when publishing models, and you don’t need to include all of the sections described above when you make one. 
However, explicit documentation of the model can only benefit future users, so we recommend that you fill in as many of the sections as possible to the best of your knowledge and ability.\n\n## [](#model-card-metadata)Model card metadata\n\nIf you have done a little exploring of the Hugging Face Hub, you should have seen that some models belong to certain categories: you can filter them by tasks, languages, libraries, and more. The categories a model belongs to are identified according to the metadata you add in the model card header.\n\nFor example, if you take a look at the [`camembert-base` model card](https://huggingface.co/camembert-base/blob/main/README.md), you should see the following lines in the model card header:\n\n```\n---\nlanguage: fr\nlicense: mit\ndatasets:\n- oscar\n---```\n\nThis metadata is parsed by the Hugging Face Hub, which then identifies this model as being a French model, with an MIT license, trained on the Oscar dataset.\n\nThe [full model card specification](https://github.com/huggingface/hub-docs/blame/main/modelcard.md) allows specifying languages, licenses, tags, datasets, metrics, as well as the evaluation results the model obtained when training.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tBuilding a model card - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
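To make the link between metadata and Hub behavior a little more concrete, here is a sketch of a slightly richer header. The field names (`language`, `license`, `tags`, `datasets`, `metrics`) come from the model card specification linked above, but the values are purely illustrative placeholders rather than a real model’s metadata:

```
---
language: fr
license: mit
tags:
- text-classification
datasets:
- oscar
metrics:
- accuracy
---
```

Fields like the language and dataset entries feed the filters you see on the Hub, so the more metadata you fill in, the easier it is for other users to discover your model.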
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:16.231Z"} {"title":"Part 1 completed! - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter4/5?fw=pt","markdown":"## [](#part-1-completed)Part 1 completed!\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-4-questions)\n\nThis is the end of the first part of the course! Part 2 will be released on November 15th with a big community event, see more information [here](https://huggingface.co/blog/course-launch-event).\n\nYou should now be able to fine-tune a pretrained model on a text classification problem (single or pairs of sentences) and upload the result to the Model Hub. To make sure you mastered this first section, you should do exactly that on a problem that interests you (and not necessarily in English if you speak another language)! You can find help in the [Hugging Face forums](https://discuss.huggingface.co/) and share your project in [this topic](https://discuss.huggingface.co/t/share-your-projects/6803) once you’re finished.\n\nWe can’t wait to see what you will build with this!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tPart 1 completed! - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:16.566Z"} {"title":"End-of-chapter quiz - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter4/6?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#end-of-chapter-quiz)End-of-chapter quiz\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-4-questions)\n\nLet’s test what you learned in this chapter!\n\n### [](#1.-what-are-models-on-the-hub-limited-to?)1\\. What are models on the Hub limited to?\n\n### [](#2.-how-can-you-manage-models-on-the-hub?)2\\. How can you manage models on the Hub?\n\n### [](#3.-what-can-you-do-using-the-hugging-face-hub-web-interface?)3\\. What can you do using the Hugging Face Hub web interface?\n\n### [](#4.-what-is-a-model-card?)4\\. What is a model card?\n\n### [](#5.-which-of-these-objects-of-the-🤗-transformers-library-can-be-directly-shared-on-the-hub-with-push_to_hub()?)5\\. Which of these objects of the 🤗 Transformers library can be directly shared on the Hub with `push_to_hub()`?\n\n### [](#6.-what-is-the-first-step-when-using-the-push_to_hub()-method-or-the-cli-tools?)6\\. What is the first step when using the `push_to_hub()` method or the CLI tools?\n\n### [](#7.-you’re-using-a-model-and-a-tokenizer-—-how-can-you-upload-them-to-the-hub?)7\\. You’re using a model and a tokenizer — how can you upload them to the Hub?\n\n### [](#8.-which-git-operations-can-you-do-with-the-repository-class?)8\\. Which git operations can you do with the `Repository` class?","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tEnd-of-chapter quiz - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:16.904Z"} {"title":"Introduction - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter5/1?fw=pt","markdown":"## [](#introduction)Introduction\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-5-questions)\n\nIn [Chapter 3](/course/chapter3) you got your first taste of the 🤗 Datasets library and saw that there were three main steps when it came to fine-tuning a model:\n\n1. Load a dataset from the Hugging Face Hub.\n2. Preprocess the data with `Dataset.map()`.\n3. Load and compute metrics.\n\nBut this is just scratching the surface of what 🤗 Datasets can do! In this chapter, we will take a deep dive into the library. Along the way, we’ll find answers to the following questions:\n\n- What do you do when your dataset is not on the Hub?\n- How can you slice and dice a dataset? (And what if you _really_ need to use Pandas?)\n- What do you do when your dataset is huge and will melt your laptop’s RAM?\n- What the heck are “memory mapping” and Apache Arrow?\n- How can you create your own dataset and push it to the Hub?\n\nThe techniques you learn here will prepare you for the advanced tokenization and fine-tuning tasks in [Chapter 6](/course/chapter6) and [Chapter 7](/course/chapter7) — so grab a coffee and let’s get started!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tIntroduction - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
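As a quick refresher before we dive deeper, here is a minimal sketch of those three steps, loosely mirroring the Chapter 3 setup; the GLUE MRPC dataset and the BERT checkpoint are just placeholder choices for illustration:

```
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer

# Step 1: load a dataset from the Hugging Face Hub
raw_datasets = load_dataset("glue", "mrpc")

# Step 2: preprocess the data with Dataset.map()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenized_datasets = raw_datasets.map(
    lambda batch: tokenizer(batch["sentence1"], batch["sentence2"], truncation=True),
    batched=True,
)

# Step 3: load a metric (metric.compute() is called on predictions later)
metric = load_metric("glue", "mrpc")
```

The rest of this chapter focuses on what happens when step 1 is not as simple as grabbing a ready-made dataset from the Hub.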
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:17.511Z"} {"title":"What if my dataset isn't on the Hub? - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter5/2?fw=pt","markdown":"## [](#what-if-my-dataset-isnt-on-the-hub)What if my dataset isn't on the Hub?\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-5-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter5/section2.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter5/section2.ipynb)\n\nYou know how to use the [Hugging Face Hub](https://huggingface.co/datasets) to download datasets, but you’ll often find yourself working with data that is stored either on your laptop or on a remote server. In this section we’ll show you how 🤗 Datasets can be used to load datasets that aren’t available on the Hugging Face Hub.\n\n## [](#working-with-local-and-remote-datasets)Working with local and remote datasets\n\n🤗 Datasets provides loading scripts to handle the loading of local and remote datasets. It supports several common data formats, such as:\n\n| Data format | Loading script | Example |\n| --- | --- | --- |\n| CSV & TSV | `csv` | `load_dataset(\"csv\", data_files=\"my_file.csv\")` |\n| Text files | `text` | `load_dataset(\"text\", data_files=\"my_file.txt\")` |\n| JSON & JSON Lines | `json` | `load_dataset(\"json\", data_files=\"my_file.jsonl\")` |\n| Pickled DataFrames | `pandas` | `load_dataset(\"pandas\", data_files=\"my_dataframe.pkl\")` |\n\nAs shown in the table, for each data format we just need to specify the type of loading script in the `load_dataset()` function, along with a `data_files` argument that specifies the path to one or more files. 
Let’s start by loading a dataset from local files; later we’ll see how to do the same with remote files.\n\n## [](#loading-a-local-dataset)Loading a local dataset\n\nFor this example we’ll use the [SQuAD-it dataset](https://github.com/crux82/squad-it/), which is a large-scale dataset for question answering in Italian.\n\nThe training and test splits are hosted on GitHub, so we can download them with a simple `wget` command:\n\n```\n!wget https://github.com/crux82/squad-it/raw/master/SQuAD_it-train.json.gz\n!wget https://github.com/crux82/squad-it/raw/master/SQuAD_it-test.json.gz```\n\nThis will download two compressed files called _SQuAD\\_it-train.json.gz_ and _SQuAD\\_it-test.json.gz_, which we can decompress with the Linux `gzip` command:\n\n```\n!gzip -dkv SQuAD_it-*.json.gz```\n\n```\nSQuAD_it-test.json.gz:\t 87.4% -- replaced with SQuAD_it-test.json\nSQuAD_it-train.json.gz:\t 82.2% -- replaced with SQuAD_it-train.json```\n\nWe can see that the compressed files have been replaced with _SQuAD\\_it-train.json_ and _SQuAD\\_it-test.json_, and that the data is stored in the JSON format.\n\n✎ If you’re wondering why there’s a `!` character in the above shell commands, that’s because we’re running them within a Jupyter notebook. Simply remove the prefix if you want to download and unzip the dataset within a terminal.\n\nTo load a JSON file with the `load_dataset()` function, we just need to know if we’re dealing with ordinary JSON (similar to a nested dictionary) or JSON Lines (line-separated JSON). Like many question answering datasets, SQuAD-it uses the nested format, with all the text stored in a `data` field. This means we can load the dataset by specifying the `field` argument as follows:\n\n```\nfrom datasets import load_dataset\n\nsquad_it_dataset = load_dataset(\"json\", data_files=\"SQuAD_it-train.json\", field=\"data\")```\n\nBy default, loading local files creates a `DatasetDict` object with a `train` split. We can see this by inspecting the `squad_it_dataset` object:\n\n```\nDatasetDict({\n train: Dataset({\n features: ['title', 'paragraphs'],\n num_rows: 442\n })\n})```\n\nThis shows us the number of rows and the column names associated with the training set. We can view one of the examples by indexing into the `train` split as follows:\n\n```\nsquad_it_dataset[\"train\"][0]```\n\n```\n{\n \"title\": \"Terremoto del Sichuan del 2008\",\n \"paragraphs\": [\n {\n \"context\": \"Il terremoto del Sichuan del 2008 o il terremoto...\",\n \"qas\": [\n {\n \"answers\": [{\"answer_start\": 29, \"text\": \"2008\"}],\n \"id\": \"56cdca7862d2951400fa6826\",\n \"question\": \"In quale anno si è verificato il terremoto nel Sichuan?\",\n },\n ...\n ],\n },\n ...\n ],\n}```\n\nGreat, we’ve loaded our first local dataset! But while this worked for the training set, what we really want is to include both the `train` and `test` splits in a single `DatasetDict` object so we can apply `Dataset.map()` functions across both splits at once. To do this, we can provide a dictionary to the `data_files` argument that maps each split name to a file associated with that split:\n\n```\ndata_files = {\"train\": \"SQuAD_it-train.json\", \"test\": \"SQuAD_it-test.json\"}\nsquad_it_dataset = load_dataset(\"json\", data_files=data_files, field=\"data\")\nsquad_it_dataset```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['title', 'paragraphs'],\n num_rows: 442\n })\n test: Dataset({\n features: ['title', 'paragraphs'],\n num_rows: 48\n })\n})```\n\nThis is exactly what we wanted. 
Now, we can apply various preprocessing techniques to clean up the data, tokenize the reviews, and so on.\n\nThe `data_files` argument of the `load_dataset()` function is quite flexible and can be either a single file path, a list of file paths, or a dictionary that maps split names to file paths. You can also glob files that match a specified pattern according to the rules used by the Unix shell (e.g., you can glob all the JSON files in a directory as a single split by setting `data_files=\"*.json\"`). See the 🤗 Datasets [documentation](https://huggingface.co/docs/datasets/loading.html#local-and-remote-files) for more details.\n\nThe loading scripts in 🤗 Datasets actually support automatic decompression of the input files, so we could have skipped the use of `gzip` by pointing the `data_files` argument directly to the compressed files:\n\n```\ndata_files = {\"train\": \"SQuAD_it-train.json.gz\", \"test\": \"SQuAD_it-test.json.gz\"}\nsquad_it_dataset = load_dataset(\"json\", data_files=data_files, field=\"data\")```\n\nThis can be useful if you don’t want to manually decompress many GZIP files. The automatic decompression also applies to other common formats like ZIP and TAR, so you just need to point `data_files` to the compressed files and you’re good to go!\n\nNow that you know how to load local files on your laptop or desktop, let’s take a look at loading remote files.\n\n## [](#loading-a-remote-dataset)Loading a remote dataset\n\nIf you’re working as a data scientist or coder in a company, there’s a good chance the datasets you want to analyze are stored on some remote server. Fortunately, loading remote files is just as simple as loading local ones! Instead of providing a path to local files, we point the `data_files` argument of `load_dataset()` to one or more URLs where the remote files are stored. For example, for the SQuAD-it dataset hosted on GitHub, we can just point `data_files` to the _SQuAD\\_it-\\*.json.gz_ URLs as follows:\n\n```\nurl = \"https://github.com/crux82/squad-it/raw/master/\"\ndata_files = {\n \"train\": url + \"SQuAD_it-train.json.gz\",\n \"test\": url + \"SQuAD_it-test.json.gz\",\n}\nsquad_it_dataset = load_dataset(\"json\", data_files=data_files, field=\"data\")```\n\nThis returns the same `DatasetDict` object obtained above, but saves us the step of manually downloading and decompressing the _SQuAD\\_it-\\*.json.gz_ files. This wraps up our foray into the various ways to load datasets that aren’t hosted on the Hugging Face Hub. Now that we’ve got a dataset to play with, let’s get our hands dirty with various data-wrangling techniques!\n\n✏️ **Try it out!** Pick another dataset hosted on GitHub or the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php) and try loading it both locally and remotely using the techniques introduced above. For bonus points, try loading a dataset that’s stored in a CSV or text format (see the [documentation](https://huggingface.co/docs/datasets/loading.html#local-and-remote-files) for more information on these formats).","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tWhat if my dataset isn't on the Hub? - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
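To make the flexibility of `data_files` more concrete, here is a small sketch showing a list of files for one split and a glob pattern; the file names are hypothetical and simply stand in for whatever your local data looks like:

```
from datasets import load_dataset

# A split can map to several files at once...
data_files = {
    "train": ["reviews_part1.csv", "reviews_part2.csv"],
    "test": "reviews_test.csv",
}
dataset = load_dataset("csv", data_files=data_files)

# ...or to a glob pattern that matches every JSON file in the directory
dataset = load_dataset("json", data_files="*.json")
```

Either way, `load_dataset()` returns a `DatasetDict` whose splits follow the keys you provided (or a single `train` split when you pass a bare path or pattern).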
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:18.400Z"} {"title":"Time to slice and dice - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter5/3?fw=pt","markdown":"## [](#time-to-slice-and-dice)Time to slice and dice\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-5-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter5/section3.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter5/section3.ipynb)\n\nMost of the time, the data you work with won’t be perfectly prepared for training models. In this section we’ll explore the various features that 🤗 Datasets provides to clean up your datasets.\n\n## [](#slicing-and-dicing-our-data)Slicing and dicing our data\n\nSimilar to Pandas, 🤗 Datasets provides several functions to manipulate the contents of `Dataset` and `DatasetDict` objects. 
We already encountered the `Dataset.map()` method in [Chapter 3](/course/chapter3), and in this section we’ll explore some of the other functions at our disposal.\n\nFor this example we’ll use the [Drug Review Dataset](https://archive.ics.uci.edu/ml/datasets/Drug+Review+Dataset+%28Drugs.com%29) that’s hosted on the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php), which contains patient reviews on various drugs, along with the condition being treated and a 10-star rating of the patient’s satisfaction.\n\nFirst we need to download and extract the data, which can be done with the `wget` and `unzip` commands:\n\n```\n!wget \"https://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip\"\n!unzip drugsCom_raw.zip```\n\nSince TSV is just a variant of CSV that uses tabs instead of commas as the separator, we can load these files by using the `csv` loading script and specifying the `delimiter` argument in the `load_dataset()` function as follows:\n\n```\nfrom datasets import load_dataset\n\ndata_files = {\"train\": \"drugsComTrain_raw.tsv\", \"test\": \"drugsComTest_raw.tsv\"}\n\ndrug_dataset = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")```\n\nA good practice when doing any sort of data analysis is to grab a small random sample to get a quick feel for the type of data you’re working with. In 🤗 Datasets, we can create a random sample by chaining the `Dataset.shuffle()` and `Dataset.select()` functions together:\n\n```\ndrug_sample = drug_dataset[\"train\"].shuffle(seed=42).select(range(1000))\n\ndrug_sample[:3]```\n\n```\n{'Unnamed: 0': [87571, 178045, 80482],\n 'drugName': ['Naproxen', 'Duloxetine', 'Mobic'],\n 'condition': ['Gout, Acute', 'ibromyalgia', 'Inflammatory Conditions'],\n 'review': ['\"like the previous person mention, I'm a strong believer of aleve, it works faster for my gout than the prescription meds I take. No more going to the doctor for refills.....Aleve works!\"',\n '\"I have taken Cymbalta for about a year and a half for fibromyalgia pain. It is great\\r\\nas a pain reducer and an anti-depressant, however, the side effects outweighed \\r\\nany benefit I got from it. I had trouble with restlessness, being tired constantly,\\r\\ndizziness, dry mouth, numbness and tingling in my feet, and horrible sweating. I am\\r\\nbeing weaned off of it now. Went from 60 mg to 30mg and now to 15 mg. I will be\\r\\noff completely in about a week. The fibro pain is coming back, but I would rather deal with it than the side effects.\"',\n '\"I have been taking Mobic for over a year with no side effects other than an elevated blood pressure. I had severe knee and ankle pain which completely went away after taking Mobic. I attempted to stop the medication however pain returned after a few days.\"'],\n 'rating': [9.0, 3.0, 10.0],\n 'date': ['September 2, 2015', 'November 7, 2011', 'June 5, 2013'],\n 'usefulCount': [36, 13, 128]}```\n\nNote that we’ve fixed the seed in `Dataset.shuffle()` for reproducibility purposes. `Dataset.select()` expects an iterable of indices, so we’ve passed `range(1000)` to grab the first 1,000 examples from the shuffled dataset. 
From this sample we can already see a few quirks in our dataset:\n\n- The `Unnamed: 0` column looks suspiciously like an anonymized ID for each patient.\n- The `condition` column includes a mix of uppercase and lowercase labels.\n- The reviews are of varying length and contain a mix of Python line separators (`\\r\\n`) as well as HTML character codes like `&\\#039;`.\n\nLet’s see how we can use 🤗 Datasets to deal with each of these issues. To test the patient ID hypothesis for the `Unnamed: 0` column, we can use the `Dataset.unique()` function to verify that the number of IDs matches the number of rows in each split:\n\n```\nfor split in drug_dataset.keys():\n assert len(drug_dataset[split]) == len(drug_dataset[split].unique(\"Unnamed: 0\"))```\n\nThis seems to confirm our hypothesis, so let’s clean up the dataset a bit by renaming the `Unnamed: 0` column to something a bit more interpretable. We can use the `DatasetDict.rename_column()` function to rename the column across both splits in one go:\n\n```\ndrug_dataset = drug_dataset.rename_column(\n original_column_name=\"Unnamed: 0\", new_column_name=\"patient_id\"\n)\ndrug_dataset```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount'],\n num_rows: 161297\n })\n test: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount'],\n num_rows: 53766\n })\n})```\n\n✏️ **Try it out!** Use the `Dataset.unique()` function to find the number of unique drugs and conditions in the training and test sets.\n\nNext, let’s normalize all the `condition` labels using `Dataset.map()`. As we did with tokenization in [Chapter 3](/course/chapter3), we can define a simple function that can be applied across all the rows of each split in `drug_dataset`:\n\n```\ndef lowercase_condition(example):\n return {\"condition\": example[\"condition\"].lower()}\n\n\ndrug_dataset.map(lowercase_condition)```\n\n```\nAttributeError: 'NoneType' object has no attribute 'lower'```\n\nOh no, we’ve run into a problem with our map function! From the error we can infer that some of the entries in the `condition` column are `None`, which cannot be lowercased as they’re not strings. Let’s drop these rows using `Dataset.filter()`, which works in a similar way to `Dataset.map()` and expects a function that receives a single example of the dataset. Instead of writing an explicit function like:\n\n```\ndef filter_nones(x):\n return x[\"condition\"] is not None```\n\nand then running `drug_dataset.filter(filter_nones)`, we can do this in one line using a _lambda function_. In Python, lambda functions are small functions that you can define without explicitly naming them. They take the general form:\n\n```\nlambda <arguments> : <expression>```\n\nwhere `lambda` is one of Python’s special [keywords](https://docs.python.org/3/reference/lexical_analysis.html#keywords), `<arguments>` is a list/set of comma-separated values that define the inputs to the function, and `<expression>` represents the operations you wish to execute. For example, we can define a simple lambda function that squares a number, such as `lambda x: x * x`. To apply this function to an input, we need to wrap it and the input in parentheses: `(lambda x: x * x)(3)` evaluates to `9`. Similarly, we can define lambda functions with multiple arguments by separating them with commas. 
For example, we can compute the area of a triangle as follows:\n\n```\n(lambda base, height: 0.5 * base * height)(4, 8)```\n\nLambda functions are handy when you want to define small, single-use functions (for more information about them, we recommend reading the excellent [Real Python tutorial](https://realpython.com/python-lambda/) by Andre Burgaud). In the 🤗 Datasets context, we can use lambda functions to define simple map and filter operations, so let’s use this trick to eliminate the `None` entries in our dataset:\n\n```\ndrug_dataset = drug_dataset.filter(lambda x: x[\"condition\"] is not None)```\n\nWith the `None` entries removed, we can normalize our `condition` column:\n\n```\ndrug_dataset = drug_dataset.map(lowercase_condition)\n\ndrug_dataset[\"train\"][\"condition\"][:3]```\n\n```\n['left ventricular dysfunction', 'adhd', 'birth control']```\n\nIt works! Now that we’ve cleaned up the labels, let’s take a look at cleaning up the reviews themselves.\n\n## [](#creating-new-columns)Creating new columns\n\nWhenever you’re dealing with customer reviews, a good practice is to check the number of words in each review. A review might be just a single word like “Great!” or a full-blown essay with thousands of words, and depending on the use case you’ll need to handle these extremes differently. To compute the number of words in each review, we’ll use a rough heuristic based on splitting each text by whitespace.\n\nLet’s define a simple function that counts the number of words in each review:\n\n```\ndef compute_review_length(example):\n return {\"review_length\": len(example[\"review\"].split())}```\n\nUnlike our `lowercase_condition()` function, `compute_review_length()` returns a dictionary whose key does not correspond to one of the column names in the dataset. In this case, when `compute_review_length()` is passed to `Dataset.map()`, it will be applied to all the rows in the dataset to create a new `review_length` column:\n\n```\ndrug_dataset = drug_dataset.map(compute_review_length)\n\ndrug_dataset[\"train\"][0]```\n\n```\n{'patient_id': 206461,\n 'drugName': 'Valsartan',\n 'condition': 'left ventricular dysfunction',\n 'review': '\"It has no side effect, I take it in combination of Bystolic 5 Mg and Fish Oil\"',\n 'rating': 9.0,\n 'date': 'May 20, 2012',\n 'usefulCount': 27,\n 'review_length': 17}```\n\nAs expected, we can see a `review_length` column has been added to our training set. We can sort this new column with `Dataset.sort()` to see what the extreme values look like:\n\n```\ndrug_dataset[\"train\"].sort(\"review_length\")[:3]```\n\n```\n{'patient_id': [103488, 23627, 20558],\n 'drugName': ['Loestrin 21 1 / 20', 'Chlorzoxazone', 'Nucynta'],\n 'condition': ['birth control', 'muscle spasm', 'pain'],\n 'review': ['\"Excellent.\"', '\"useless\"', '\"ok\"'],\n 'rating': [10.0, 1.0, 6.0],\n 'date': ['November 4, 2008', 'March 24, 2017', 'August 20, 2016'],\n 'usefulCount': [5, 2, 10],\n 'review_length': [1, 1, 1]}```\n\nAs we suspected, some reviews contain just a single word, which, although it may be okay for sentiment analysis, would not be informative if we want to predict the condition.\n\n🙋 An alternative way to add new columns to a dataset is with the `Dataset.add_column()` function. This allows you to provide the column as a Python list or NumPy array and can be handy in situations where `Dataset.map()` is not well suited for your analysis.\n\nLet’s use the `Dataset.filter()` function to remove reviews that contain fewer than 30 words. 
Similarly to what we did with the `condition` column, we can filter out the very short reviews by requiring that the reviews have a length above this threshold:\n\n```\ndrug_dataset = drug_dataset.filter(lambda x: x[\"review_length\"] > 30)\nprint(drug_dataset.num_rows)```\n\n```\n{'train': 138514, 'test': 46108}```\n\nAs you can see, this has removed around 15% of the reviews from our original training and test sets.\n\n✏️ **Try it out!** Use the `Dataset.sort()` function to inspect the reviews with the largest numbers of words. See the [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.sort) to see which argument you need to use to sort the reviews by length in descending order.\n\nThe last thing we need to deal with is the presence of HTML character codes in our reviews. We can use Python’s `html` module to unescape these characters, like so:\n\n```\nimport html\n\ntext = \"I&#039;m a transformer called BERT\"\nhtml.unescape(text)```\n\n```\n\"I'm a transformer called BERT\"```\n\nWe’ll use `Dataset.map()` to unescape all the HTML characters in our corpus:\n\n```\ndrug_dataset = drug_dataset.map(lambda x: {\"review\": html.unescape(x[\"review\"])})```\n\nAs you can see, the `Dataset.map()` method is quite useful for processing data — and we haven’t even scratched the surface of everything it can do!\n\n## [](#the-map-methods-superpowers)The `map()` method's superpowers\n\nThe `Dataset.map()` method takes a `batched` argument that, if set to `True`, causes it to send a batch of examples to the map function at once (the batch size is configurable but defaults to 1,000). For instance, the previous map function that unescaped all the HTML took a bit of time to run (you can read the time taken from the progress bars). We can speed this up by processing several elements at the same time using a list comprehension.\n\nWhen you specify `batched=True` the function receives a dictionary with the fields of the dataset, but each value is now a _list of values_, and not just a single value. The return value of `Dataset.map()` should be the same: a dictionary with the fields we want to update or add to our dataset, and a list of values. For example, here is another way to unescape all HTML characters, but using `batched=True`:\n\n```\nnew_drug_dataset = drug_dataset.map(\n lambda x: {\"review\": [html.unescape(o) for o in x[\"review\"]]}, batched=True\n)```\n\nIf you’re running this code in a notebook, you’ll see that this command executes way faster than the previous one. And it’s not because our reviews have already been HTML-unescaped — if you re-execute the instruction from the previous section (without `batched=True`), it will take the same amount of time as before. This is because list comprehensions are usually faster than executing the same code in a `for` loop, and we also gain some performance by accessing lots of elements at the same time instead of one by one.\n\nUsing `Dataset.map()` with `batched=True` will be essential to unlock the speed of the “fast” tokenizers that we’ll encounter in [Chapter 6](/course/chapter6), which can quickly tokenize big lists of texts. 
For instance, to tokenize all the drug reviews with a fast tokenizer, we could use a function like this:\n\n```\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n\n\ndef tokenize_function(examples):\n return tokenizer(examples[\"review\"], truncation=True)```\n\nAs you saw in [Chapter 3](/course/chapter3), we can pass one or several examples to the tokenizer, so we can use this function with or without `batched=True`. Let’s take this opportunity to compare the performance of the different options. In a notebook, you can time a one-line instruction by adding `%time` before the line of code you wish to measure:\n\n```\n%time tokenized_dataset = drug_dataset.map(tokenize_function, batched=True)```\n\nYou can also time a whole cell by putting `%%time` at the beginning of the cell. On the hardware we executed this on, it showed 10.8s for this instruction (it’s the number written after “Wall time”).\n\n✏️ **Try it out!** Execute the same instruction with and without `batched=True`, then try it with a slow tokenizer (add `use_fast=False` in the `AutoTokenizer.from_pretrained()` method) so you can see what numbers you get on your hardware.\n\nHere are the results we obtained with and without batching, with a fast and a slow tokenizer:\n\n| Options | Fast tokenizer | Slow tokenizer |\n| --- | --- | --- |\n| `batched=True` | 10.8s | 4min41s |\n| `batched=False` | 59.2s | 5min3s |\n\nThis means that using a fast tokenizer with the `batched=True` option is 30 times faster than its slow counterpart with no batching — this is truly amazing! That’s the main reason why fast tokenizers are the default when using `AutoTokenizer` (and why they are called “fast”). They’re able to achieve such a speedup because behind the scenes the tokenization code is executed in Rust, which is a language that makes it easy to parallelize code execution.\n\nParallelization is also the reason for the nearly 6x speedup the fast tokenizer achieves with batching: you can’t parallelize a single tokenization operation, but when you want to tokenize lots of texts at the same time you can just split the execution across several processes, each responsible for its own texts.\n\n`Dataset.map()` also has some parallelization capabilities of its own. Since they are not backed by Rust, they won’t let a slow tokenizer catch up with a fast one, but they can still be helpful (especially if you’re using a tokenizer that doesn’t have a fast version). To enable multiprocessing, use the `num_proc` argument and specify the number of processes to use in your call to `Dataset.map()`:\n\n```\nslow_tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=False)\n\n\ndef slow_tokenize_function(examples):\n return slow_tokenizer(examples[\"review\"], truncation=True)\n\n\ntokenized_dataset = drug_dataset.map(slow_tokenize_function, batched=True, num_proc=8)```\n\nYou can experiment a little with timing to determine the optimal number of processes to use; in our case 8 seemed to produce the best speed gain. Here are the numbers we got with and without multiprocessing:\n\n| Options | Fast tokenizer | Slow tokenizer |\n| --- | --- | --- |\n| `batched=True` | 10.8s | 4min41s |\n| `batched=False` | 59.2s | 5min3s |\n| `batched=True`, `num_proc=8` | 6.52s | 41.3s |\n| `batched=False`, `num_proc=8` | 9.49s | 45.2s |\n\nThose are much more reasonable results for the slow tokenizer, but the performance of the fast tokenizer was also substantially improved. 
Note, however, that won’t always be the case — for values of `num_proc` other than 8, our tests showed that it was faster to use `batched=True` without that option. In general, we don’t recommend using Python multiprocessing for fast tokenizers with `batched=True`.\n\nUsing `num_proc` to speed up your processing is usually a great idea, as long as the function you are using is not already doing some kind of multiprocessing of its own.\n\nAll of this functionality condensed into a single method is already pretty amazing, but there’s more! With `Dataset.map()` and `batched=True` you can change the number of elements in your dataset. This is super useful in many situations where you want to create several training features from one example, and we will need to do this as part of the preprocessing for several of the NLP tasks we’ll undertake in [Chapter 7](/course/chapter7).\n\n💡 In machine learning, an _example_ is usually defined as the set of _features_ that we feed to the model. In some contexts, these features will be the set of columns in a `Dataset`, but in others (like here and for question answering), multiple features can be extracted from a single example and belong to a single column.\n\nLet’s have a look at how it works! Here we will tokenize our examples and truncate them to a maximum length of 128, but we will ask the tokenizer to return _all_ the chunks of the texts instead of just the first one. This can be done with `return_overflowing_tokens=True`:\n\n```\ndef tokenize_and_split(examples):\n return tokenizer(\n examples[\"review\"],\n truncation=True,\n max_length=128,\n return_overflowing_tokens=True,\n )```\n\nLet’s test this on one example before using `Dataset.map()` on the whole dataset:\n\n```\nresult = tokenize_and_split(drug_dataset[\"train\"][0])\n[len(inp) for inp in result[\"input_ids\"]]```\n\nSo, our first example in the training set became two features because it was tokenized to more than the maximum number of tokens we specified: the first one of length 128 and the second one of length 49. Now let’s do this for all elements of the dataset!\n\n```\ntokenized_dataset = drug_dataset.map(tokenize_and_split, batched=True)```\n\n```\nArrowInvalid: Column 1 named condition expected length 1463 but got length 1000```\n\nOh no! That didn’t work! Why not? Looking at the error message will give us a clue: there is a mismatch in the lengths of one of the columns, one being of length 1,463 and the other of length 1,000. If you’ve looked at the `Dataset.map()` [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map), you may recall that it’s the number of samples passed to the function that we are mapping; here those 1,000 examples gave 1,463 new features, resulting in a shape error.\n\nThe problem is that we’re trying to mix two different datasets of different sizes: the `drug_dataset` columns will have a certain number of examples (the 1,000 in our error), but the `tokenized_dataset` we are building will have more (the 1,463 in the error message; it is more than 1,000 because we are tokenizing long reviews into more than one example by using `return_overflowing_tokens=True`). That doesn’t work for a `Dataset`, so we need to either remove the columns from the old dataset or make them the same size as they are in the new dataset. 
We can do the former with the `remove_columns` argument:\n\n```\ntokenized_dataset = drug_dataset.map(\n tokenize_and_split, batched=True, remove_columns=drug_dataset[\"train\"].column_names\n)```\n\nNow this works without error. We can check that our new dataset has many more elements than the original dataset by comparing the lengths:\n\n```\nlen(tokenized_dataset[\"train\"]), len(drug_dataset[\"train\"])```\n\nWe mentioned that we can also deal with the mismatched length problem by making the old columns the same size as the new ones. To do this, we will need the `overflow_to_sample_mapping` field the tokenizer returns when we set `return_overflowing_tokens=True`. It gives us a mapping from a new feature index to the index of the sample it originated from. Using this, we can associate each key present in our original dataset with a list of values of the right size by repeating the values of each example as many times as it generates new features:\n\n```\ndef tokenize_and_split(examples):\n result = tokenizer(\n examples[\"review\"],\n truncation=True,\n max_length=128,\n return_overflowing_tokens=True,\n )\n \n sample_map = result.pop(\"overflow_to_sample_mapping\")\n for key, values in examples.items():\n result[key] = [values[i] for i in sample_map]\n return result```\n\nWe can see it works with `Dataset.map()` without us needing to remove the old columns:\n\n```\ntokenized_dataset = drug_dataset.map(tokenize_and_split, batched=True)\ntokenized_dataset```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['attention_mask', 'condition', 'date', 'drugName', 'input_ids', 'patient_id', 'rating', 'review', 'review_length', 'token_type_ids', 'usefulCount'],\n num_rows: 206772\n })\n test: Dataset({\n features: ['attention_mask', 'condition', 'date', 'drugName', 'input_ids', 'patient_id', 'rating', 'review', 'review_length', 'token_type_ids', 'usefulCount'],\n num_rows: 68876\n })\n})```\n\nWe get the same number of training features as before, but here we’ve kept all the old fields. If you need them for some post-processing after applying your model, you might want to use this approach.\n\nYou’ve now seen how 🤗 Datasets can be used to preprocess a dataset in various ways. Although the processing functions of 🤗 Datasets will cover most of your model training needs, there may be times when you’ll need to switch to Pandas to access more powerful features, like `DataFrame.groupby()` or high-level APIs for visualization. Fortunately, 🤗 Datasets is designed to be interoperable with libraries such as Pandas, NumPy, PyTorch, TensorFlow, and JAX. Let’s take a look at how this works.\n\n## [](#from-datasets-to-dataframes-and-back)From `Dataset`s to `DataFrame`s and back\n\nTo enable the conversion between various third-party libraries, 🤗 Datasets provides a `Dataset.set_format()` function. This function only changes the _output format_ of the dataset, so you can easily switch to another format without affecting the underlying _data format_, which is Apache Arrow. The formatting is done in place. 
To demonstrate, let’s convert our dataset to Pandas:\n\n```\ndrug_dataset.set_format(\"pandas\")```\n\nNow when we access elements of the dataset we get a `pandas.DataFrame` instead of a dictionary:\n\n```\ndrug_dataset[\"train\"][:3]```\n\n| | patient\\_id | drugName | condition | review | rating | date | usefulCount | review\\_length |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| 0 | 95260 | Guanfacine | adhd | \"My son is halfway through his fourth week of Intuniv...\" | 8.0 | April 27, 2010 | 192 | 141 |\n| 1 | 92703 | Lybrel | birth control | \"I used to take another oral contraceptive, which had 21 pill cycle, and was very happy- very light periods, max 5 days, no other side effects...\" | 5.0 | December 14, 2009 | 17 | 134 |\n| 2 | 138000 | Ortho Evra | birth control | \"This is my first time using any form of birth control...\" | 8.0 | November 3, 2015 | 10 | 89 |\n\nLet’s create a `pandas.DataFrame` for the whole training set by selecting all the elements of `drug_dataset[\"train\"]`:\n\n```\ntrain_df = drug_dataset[\"train\"][:]```\n\n🚨 Under the hood, `Dataset.set_format()` changes the return format for the dataset’s `__getitem__()` dunder method. This means that when we want to create a new object like `train_df` from a `Dataset` in the `\"pandas\"` format, we need to slice the whole dataset to obtain a `pandas.DataFrame`. You can verify for yourself that the type of `drug_dataset[\"train\"]` is `Dataset`, irrespective of the output format.\n\nFrom here we can use all the Pandas functionality that we want. For example, we can do fancy chaining to compute the class distribution among the `condition` entries:\n\n```\nfrequencies = (\n train_df[\"condition\"]\n .value_counts()\n .to_frame()\n .reset_index()\n .rename(columns={\"index\": \"condition\", \"condition\": \"frequency\"})\n)\nfrequencies.head()```\n\n| | condition | frequency |\n| --- | --- | --- |\n| 0 | birth control | 27655 |\n| 1 | depression | 8023 |\n| 2 | acne | 5209 |\n| 3 | anxiety | 4991 |\n| 4 | pain | 4744 |\n\nAnd once we’re done with our Pandas analysis, we can always create a new `Dataset` object by using the `Dataset.from_pandas()` function as follows:\n\n```\nfrom datasets import Dataset\n\nfreq_dataset = Dataset.from_pandas(frequencies)\nfreq_dataset```\n\n```\nDataset({\n features: ['condition', 'frequency'],\n num_rows: 819\n})```\n\n✏️ **Try it out!** Compute the average rating per drug and store the result in a new `Dataset`.\n\nThis wraps up our tour of the various preprocessing techniques available in 🤗 Datasets. To round out the section, let’s create a validation set to prepare the dataset for training a classifier on. Before doing so, we’ll reset the output format of `drug_dataset` from `\"pandas\"` to `\"arrow\"`:\n\n```\ndrug_dataset.reset_format()```\n\n## [](#creating-a-validation-set)Creating a validation set\n\nAlthough we have a test set we could use for evaluation, it’s a good practice to leave the test set untouched and create a separate validation set during development. Once you are happy with the performance of your models on the validation set, you can do a final sanity check on the test set. This process helps mitigate the risk that you’ll overfit to the test set and deploy a model that fails on real-world data.\n\n🤗 Datasets provides a `Dataset.train_test_split()` function that is based on the famous functionality from `scikit-learn`. 
Let’s use it to split our training set into `train` and `validation` splits (we set the `seed` argument for reproducibility):\n\n```\ndrug_dataset_clean = drug_dataset[\"train\"].train_test_split(train_size=0.8, seed=42)\n\ndrug_dataset_clean[\"validation\"] = drug_dataset_clean.pop(\"test\")\n\ndrug_dataset_clean[\"test\"] = drug_dataset[\"test\"]\ndrug_dataset_clean```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length', 'review_clean'],\n num_rows: 110811\n })\n validation: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length', 'review_clean'],\n num_rows: 27703\n })\n test: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length', 'review_clean'],\n num_rows: 46108\n })\n})```\n\nGreat, we’ve now prepared a dataset that’s ready for training some models on! In [section 5](/course/chapter5/5) we’ll show you how to upload datasets to the Hugging Face Hub, but for now let’s cap off our analysis by looking at a few ways you can save datasets on your local machine.\n\n## [](#saving-a-dataset)Saving a dataset\n\nAlthough 🤗 Datasets will cache every downloaded dataset and the operations performed on it, there are times when you’ll want to save a dataset to disk (e.g., in case the cache gets deleted). As shown in the table below, 🤗 Datasets provides three main functions to save your dataset in different formats:\n\n| Data format | Function |\n| --- | --- |\n| Arrow | `Dataset.save_to_disk()` |\n| CSV | `Dataset.to_csv()` |\n| JSON | `Dataset.to_json()` |\n\nFor example, let’s save our cleaned dataset in the Arrow format:\n\n```\ndrug_dataset_clean.save_to_disk(\"drug-reviews\")```\n\nThis will create a directory with the following structure:\n\n```\ndrug-reviews/\n├── dataset_dict.json\n├── test\n│ ├── dataset.arrow\n│ ├── dataset_info.json\n│ └── state.json\n├── train\n│ ├── dataset.arrow\n│ ├── dataset_info.json\n│ ├── indices.arrow\n│ └── state.json\n└── validation\n ├── dataset.arrow\n ├── dataset_info.json\n ├── indices.arrow\n └── state.json```\n\nwhere we can see that each split is associated with its own _dataset.arrow_ table, and some metadata in _dataset\\_info.json_ and _state.json_. You can think of the Arrow format as a fancy table of columns and rows that is optimized for building high-performance applications that process and transport large datasets.\n\nOnce the dataset is saved, we can load it by using the `load_from_disk()` function as follows:\n\n```\nfrom datasets import load_from_disk\n\ndrug_dataset_reloaded = load_from_disk(\"drug-reviews\")\ndrug_dataset_reloaded```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length'],\n num_rows: 110811\n })\n validation: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length'],\n num_rows: 27703\n })\n test: Dataset({\n features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount', 'review_length'],\n num_rows: 46108\n })\n})```\n\nFor the CSV and JSON formats, we have to store each split as a separate file. 
One way to do this is by iterating over the keys and values in the `DatasetDict` object:\n\n```\nfor split, dataset in drug_dataset_clean.items():\n dataset.to_json(f\"drug-reviews-{split}.jsonl\")```\n\nThis saves each split in [JSON Lines format](https://jsonlines.org/), where each row in the dataset is stored as a single line of JSON. Here’s what the first example looks like:\n\n```\n!head -n 1 drug-reviews-train.jsonl```\n\n```\n{\"patient_id\":141780,\"drugName\":\"Escitalopram\",\"condition\":\"depression\",\"review\":\"\\\"I seemed to experience the regular side effects of LEXAPRO, insomnia, low sex drive, sleepiness during the day. I am taking it at night because my doctor said if it made me tired to take it at night. I assumed it would and started out taking it at night. Strange dreams, some pleasant. I was diagnosed with fibromyalgia. Seems to be helping with the pain. Have had anxiety and depression in my family, and have tried quite a few other medications that haven't worked. Only have been on it for two weeks but feel more positive in my mind, want to accomplish more in my life. Hopefully the side effects will dwindle away, worth it to stick with it from hearing others responses. Great medication.\\\"\",\"rating\":9.0,\"date\":\"May 29, 2011\",\"usefulCount\":10,\"review_length\":125}```\n\nWe can then use the techniques from [section 2](/course/chapter5/2) to load the JSON files as follows:\n\n```\ndata_files = {\n \"train\": \"drug-reviews-train.jsonl\",\n \"validation\": \"drug-reviews-validation.jsonl\",\n \"test\": \"drug-reviews-test.jsonl\",\n}\ndrug_dataset_reloaded = load_dataset(\"json\", data_files=data_files)```\n\nAnd that’s it for our excursion into data wrangling with 🤗 Datasets! Now that we have a cleaned dataset for training a model on, here are a few ideas that you could try out:\n\n1. Use the techniques from [Chapter 3](/course/chapter3) to train a classifier that can predict the patient condition based on the drug review.\n2. Use the `summarization` pipeline from [Chapter 1](/course/chapter1) to generate summaries of the reviews.\n\nNext, we’ll take a look at how 🤗 Datasets can enable you to work with huge datasets without blowing up your laptop!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tTime to slice and dice - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
Most of the time, the data you work with won’t be perfectly prepared for training models. In this section we’ll explore the various features that 🤗 Datasets provides to clean up your datasets.

Slicing and dicing our data

Similar to Pandas, 🤗 Datasets provides several functions to manipulate the contents of Dataset and DatasetDict objects. We already encountered the Dataset.map() method in Chapter 3, and in this section we’ll explore some of the other functions at our disposal.

For this example we’ll use the Drug Review Dataset that’s hosted on the UC Irvine Machine Learning Repository, which contains patient reviews on various drugs, along with the condition being treated and a 10-star rating of the patient’s satisfaction.

First we need to download and extract the data, which can be done with the wget and unzip commands:

!wget \"https://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip\"\n!unzip drugsCom_raw.zip

Since TSV is just a variant of CSV that uses tabs instead of commas as the separator, we can load these files by using the csv loading script and specifying the delimiter argument in the load_dataset() function as follows:

from datasets import load_dataset\n\ndata_files = {\"train\": \"drugsComTrain_raw.tsv\", \"test\": \"drugsComTest_raw.tsv\"}\n# \\t is the tab character in Python\ndrug_dataset = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")
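
Before going any further, it can be worth a quick sanity check on what load_dataset() actually returned. A minimal sketch (output omitted, since it depends on the files you downloaded) could be:

# Show the splits with their row counts, then the column names of the training split\nprint(drug_dataset)\nprint(drug_dataset[\"train\"].column_names)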

A good practice when doing any sort of data analysis is to grab a small random sample to get a quick feel for the type of data you’re working with. In 🤗 Datasets, we can create a random sample by chaining the Dataset.shuffle() and Dataset.select() functions together:

drug_sample = drug_dataset[\"train\"].shuffle(seed=42).select(range(1000))\n# Peek at the first few examples\ndrug_sample[:3]
{'Unnamed: 0': [87571, 178045, 80482],\n 'drugName': ['Naproxen', 'Duloxetine', 'Mobic'],\n 'condition': ['Gout, Acute', 'ibromyalgia', 'Inflammatory Conditions'],\n 'review': ['\"like the previous person mention, I&#039;m a strong believer of aleve, it works faster for my gout than the prescription meds I take. No more going to the doctor for refills.....Aleve works!\"',\n  '\"I have taken Cymbalta for about a year and a half for fibromyalgia pain. It is great\\r\\nas a pain reducer and an anti-depressant, however, the side effects outweighed \\r\\nany benefit I got from it. I had trouble with restlessness, being tired constantly,\\r\\ndizziness, dry mouth, numbness and tingling in my feet, and horrible sweating. I am\\r\\nbeing weaned off of it now. Went from 60 mg to 30mg and now to 15 mg. I will be\\r\\noff completely in about a week. The fibro pain is coming back, but I would rather deal with it than the side effects.\"',\n  '\"I have been taking Mobic for over a year with no side effects other than an elevated blood pressure.  I had severe knee and ankle pain which completely went away after taking Mobic.  I attempted to stop the medication however pain returned after a few days.\"'],\n 'rating': [9.0, 3.0, 10.0],\n 'date': ['September 2, 2015', 'November 7, 2011', 'June 5, 2013'],\n 'usefulCount': [36, 13, 128]}
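
By the way, Dataset.select() is not limited to contiguous ranges — any iterable of indices will do. Purely as an illustration (we don’t need this for our analysis), you could pick out a handful of specific rows like this:

# Grab a few arbitrary rows by index\ndrug_dataset[\"train\"].select([0, 10, 20, 30, 40, 50])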

Note that we’ve fixed the seed in Dataset.shuffle() for reproducibility purposes. Dataset.select() expects an iterable of indices, so we’ve passed range(1000) to grab the first 1,000 examples from the shuffled dataset. From this sample we can already see a few quirks in our dataset:

  • The Unnamed: 0 column looks suspiciously like an anonymized ID for each patient.
  • The condition column includes a mix of uppercase and lowercase labels.
  • The reviews are of varying length and contain a mix of Python line separators (\\r\\n) as well as HTML character codes like &#039;.

Let’s see how we can use 🤗 Datasets to deal with each of these issues. To test the patient ID hypothesis for the Unnamed: 0 column, we can use the Dataset.unique() function to verify that the number of IDs matches the number of rows in each split:

for split in drug_dataset.keys():\n    assert len(drug_dataset[split]) == len(drug_dataset[split].unique(\"Unnamed: 0\"))
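
If you’d rather see the actual counts than rely on a silent assertion, a small variant of the same check (purely illustrative) is to print them instead:

for split in drug_dataset.keys():\n    # Compare the number of rows with the number of distinct IDs in each split\n    print(split, len(drug_dataset[split]), len(drug_dataset[split].unique(\"Unnamed: 0\")))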

This seems to confirm our hypothesis, so let’s clean up the dataset a bit by renaming the Unnamed: 0 column to something a bit more interpretable. We can use the DatasetDict.rename_column() function to rename the column across both splits in one go:

drug_dataset = drug_dataset.rename_column(\n    original_column_name=\"Unnamed: 0\", new_column_name=\"patient_id\"\n)\ndrug_dataset
DatasetDict({\n    train: Dataset({\n        features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount'],\n        num_rows: 161297\n    })\n    test: Dataset({\n        features: ['patient_id', 'drugName', 'condition', 'review', 'rating', 'date', 'usefulCount'],\n        num_rows: 53766\n    })\n})

✏️ Try it out! Use the Dataset.unique() function to find the number of unique drugs and conditions in the training and test sets.
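
If you want to check your answer to this exercise, one possible sketch (the variable names are just a suggestion) looks like this:

for split in drug_dataset.keys():\n    n_drugs = len(drug_dataset[split].unique(\"drugName\"))\n    n_conditions = len(drug_dataset[split].unique(\"condition\"))\n    print(f\"{split}: {n_drugs} unique drugs, {n_conditions} unique conditions\")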

Next, let’s normalize all the condition labels using Dataset.map(). As we did with tokenization in Chapter 3, we can define a simple function that can be applied across all the rows of each split in drug_dataset:

def lowercase_condition(example):\n    return {\"condition\": example[\"condition\"].lower()}\n\n\ndrug_dataset.map(lowercase_condition)
AttributeError: 'NoneType' object has no attribute 'lower'

Oh no, we’ve run into a problem with our map function! From the error we can infer that some of the entries in the condition column are None, which cannot be lowercased as they’re not strings. Let’s drop these rows using Dataset.filter(), which works in a similar way to Dataset.map() and expects a function that receives a single example of the dataset. Instead of writing an explicit function like:

def filter_nones(x):\n    return x[\"condition\"] is not None

and then running drug_dataset.filter(filter_nones), we can do this in one line using a lambda function. In Python, lambda functions are small functions that you can define without explicitly naming them. They take the general form:

lambda <arguments> : <expression>

where lambda is one of Python’s special keywords, <arguments> is a comma-separated list of values that define the inputs to the function, and <expression> represents the operations you wish to execute. For example, we can define a simple lambda function that squares a number as follows:

lambda x : x * x

To apply this function to an input, we need to wrap it and the input in parentheses:

(lambda x: x * x)(3)
9

Similarly, we can define lambda functions with multiple arguments by separating them with commas. For example, we can compute the area of a triangle as follows:

(lambda base, height: 0.5 * base * height)(4, 8)
16.0
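
The same pattern works anywhere Python expects a function. As a purely illustrative example, here is a lambda used with Python’s built-in filter(), which takes exactly the kind of predicate that Dataset.filter() expects:

# Keep only the even numbers between 0 and 9\nlist(filter(lambda x: x % 2 == 0, range(10)))  # [0, 2, 4, 6, 8]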

Lambda functions are handy when you want to define small, single-use functions (for more information about them, we recommend reading the excellent Real Python tutorial by Andre Burgaud). In the 🤗 Datasets context, we can use lambda functions to define simple map and filter operations, so let’s use this trick to eliminate the None entries in our dataset:

drug_dataset = drug_dataset.filter(lambda x: x[\"condition\"] is not None)
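
It can be reassuring to check how many rows survived the filter. A quick way to do that (the exact counts will depend on the dataset version you downloaded) is:

# Number of rows per split after dropping the None conditions\nprint(drug_dataset.num_rows)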

With the None entries removed, we can normalize our condition column:

drug_dataset = drug_dataset.map(lowercase_condition)\n# Check that lowercasing worked\ndrug_dataset[\"train\"][\"condition\"][:3]
['left ventricular dysfunction', 'adhd', 'birth control']

It works! Now that we’ve cleaned up the labels, let’s take a look at cleaning up the reviews themselves.

Creating new columns

Whenever you’re dealing with customer reviews, a good practice is to check the number of words in each review. A review might be just a single word like “Great!” or a full-blown essay with thousands of words, and depending on the use case you’ll need to handle these extremes differently. To compute the number of words in each review, we’ll use a rough heuristic based on splitting each text by whitespace.

Let’s define a simple function that counts the number of words in each review:

def compute_review_length(example):\n    return {\"review_length\": len(example[\"review\"].split())}

Unlike our lowercase_condition() function, compute_review_length() returns a dictionary whose key does not correspond to one of the column names in the dataset. In this case, when compute_review_length() is passed to Dataset.map(), it will be applied to all the rows in the dataset to create a new review_length column:

drug_dataset = drug_dataset.map(compute_review_length)\n# Inspect the first training example\ndrug_dataset[\"train\"][0]
{'patient_id': 206461,\n 'drugName': 'Valsartan',\n 'condition': 'left ventricular dysfunction',\n 'review': '\"It has no side effect, I take it in combination of Bystolic 5 Mg and Fish Oil\"',\n 'rating': 9.0,\n 'date': 'May 20, 2012',\n 'usefulCount': 27,\n 'review_length': 17}

As expected, we can see that a review_length column has been added to our training set. We can sort the dataset by this new column with Dataset.sort() to see what the extreme values look like:

drug_dataset[\"train\"].sort(\"review_length\")[:3]
{'patient_id': [103488, 23627, 20558],\n 'drugName': ['Loestrin 21 1 / 20', 'Chlorzoxazone', 'Nucynta'],\n 'condition': ['birth control', 'muscle spasm', 'pain'],\n 'review': ['\"Excellent.\"', '\"useless\"', '\"ok\"'],\n 'rating': [10.0, 1.0, 6.0],\n 'date': ['November 4, 2008', 'March 24, 2017', 'August 20, 2016'],\n 'usefulCount': [5, 2, 10],\n 'review_length': [1, 1, 1]}

As we suspected, some reviews contain just a single word, which, although it may be okay for sentiment analysis, would not be informative if we want to predict the condition.
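
Out of curiosity, you could count how many of these ultra-short reviews there are — a rough sketch (the result isn’t used anywhere else) would be:

one_word_reviews = drug_dataset[\"train\"].filter(lambda x: x[\"review_length\"] == 1)\nprint(len(one_word_reviews))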

🙋 An alternative way to add new columns to a dataset is with the Dataset.add_column() function. This allows you to provide the column as a Python list or NumPy array and can be handy in situations where Dataset.map() is not well suited for your analysis.
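
For example, a minimal sketch of Dataset.add_column() might look like the following, where we attach a hypothetical review_chars column computed outside of Dataset.map():

# Compute a list with one value per row, then attach it as a new column\nreview_chars = [len(review) for review in drug_dataset[\"train\"][\"review\"]]\ntrain_with_chars = drug_dataset[\"train\"].add_column(\"review_chars\", review_chars)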

Let’s use the Dataset.filter() function to remove reviews that contain 30 words or fewer. Similarly to what we did with the condition column, we can filter out the very short reviews by requiring that their length is above this threshold:

drug_dataset = drug_dataset.filter(lambda x: x[\"review_length\"] > 30)\nprint(drug_dataset.num_rows)
{'train': 138514, 'test': 46108}

As you can see, this has removed around 15% of the reviews from our original training and test sets.

✏️ Try it out! Use the Dataset.sort() function to inspect the reviews with the largest numbers of words. See the documentation to find which argument you need to use to sort the reviews by length in descending order.
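
If you get stuck, one possible answer — assuming the version of 🤗 Datasets you have installed supports the reverse argument of Dataset.sort() — is:

# Longest reviews first\ndrug_dataset[\"train\"].sort(\"review_length\", reverse=True)[:3]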

The last thing we need to deal with is the presence of HTML character codes in our reviews. We can use Python’s html module to unescape these characters, like so:

import html\n\ntext = \"I&#039;m a transformer called BERT\"\nhtml.unescape(text)
\"I'm a transformer called BERT\"

We’ll use Dataset.map() to unescape all the HTML characters in our corpus:

drug_dataset = drug_dataset.map(lambda x: {\"review\": html.unescape(x[\"review\"])})

As you can see, the Dataset.map() method is quite useful for processing data — and we haven’t even scratched the surface of everything it can do!
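
In fact, nothing stops you from bundling several of the cleaning steps above into a single function and applying them in one pass. Here is a rough sketch of what that could look like (it assumes the None conditions have already been filtered out, as we did earlier):

def clean_example(example):\n    # Lowercase the condition, unescape the review, and recompute its word count in one go\n    return {\n        \"condition\": example[\"condition\"].lower(),\n        \"review\": html.unescape(example[\"review\"]),\n        \"review_length\": len(example[\"review\"].split()),\n    }\n\n\ndrug_dataset = drug_dataset.map(clean_example)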

The map() method's superpowers

The Dataset.map() method takes a batched argument that, if set to True, causes it to send a batch of examples to the map function at once (the batch size is configurable but defaults to 1,000). For instance, the previous map function that unescaped all the HTML took a bit of time to run (you can read the time taken from the progress bars). We can speed this up by processing several elements at the same time using a list comprehension.

When you specify batched=True the function receives a dictionary with the fields of the dataset, but each value is now a list of values, and not just a single value. The function you pass to Dataset.map() should return something of the same shape: a dictionary with the fields we want to update or add to our dataset, where each value is a list. For example, here is another way to unescape all HTML characters, but using batched=True:

new_drug_dataset = drug_dataset.map(\n    lambda x: {\"review\": [html.unescape(o) for o in x[\"review\"]]}, batched=True\n)

If you’re running this code in a notebook, you’ll see that this command executes way faster than the previous one. And it’s not because our reviews have already been HTML-unescaped — if you re-execute the instruction from the previous section (without batched=True), it will take the same amount of time as before. This is because list comprehensions are usually faster than executing the same code in a for loop, and we also gain some performance by accessing lots of elements at the same time instead of one by one.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:18.903Z"} {"title":"Big data? 🤗 Datasets to the rescue! - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt","markdown":"## [](#big-data-datasets-to-the-rescue)Big data? 🤗 Datasets to the rescue!\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-5-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter5/section4.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter5/section4.ipynb)\n\nNowadays it is not uncommon to find yourself working with multi-gigabyte datasets, especially if you’re planning to pretrain a transformer like BERT or GPT-2 from scratch. In these cases, even _loading_ the data can be a challenge. For example, the WebText corpus used to pretrain GPT-2 consists of over 8 million documents and 40 GB of text — loading this into your laptop’s RAM is likely to give it a heart attack!\n\nFortunately, 🤗 Datasets has been designed to overcome these limitations. It frees you from memory management problems by treating datasets as _memory-mapped_ files, and from hard drive limits by _streaming_ the entries in a corpus.\n\nIn this section we’ll explore these features of 🤗 Datasets with a huge 825 GB corpus known as [the Pile](https://pile.eleuther.ai/). Let’s get started!\n\n## [](#what-is-the-pile)What is the Pile?\n\nThe Pile is an English text corpus that was created by [EleutherAI](https://www.eleuther.ai/) for training large-scale language models. It includes a diverse range of datasets, spanning scientific articles, GitHub code repositories, and filtered web text. The training corpus is available in [14 GB chunks](https://the-eye.eu/public/AI/pile/), and you can also download several of the [individual components](https://the-eye.eu/public/AI/pile_preliminary_components/). 
Let’s start by taking a look at the PubMed Abstracts dataset, which is a corpus of abstracts from 15 million biomedical publications on [PubMed](https://pubmed.ncbi.nlm.nih.gov/). The dataset is in [JSON Lines format](https://jsonlines.org/) and is compressed using the `zstandard` library, so first we need to install that:\n\nNext, we can load the dataset using the method for remote files that we learned in [section 2](/course/chapter5/2):\n\n```\nfrom datasets import load_dataset\n\n\ndata_files = \"https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst\"\npubmed_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\npubmed_dataset```\n\n```\nDataset({\n features: ['meta', 'text'],\n num_rows: 15518009\n})```\n\nWe can see that there are 15,518,009 rows and 2 columns in our dataset — that’s a lot!\n\n✎ By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass `DownloadConfig(delete_extracted=True)` to the `download_config` argument of `load_dataset()`. See the [documentation](https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig) for more details.\n\nLet’s inspect the contents of the first example:\n\n```\n{'meta': {'pmid': 11409574, 'language': 'eng'},\n 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age ...'}```\n\nOkay, this looks like the abstract from a medical article. Now let’s see how much RAM we’ve used to load the dataset!\n\n## [](#the-magic-of-memory-mapping)The magic of memory mapping\n\nA simple way to measure memory usage in Python is with the [`psutil`](https://psutil.readthedocs.io/en/latest/) library, which can be installed with `pip` as follows:\n\nIt provides a `Process` class that allows us to check the memory usage of the current process as follows:\n\n```\nimport psutil\n\n\nprint(f\"RAM used: {psutil.Process().memory_info().rss / (1024 * 1024):.2f} MB\")```\n\nHere the `rss` attribute refers to the _resident set size_, which is the fraction of memory that a process occupies in RAM. This measurement also includes the memory used by the Python interpreter and the libraries we’ve loaded, so the actual amount of memory used to load the dataset is a bit smaller. For comparison, let’s see how large the dataset is on disk, using the `dataset_size` attribute. Since the result is expressed in bytes like before, we need to manually convert it to gigabytes:\n\n```\nprint(f\"Number of files in dataset : {pubmed_dataset.dataset_size}\")\nsize_gb = pubmed_dataset.dataset_size / (1024**3)\nprint(f\"Dataset size (cache file) : {size_gb:.2f} GB\")```\n\n```\nNumber of files in dataset : 20979437051\nDataset size (cache file) : 19.54 GB```\n\nNice — despite it being almost 20 GB large, we’re able to load and access the dataset with much less RAM!\n\n✏️ **Try it out!** Pick one of the [subsets](https://the-eye.eu/public/AI/pile_preliminary_components/) from the Pile that is larger than your laptop or desktop’s RAM, load it with 🤗 Datasets, and measure the amount of RAM used. Note that to get an accurate measurement, you’ll want to do this in a new process. 
You can find the decompressed sizes of each subset in Table 1 of [the Pile paper](https://arxiv.org/abs/2101.00027).\n\nIf you’re familiar with Pandas, this result might come as a surprise because of Wes Kinney’s famous [rule of thumb](https://wesmckinney.com/blog/apache-arrow-pandas-internals/) that you typically need 5 to 10 times as much RAM as the size of your dataset. So how does 🤗 Datasets solve this memory management problem? 🤗 Datasets treats each dataset as a [memory-mapped file](https://en.wikipedia.org/wiki/Memory-mapped_file), which provides a mapping between RAM and filesystem storage that allows the library to access and operate on elements of the dataset without needing to fully load it into memory.\n\nMemory-mapped files can also be shared across multiple processes, which enables methods like `Dataset.map()` to be parallelized without needing to move or copy the dataset. Under the hood, these capabilities are all realized by the [Apache Arrow](https://arrow.apache.org/) memory format and [`pyarrow`](https://arrow.apache.org/docs/python/index.html) library, which make the data loading and processing lightning fast. (For more details about Apache Arrow and comparisons to Pandas, check out [Dejan Simic’s blog post](https://towardsdatascience.com/apache-arrow-read-dataframe-with-zero-memory-69634092b1a).) To see this in action, let’s run a little speed test by iterating over all the elements in the PubMed Abstracts dataset:\n\n```\nimport timeit\n\ncode_snippet = \"\"\"batch_size = 1000\n\nfor idx in range(0, len(pubmed_dataset), batch_size):\n _ = pubmed_dataset[idx:idx + batch_size]\n\"\"\"\n\ntime = timeit.timeit(stmt=code_snippet, number=1, globals=globals())\nprint(\n f\"Iterated over {len(pubmed_dataset)} examples (about {size_gb:.1f} GB) in \"\n f\"{time:.1f}s, i.e. {size_gb/time:.3f} GB/s\"\n)```\n\n```\n'Iterated over 15518009 examples (about 19.5 GB) in 64.2s, i.e. 0.304 GB/s'```\n\nHere we’ve used Python’s `timeit` module to measure the execution time taken by `code_snippet`. You’ll typically be able to iterate over a dataset at speed of a few tenths of a GB/s to several GB/s. This works great for the vast majority of applications, but sometimes you’ll have to work with a dataset that is too large to even store on your laptop’s hard drive. For example, if we tried to download the Pile in its entirety, we’d need 825 GB of free disk space! To handle these cases, 🤗 Datasets provides a streaming feature that allows us to download and access elements on the fly, without needing to download the whole dataset. Let’s take a look at how this works.\n\n💡 In Jupyter notebooks you can also time cells using the [`%%timeit` magic function](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit).\n\n## [](#streaming-datasets)Streaming datasets\n\nTo enable dataset streaming you just need to pass the `streaming=True` argument to the `load_dataset()` function. For example, let’s load the PubMed Abstracts dataset again, but in streaming mode:\n\n```\npubmed_dataset_streamed = load_dataset(\n \"json\", data_files=data_files, split=\"train\", streaming=True\n)```\n\nInstead of the familiar `Dataset` that we’ve encountered elsewhere in this chapter, the object returned with `streaming=True` is an `IterableDataset`. As the name suggests, to access the elements of an `IterableDataset` we need to iterate over it. 
We can access the first element of our streamed dataset as follows:\n\n```\nnext(iter(pubmed_dataset_streamed))```\n\n```\n{'meta': {'pmid': 11409574, 'language': 'eng'},\n 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age ...'}```\n\nThe elements from a streamed dataset can be processed on the fly using `IterableDataset.map()`, which is useful during training if you need to tokenize the inputs. The process is exactly the same as the one we used to tokenize our dataset in [Chapter 3](/course/chapter3), with the only difference being that outputs are returned one by one:\n\n```\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\ntokenized_dataset = pubmed_dataset_streamed.map(lambda x: tokenizer(x[\"text\"]))\nnext(iter(tokenized_dataset))```\n\n```\n{'input_ids': [101, 4958, 5178, 4328, 6779, ...], 'attention_mask': [1, 1, 1, 1, 1, ...]}```\n\n💡 To speed up tokenization with streaming you can pass `batched=True`, as we saw in the last section. It will process the examples batch by batch; the default batch size is 1,000 and can be specified with the `batch_size` argument.\n\nYou can also shuffle a streamed dataset using `IterableDataset.shuffle()`, but unlike `Dataset.shuffle()` this only shuffles the elements in a predefined `buffer_size`:\n\n```\nshuffled_dataset = pubmed_dataset_streamed.shuffle(buffer_size=10_000, seed=42)\nnext(iter(shuffled_dataset))```\n\n```\n{'meta': {'pmid': 11410799, 'language': 'eng'},\n 'text': 'Randomized study of dose or schedule modification of granulocyte colony-stimulating factor in platinum-based chemotherapy for elderly patients with lung cancer ...'}```\n\nIn this example, we selected a random example from the first 10,000 examples in the buffer. Once an example is accessed, its spot in the buffer is filled with the next example in the corpus (i.e., the 10,001st example in the case above). You can also select elements from a streamed dataset using the `IterableDataset.take()` and `IterableDataset.skip()` functions, which act in a similar way to `Dataset.select()`. 
For example, to select the first 5 examples in the PubMed Abstracts dataset we can do the following:\n\n```\ndataset_head = pubmed_dataset_streamed.take(5)\nlist(dataset_head)```\n\n```\n[{'meta': {'pmid': 11409574, 'language': 'eng'},\n 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection ...'},\n {'meta': {'pmid': 11409575, 'language': 'eng'},\n 'text': 'Clinical signs of hypoxaemia in children with acute lower respiratory infection: indicators of oxygen therapy ...'},\n {'meta': {'pmid': 11409576, 'language': 'eng'},\n 'text': \"Hypoxaemia in children with severe pneumonia in Papua New Guinea ...\"},\n {'meta': {'pmid': 11409577, 'language': 'eng'},\n 'text': 'Oxygen concentrators and cylinders ...'},\n {'meta': {'pmid': 11409578, 'language': 'eng'},\n 'text': 'Oxygen supply in rural africa: a personal experience ...'}]```\n\nSimilarly, you can use the `IterableDataset.skip()` function to create training and validation splits from a shuffled dataset as follows:\n\n```\ntrain_dataset = shuffled_dataset.skip(1000)\n\nvalidation_dataset = shuffled_dataset.take(1000)```\n\nLet’s round out our exploration of dataset streaming with a common application: combining multiple datasets together to create a single corpus. 🤗 Datasets provides an `interleave_datasets()` function that converts a list of `IterableDataset` objects into a single `IterableDataset`, where the elements of the new dataset are obtained by alternating among the source examples. This function is especially useful when you’re trying to combine large datasets, so as an example let’s stream the FreeLaw subset of the Pile, which is a 51 GB dataset of legal opinions from US courts:\n\n```\nlaw_dataset_streamed = load_dataset(\n \"json\",\n data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\n split=\"train\",\n streaming=True,\n)\nnext(iter(law_dataset_streamed))```\n\n```\n{'meta': {'case_ID': '110921.json',\n 'case_jurisdiction': 'scotus.tar.gz',\n 'date_created': '2010-04-28T17:12:49Z'},\n 'text': '\\n461 U.S. 238 (1983)\\nOLIM ET AL.\\nv.\\nWAKINEKONA\\nNo. 81-1581.\\nSupreme Court of United States.\\nArgued January 19, 1983.\\nDecided April 26, 1983.\\nCERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE NINTH CIRCUIT\\n*239 Michael A. Lilly, First Deputy Attorney General of Hawaii, argued the cause for petitioners. With him on the brief was James H. Dannenberg, Deputy Attorney General...'}```\n\nThis dataset is large enough to stress the RAM of most laptops, yet we’ve been able to load and access it without breaking a sweat! Let’s now combine the examples from the FreeLaw and PubMed Abstracts datasets with the `interleave_datasets()` function:\n\n```\nfrom itertools import islice\nfrom datasets import interleave_datasets\n\ncombined_dataset = interleave_datasets([pubmed_dataset_streamed, law_dataset_streamed])\nlist(islice(combined_dataset, 2))```\n\n```\n[{'meta': {'pmid': 11409574, 'language': 'eng'},\n 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection ...'},\n {'meta': {'case_ID': '110921.json',\n 'case_jurisdiction': 'scotus.tar.gz',\n 'date_created': '2010-04-28T17:12:49Z'},\n 'text': '\\n461 U.S. 238 (1983)\\nOLIM ET AL.\\nv.\\nWAKINEKONA\\nNo. 81-1581.\\nSupreme Court of United States.\\nArgued January 19, 1983.\\nDecided April 26, 1983.\\nCERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE NINTH CIRCUIT\\n*239 Michael A. 
Lilly, First Deputy Attorney General of Hawaii, argued the cause for petitioners. With him on the brief was James H. Dannenberg, Deputy Attorney General...'}]```\n\nHere we’ve used the `islice()` function from Python’s `itertools` module to select the first two examples from the combined dataset, and we can see that they match the first examples from each of the two source datasets.\n\nFinally, if you want to stream the Pile in its 825 GB entirety, you can grab all the prepared files as follows:\n\n```\nbase_url = \"https://the-eye.eu/public/AI/pile/\"\ndata_files = {\n \"train\": [base_url + \"train/\" + f\"{idx:02d}.jsonl.zst\" for idx in range(30)],\n \"validation\": base_url + \"val.jsonl.zst\",\n \"test\": base_url + \"test.jsonl.zst\",\n}\npile_dataset = load_dataset(\"json\", data_files=data_files, streaming=True)\nnext(iter(pile_dataset[\"train\"]))```\n\n```\n{'meta': {'pile_set_name': 'Pile-CC'},\n 'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web...'}```\n\n✏️ **Try it out!** Use one of the large Common Crawl corpora like [`mc4`](https://huggingface.co/datasets/mc4) or [`oscar`](https://huggingface.co/datasets/oscar) to create a streaming multilingual dataset that represents the spoken proportions of languages in a country of your choice. For example, the four national languages in Switzerland are German, French, Italian, and Romansh, so you could try creating a Swiss corpus by sampling the Oscar subsets according to their spoken proportion.\n\nYou now have all the tools you need to load and process datasets of all shapes and sizes — but unless you’re exceptionally lucky, there will come a point in your NLP journey where you’ll have to actually create a dataset to solve the problem at hand. That’s the topic of the next section!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tBig data? 🤗 Datasets to the rescue! - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
Big data? 🤗 Datasets to the rescue!


Nowadays it is not uncommon to find yourself working with multi-gigabyte datasets, especially if you’re planning to pretrain a transformer like BERT or GPT-2 from scratch. In these cases, even loading the data can be a challenge. For example, the WebText corpus used to pretrain GPT-2 consists of over 8 million documents and 40 GB of text — loading this into your laptop’s RAM is likely to give it a heart attack!

Fortunately, 🤗 Datasets has been designed to overcome these limitations. It frees you from memory management problems by treating datasets as memory-mapped files, and from hard drive limits by streaming the entries in a corpus.

In this section we’ll explore these features of 🤗 Datasets with a huge 825 GB corpus known as the Pile. Let’s get started!

What is the Pile?

The Pile is an English text corpus that was created by EleutherAI for training large-scale language models. It includes a diverse range of datasets, spanning scientific articles, GitHub code repositories, and filtered web text. The training corpus is available in 14 GB chunks, and you can also download several of the individual components. Let’s start by taking a look at the PubMed Abstracts dataset, which is a corpus of abstracts from 15 million biomedical publications on PubMed. The dataset is in JSON Lines format and is compressed using the zstandard library, so first we need to install that:

!pip install zstandard

Next, we can load the dataset using the method for remote files that we learned in section 2:

from datasets import load_dataset

# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
Dataset({\n    features: ['meta', 'text'],\n    num_rows: 15518009\n})

We can see that there are 15,518,009 rows and 2 columns in our dataset — that’s a lot!

✎ By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass DownloadConfig(delete_extracted=True) to the download_config argument of load_dataset(). See the documentation for more details.
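For instance, here is a minimal sketch of that tip, reusing the data_files URL defined above; DownloadConfig is imported directly from 🤗 Datasets:

from datasets import DownloadConfig, load_dataset

# Delete the decompressed files once the dataset has been prepared
pubmed_dataset = load_dataset(
    "json",
    data_files=data_files,
    split="train",
    download_config=DownloadConfig(delete_extracted=True),
)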

Let’s inspect the contents of the first example:

pubmed_dataset[0]
{'meta': {'pmid': 11409574, 'language': 'eng'},\n 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age ...'}

Okay, this looks like the abstract from a medical article. Now let’s see how much RAM we’ve used to load the dataset!

The magic of memory mapping

A simple way to measure memory usage in Python is with the psutil library, which can be installed with pip as follows:

!pip install psutil

It provides a Process class that allows us to check the memory usage of the current process as follows:

import psutil\n\n# Process.memory_info is expressed in bytes, so convert to megabytes\nprint(f\"RAM used: {psutil.Process().memory_info().rss / (1024 * 1024):.2f} MB\")
RAM used: 5678.33 MB

Here the rss attribute refers to the resident set size, which is the fraction of memory that a process occupies in RAM. This measurement also includes the memory used by the Python interpreter and the libraries we’ve loaded, so the actual amount of memory used to load the dataset is a bit smaller. For comparison, let’s see how large the dataset is on disk, using the dataset_size attribute. Since the result is expressed in bytes like before, we need to manually convert it to gigabytes:

print(f"Dataset size in bytes : {pubmed_dataset.dataset_size}")
size_gb = pubmed_dataset.dataset_size / (1024**3)
print(f"Dataset size (cache file) : {size_gb:.2f} GB")

Dataset size in bytes : 20979437051
Dataset size (cache file) : 19.54 GB

Nice — despite it being almost 20 GB large, we’re able to load and access the dataset with much less RAM!
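As a quick sanity check, you can compare the two numbers we just measured directly, reusing the psutil measurement and the size_gb value from above:

# Note: rss also includes the interpreter and imported libraries, so this is an upper bound
rss_gb = psutil.Process().memory_info().rss / (1024**3)
print(f"RAM used is only about {rss_gb / size_gb:.0%} of the dataset's size on disk")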

✏️ Try it out! Pick one of the subsets from the Pile that is larger than your laptop or desktop’s RAM, load it with 🤗 Datasets, and measure the amount of RAM used. Note that to get an accurate measurement, you’ll want to do this in a new process. You can find the decompressed sizes of each subset in Table 1 of the Pile paper.

If you’re familiar with Pandas, this result might come as a surprise because of Wes McKinney’s famous rule of thumb that you typically need 5 to 10 times as much RAM as the size of your dataset. So how does 🤗 Datasets solve this memory management problem? 🤗 Datasets treats each dataset as a memory-mapped file, which provides a mapping between RAM and filesystem storage that allows the library to access and operate on elements of the dataset without needing to fully load it into memory.
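To see where that memory-mapped file actually lives, you can inspect the dataset's cache_files attribute, which lists the Arrow cache file(s) backing the dataset (the exact path will depend on your machine):

# Each entry points to an Arrow file on disk that is memory-mapped rather than loaded into RAM
pubmed_dataset.cache_files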

Memory-mapped files can also be shared across multiple processes, which enables methods like Dataset.map() to be parallelized without needing to move or copy the dataset. Under the hood, these capabilities are all realized by the Apache Arrow memory format and pyarrow library, which make the data loading and processing lightning fast. (For more details about Apache Arrow and comparisons to Pandas, check out Dejan Simic’s blog post.) To see this in action, let’s run a little speed test by iterating over all the elements in the PubMed Abstracts dataset:

import timeit

code_snippet = """batch_size = 1000

for idx in range(0, len(pubmed_dataset), batch_size):
    _ = pubmed_dataset[idx:idx + batch_size]
"""

time = timeit.timeit(stmt=code_snippet, number=1, globals=globals())
print(
    f"Iterated over {len(pubmed_dataset)} examples (about {size_gb:.1f} GB) in "
    f"{time:.1f}s, i.e. {size_gb/time:.3f} GB/s"
)
'Iterated over 15518009 examples (about 19.5 GB) in 64.2s, i.e. 0.304 GB/s'

Here we’ve used Python’s timeit module to measure the execution time taken by code_snippet. You’ll typically be able to iterate over a dataset at a speed of a few tenths of a GB/s to several GB/s. This works great for the vast majority of applications, but sometimes you’ll have to work with a dataset that is too large to even store on your laptop’s hard drive. For example, if we tried to download the Pile in its entirety, we’d need 825 GB of free disk space! To handle these cases, 🤗 Datasets provides a streaming feature that allows us to download and access elements on the fly, without needing to download the whole dataset. Let’s take a look at how this works.

💡 In Jupyter notebooks you can also time cells using the %%timeit magic function.
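For example, the loop above could be timed in its own notebook cell like this (the -n1 -r1 flags make the magic run the cell just once):

%%timeit -n1 -r1
batch_size = 1000

for idx in range(0, len(pubmed_dataset), batch_size):
    _ = pubmed_dataset[idx : idx + batch_size]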

Streaming datasets

To enable dataset streaming you just need to pass the streaming=True argument to the load_dataset() function. For example, let’s load the PubMed Abstracts dataset again, but in streaming mode:

pubmed_dataset_streamed = load_dataset(\n    \"json\", data_files=data_files, split=\"train\", streaming=True\n)

Instead of the familiar Dataset that we’ve encountered elsewhere in this chapter, the object returned with streaming=True is an IterableDataset. As the name suggests, to access the elements of an IterableDataset we need to iterate over it. We can access the first element of our streamed dataset as follows:

next(iter(pubmed_dataset_streamed))
{'meta': {'pmid': 11409574, 'language': 'eng'},\n 'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age ...'}

The elements from a streamed dataset can be processed on the fly using IterableDataset.map(), which is useful during training if you need to tokenize the inputs. The process is exactly the same as the one we used to tokenize our dataset in Chapter 3, with the only difference being that outputs are returned one by one:

from transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\ntokenized_dataset = pubmed_dataset_streamed.map(lambda x: tokenizer(x[\"text\"]))\nnext(iter(tokenized_dataset))
{'input_ids': [101, 4958, 5178, 4328, 6779, ...], 'attention_mask': [1, 1, 1, 1, 1, ...]}

💡 To speed up tokenization with streaming you can pass batched=True, as we saw in the last section. It will process the examples batch by batch; the default batch size is 1,000 and can be specified with the batch_size argument.
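For example, the streamed tokenization above can be batched as follows (the batch_size argument is passed explicitly here just to make the default visible):

tokenized_dataset = pubmed_dataset_streamed.map(
    lambda x: tokenizer(x["text"]), batched=True, batch_size=1_000
)
next(iter(tokenized_dataset))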

You can also shuffle a streamed dataset using IterableDataset.shuffle(), but unlike Dataset.shuffle() this only shuffles the elements in a predefined buffer_size:

shuffled_dataset = pubmed_dataset_streamed.shuffle(buffer_size=10_000, seed=42)\nnext(iter(shuffled_dataset))
{'meta': {'pmid': 11410799, 'language': 'eng'},\n 'text': 'Randomized study of dose or schedule modification of granulocyte colony-stimulating factor in platinum-based chemotherapy for elderly patients with lung cancer ...'}

In this example, we selected a random example from the first 10,000 examples in the buffer. Once an example is accessed, its spot in the buffer is filled with the next example in the corpus (i.e., the 10,001st example in the case above). You can also select elements from a streamed dataset using the IterableDataset.take() and IterableDataset.skip() functions, which act in a similar way to Dataset.select(). For example, to select the first 5 examples in the PubMed Abstracts dataset we can do the following:

dataset_head = pubmed_dataset_streamed.take(5)\nlist(dataset_head)
[{'meta': {'pmid': 11409574, 'language': 'eng'},\n  'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection ...'},\n {'meta': {'pmid': 11409575, 'language': 'eng'},\n  'text': 'Clinical signs of hypoxaemia in children with acute lower respiratory infection: indicators of oxygen therapy ...'},\n {'meta': {'pmid': 11409576, 'language': 'eng'},\n  'text': \"Hypoxaemia in children with severe pneumonia in Papua New Guinea ...\"},\n {'meta': {'pmid': 11409577, 'language': 'eng'},\n  'text': 'Oxygen concentrators and cylinders ...'},\n {'meta': {'pmid': 11409578, 'language': 'eng'},\n  'text': 'Oxygen supply in rural africa: a personal experience ...'}]

Similarly, you can use the IterableDataset.skip() function to create training and validation splits from a shuffled dataset as follows:

# Skip the first 1,000 examples and include the rest in the training set\ntrain_dataset = shuffled_dataset.skip(1000)\n# Take the first 1,000 examples for the validation set\nvalidation_dataset = shuffled_dataset.take(1000)

Let’s round out our exploration of dataset streaming with a common application: combining multiple datasets together to create a single corpus. 🤗 Datasets provides an interleave_datasets() function that converts a list of IterableDataset objects into a single IterableDataset, where the elements of the new dataset are obtained by alternating among the source examples. This function is especially useful when you’re trying to combine large datasets, so as an example let’s stream the FreeLaw subset of the Pile, which is a 51 GB dataset of legal opinions from US courts:

law_dataset_streamed = load_dataset(\n    \"json\",\n    data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\n    split=\"train\",\n    streaming=True,\n)\nnext(iter(law_dataset_streamed))
{'meta': {'case_ID': '110921.json',\n  'case_jurisdiction': 'scotus.tar.gz',\n  'date_created': '2010-04-28T17:12:49Z'},\n 'text': '\\n461 U.S. 238 (1983)\\nOLIM ET AL.\\nv.\\nWAKINEKONA\\nNo. 81-1581.\\nSupreme Court of United States.\\nArgued January 19, 1983.\\nDecided April 26, 1983.\\nCERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE NINTH CIRCUIT\\n*239 Michael A. Lilly, First Deputy Attorney General of Hawaii, argued the cause for petitioners. With him on the brief was James H. Dannenberg, Deputy Attorney General...'}

This dataset is large enough to stress the RAM of most laptops, yet we’ve been able to load and access it without breaking a sweat! Let’s now combine the examples from the FreeLaw and PubMed Abstracts datasets with the interleave_datasets() function:

from itertools import islice\nfrom datasets import interleave_datasets\n\ncombined_dataset = interleave_datasets([pubmed_dataset_streamed, law_dataset_streamed])\nlist(islice(combined_dataset, 2))
[{'meta': {'pmid': 11409574, 'language': 'eng'},\n  'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection ...'},\n {'meta': {'case_ID': '110921.json',\n   'case_jurisdiction': 'scotus.tar.gz',\n   'date_created': '2010-04-28T17:12:49Z'},\n  'text': '\\n461 U.S. 238 (1983)\\nOLIM ET AL.\\nv.\\nWAKINEKONA\\nNo. 81-1581.\\nSupreme Court of United States.\\nArgued January 19, 1983.\\nDecided April 26, 1983.\\nCERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE NINTH CIRCUIT\\n*239 Michael A. Lilly, First Deputy Attorney General of Hawaii, argued the cause for petitioners. With him on the brief was James H. Dannenberg, Deputy Attorney General...'}]

Here we’ve used the islice() function from Python’s itertools module to select the first two examples from the combined dataset, and we can see that they match the first examples from each of the two source datasets.
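By default, interleave_datasets() simply alternates between the sources. Depending on your version of 🤗 Datasets, you can also pass sampling probabilities and a seed to control the mixture, for example:

# Sample roughly 80% of examples from PubMed Abstracts and 20% from FreeLaw
combined_dataset = interleave_datasets(
    [pubmed_dataset_streamed, law_dataset_streamed], probabilities=[0.8, 0.2], seed=42
)
next(iter(combined_dataset))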

Finally, if you want to stream the Pile in its 825 GB entirety, you can grab all the prepared files as follows:

base_url = \"https://the-eye.eu/public/AI/pile/\"\ndata_files = {\n    \"train\": [base_url + \"train/\" + f\"{idx:02d}.jsonl.zst\" for idx in range(30)],\n    \"validation\": base_url + \"val.jsonl.zst\",\n    \"test\": base_url + \"test.jsonl.zst\",\n}\npile_dataset = load_dataset(\"json\", data_files=data_files, streaming=True)\nnext(iter(pile_dataset[\"train\"]))
{'meta': {'pile_set_name': 'Pile-CC'},\n 'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web...'}

✏️ Try it out! Use one of the large Common Crawl corpora like mc4 or oscar to create a streaming multilingual dataset that represents the spoken proportions of languages in a country of your choice. For example, the four national languages in Switzerland are German, French, Italian, and Romansh, so you could try creating a Swiss corpus by sampling the Oscar subsets according to their spoken proportion.
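To get you started, here is a rough sketch of what such a corpus could look like. The OSCAR config names and the spoken-language proportions below are illustrative assumptions, so check the dataset card and adjust them for your chosen country:

from datasets import interleave_datasets, load_dataset

languages = ["de", "fr", "it", "rm"]
proportions = [0.65, 0.23, 0.08, 0.04]  # rough guesses, not official statistics

oscar_streams = [
    load_dataset("oscar", f"unshuffled_deduplicated_{lang}", split="train", streaming=True)
    for lang in languages
]
swiss_corpus = interleave_datasets(oscar_streams, probabilities=proportions, seed=42)
next(iter(swiss_corpus))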

You now have all the tools you need to load and process datasets of all shapes and sizes — but unless you’re exceptionally lucky, there will come a point in your NLP journey where you’ll have to actually create a dataset to solve the problem at hand. That’s the topic of the next section!

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:19.028Z"} {"title":"Creating your own dataset - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter5/5?fw=pt","markdown":"## [](#creating-your-own-dataset)Creating your own dataset\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-5-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter5/section5.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter5/section5.ipynb)\n\nSometimes the dataset that you need to build an NLP application doesn’t exist, so you’ll need to create it yourself. In this section we’ll show you how to create a corpus of [GitHub issues](https://github.com/features/issues/), which are commonly used to track bugs or features in GitHub repositories. This corpus could be used for various purposes, including:\n\n- Exploring how long it takes to close open issues or pull requests\n- Training a _multilabel classifier_ that can tag issues with metadata based on the issue’s description (e.g., “bug,” “enhancement,” or “question”)\n- Creating a semantic search engine to find which issues match a user’s query\n\nHere we’ll focus on creating the corpus, and in the next section we’ll tackle the semantic search application. To keep things meta, we’ll use the GitHub issues associated with a popular open source project: 🤗 Datasets! Let’s take a look at how to get the data and explore the information contained in these issues.\n\n## [](#getting-the-data)Getting the data\n\nYou can find all the issues in 🤗 Datasets by navigating to the repository’s [Issues tab](https://github.com/huggingface/datasets/issues). 
As shown in the following screenshot, at the time of writing there were 331 open issues and 668 closed ones.\n\n![The GitHub issues associated with 🤗 Datasets.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter5/datasets-issues.png)\n\nIf you click on one of these issues you’ll find it contains a title, a description, and a set of labels that characterize the issue. An example is shown in the screenshot below.\n\n![A typical GitHub issue in the 🤗 Datasets repository.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter5/datasets-issues-single.png)\n\nTo download all the repository’s issues, we’ll use the [GitHub REST API](https://docs.github.com/en/rest) to poll the [`Issues` endpoint](https://docs.github.com/en/rest/reference/issues#list-repository-issues). This endpoint returns a list of JSON objects, with each object containing a large number of fields that include the title and description as well as metadata about the status of the issue and so on.\n\nA convenient way to download the issues is via the `requests` library, which is the standard way for making HTTP requests in Python. You can install the library by running:\n\nOnce the library is installed, you can make GET requests to the `Issues` endpoint by invoking the `requests.get()` function. For example, you can run the following command to retrieve the first issue on the first page:\n\n```\nimport requests\n\nurl = \"https://api.github.com/repos/huggingface/datasets/issues?page=1&per_page=1\"\nresponse = requests.get(url)```\n\nThe `response` object contains a lot of useful information about the request, including the HTTP status code:\n\nwhere a `200` status means the request was successful (you can find a list of possible HTTP status codes [here](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes)). What we are really interested in, though, is the _payload_, which can be accessed in various formats like bytes, strings, or JSON. 
Since we know our issues are in JSON format, let’s inspect the payload as follows:\n\n```\n[{'url': 'https://api.github.com/repos/huggingface/datasets/issues/2792',\n 'repository_url': 'https://api.github.com/repos/huggingface/datasets',\n 'labels_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792/labels{/name}',\n 'comments_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792/comments',\n 'events_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792/events',\n 'html_url': 'https://github.com/huggingface/datasets/pull/2792',\n 'id': 968650274,\n 'node_id': 'MDExOlB1bGxSZXF1ZXN0NzEwNzUyMjc0',\n 'number': 2792,\n 'title': 'Update GooAQ',\n 'user': {'login': 'bhavitvyamalik',\n 'id': 19718818,\n 'node_id': 'MDQ6VXNlcjE5NzE4ODE4',\n 'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4',\n 'gravatar_id': '',\n 'url': 'https://api.github.com/users/bhavitvyamalik',\n 'html_url': 'https://github.com/bhavitvyamalik',\n 'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers',\n 'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}',\n 'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}',\n 'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}',\n 'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions',\n 'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs',\n 'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos',\n 'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}',\n 'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events',\n 'type': 'User',\n 'site_admin': False},\n 'labels': [],\n 'state': 'open',\n 'locked': False,\n 'assignee': None,\n 'assignees': [],\n 'milestone': None,\n 'comments': 1,\n 'created_at': '2021-08-12T11:40:18Z',\n 'updated_at': '2021-08-12T12:31:17Z',\n 'closed_at': None,\n 'author_association': 'CONTRIBUTOR',\n 'active_lock_reason': None,\n 'pull_request': {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/2792',\n 'html_url': 'https://github.com/huggingface/datasets/pull/2792',\n 'diff_url': 'https://github.com/huggingface/datasets/pull/2792.diff',\n 'patch_url': 'https://github.com/huggingface/datasets/pull/2792.patch'},\n 'body': '[GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well.',\n 'performed_via_github_app': None}]```\n\nWhoa, that’s a lot of information! We can see useful fields like `title`, `body`, and `number` that describe the issue, as well as information about the GitHub user who opened the issue.\n\n✏️ **Try it out!** Click on a few of the URLs in the JSON payload above to get a feel for what type of information each GitHub issue is linked to.\n\nAs described in the GitHub [documentation](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting), unauthenticated requests are limited to 60 requests per hour. Although you can increase the `per_page` query parameter to reduce the number of requests you make, you will still hit the rate limit on any repository that has more than a few thousand issues. 
So instead, you should follow GitHub’s [instructions](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) on creating a _personal access token_ so that you can boost the rate limit to 5,000 requests per hour. Once you have your token, you can include it as part of the request header:\n\n```\nGITHUB_TOKEN = xxx \nheaders = {\"Authorization\": f\"token {GITHUB_TOKEN}\"}```\n\n⚠️ Do not share a notebook with your `GITHUB_TOKEN` pasted in it. We recommend you delete the last cell once you have executed it to avoid leaking this information accidentally. Even better, store the token in a _.env_ file and use the [`python-dotenv` library](https://github.com/theskumar/python-dotenv) to load it automatically for you as an environment variable.\n\nNow that we have our access token, let’s create a function that can download all the issues from a GitHub repository:\n\n```\nimport time\nimport math\nfrom pathlib import Path\nimport pandas as pd\nfrom tqdm.notebook import tqdm\n\n\ndef fetch_issues(\n owner=\"huggingface\",\n repo=\"datasets\",\n num_issues=10_000,\n rate_limit=5_000,\n issues_path=Path(\".\"),\n):\n if not issues_path.is_dir():\n issues_path.mkdir(exist_ok=True)\n\n batch = []\n all_issues = []\n per_page = 100 \n num_pages = math.ceil(num_issues / per_page)\n base_url = \"https://api.github.com/repos\"\n\n for page in tqdm(range(num_pages)):\n \n query = f\"issues?page={page}&per_page={per_page}&state=all\"\n issues = requests.get(f\"{base_url}/{owner}/{repo}/{query}\", headers=headers)\n batch.extend(issues.json())\n\n if len(batch) > rate_limit and len(all_issues) < num_issues:\n all_issues.extend(batch)\n batch = [] \n print(f\"Reached GitHub rate limit. Sleeping for one hour ...\")\n time.sleep(60 * 60 + 1)\n\n all_issues.extend(batch)\n df = pd.DataFrame.from_records(all_issues)\n df.to_json(f\"{issues_path}/{repo}-issues.jsonl\", orient=\"records\", lines=True)\n print(\n f\"Downloaded all the issues for {repo}! Dataset stored at {issues_path}/{repo}-issues.jsonl\"\n )```\n\nNow when we call `fetch_issues()` it will download all the issues in batches to avoid exceeding GitHub’s limit on the number of requests per hour; the result will be stored in a _repository\\_name-issues.jsonl_ file, where each line is a JSON object the represents an issue. Let’s use this function to grab all the issues from 🤗 Datasets:\n\nOnce the issues are downloaded we can load them locally using our newfound skills from [section 2](/course/chapter5/2):\n\n```\nissues_dataset = load_dataset(\"json\", data_files=\"datasets-issues.jsonl\", split=\"train\")\nissues_dataset```\n\n```\nDataset({\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app'],\n num_rows: 3019\n})```\n\nGreat, we’ve created our first dataset from scratch! But why are there several thousand issues when the [Issues tab](https://github.com/huggingface/datasets/issues) of the 🤗 Datasets repository only shows around 1,000 issues in total 🤔? 
As described in the GitHub [documentation](https://docs.github.com/en/rest/reference/issues#list-issues-assigned-to-the-authenticated-user), that’s because we’ve downloaded all the pull requests as well:\n\n> GitHub’s REST API v3 considers every pull request an issue, but not every issue is a pull request. For this reason, “Issues” endpoints may return both issues and pull requests in the response. You can identify pull requests by the `pull_request` key. Be aware that the `id` of a pull request returned from “Issues” endpoints will be an issue id.\n\nSince the contents of issues and pull requests are quite different, let’s do some minor preprocessing to enable us to distinguish between them.\n\n## [](#cleaning-up-the-data)Cleaning up the data\n\nThe above snippet from GitHub’s documentation tells us that the `pull_request` column can be used to differentiate between issues and pull requests. Let’s look at a random sample to see what the difference is. As we did in [section 3](/course/chapter5/3), we’ll chain `Dataset.shuffle()` and `Dataset.select()` to create a random sample and then zip the `html_url` and `pull_request` columns so we can compare the various URLs:\n\n```\nsample = issues_dataset.shuffle(seed=666).select(range(3))\n\n\nfor url, pr in zip(sample[\"html_url\"], sample[\"pull_request\"]):\n print(f\">> URL: {url}\")\n print(f\">> Pull request: {pr}\\n\")```\n\n```\n>> URL: https://github.com/huggingface/datasets/pull/850\n>> Pull request: {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/850', 'html_url': 'https://github.com/huggingface/datasets/pull/850', 'diff_url': 'https://github.com/huggingface/datasets/pull/850.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/850.patch'}\n\n>> URL: https://github.com/huggingface/datasets/issues/2773\n>> Pull request: None\n\n>> URL: https://github.com/huggingface/datasets/pull/783\n>> Pull request: {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/783', 'html_url': 'https://github.com/huggingface/datasets/pull/783', 'diff_url': 'https://github.com/huggingface/datasets/pull/783.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/783.patch'}```\n\nHere we can see that each pull request is associated with various URLs, while ordinary issues have a `None` entry. We can use this distinction to create a new `is_pull_request` column that checks whether the `pull_request` field is `None` or not:\n\n```\nissues_dataset = issues_dataset.map(\n lambda x: {\"is_pull_request\": False if x[\"pull_request\"] is None else True}\n)```\n\n✏️ **Try it out!** Calculate the average time it takes to close issues in 🤗 Datasets. You may find the `Dataset.filter()` function useful to filter out the pull requests and open issues, and you can use the `Dataset.set_format()` function to convert the dataset to a `DataFrame` so you can easily manipulate the `created_at` and `closed_at` timestamps. For bonus points, calculate the average time it takes to close pull requests.\n\nAlthough we could proceed to further clean up the dataset by dropping or renaming some columns, it is generally a good practice to keep the dataset as “raw” as possible at this stage so that it can be easily used in multiple applications.\n\nBefore we push our dataset to the Hugging Face Hub, let’s deal with one thing that’s missing from it: the comments associated with each issue and pull request. 
We’ll add them next with — you guessed it — the GitHub REST API!\n\n## [](#augmenting-the-dataset)Augmenting the dataset\n\nAs shown in the following screenshot, the comments associated with an issue or pull request provide a rich source of information, especially if we’re interested in building a search engine to answer user queries about the library.\n\n![Comments associated with an issue about 🤗 Datasets.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter5/datasets-issues-comment.png)\n\nThe GitHub REST API provides a [`Comments` endpoint](https://docs.github.com/en/rest/reference/issues#list-issue-comments) that returns all the comments associated with an issue number. Let’s test the endpoint to see what it returns:\n\n```\nissue_number = 2792\nurl = f\"https://api.github.com/repos/huggingface/datasets/issues/{issue_number}/comments\"\nresponse = requests.get(url, headers=headers)\nresponse.json()```\n\n```\n[{'url': 'https://api.github.com/repos/huggingface/datasets/issues/comments/897594128',\n 'html_url': 'https://github.com/huggingface/datasets/pull/2792#issuecomment-897594128',\n 'issue_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792',\n 'id': 897594128,\n 'node_id': 'IC_kwDODunzps41gDMQ',\n 'user': {'login': 'bhavitvyamalik',\n 'id': 19718818,\n 'node_id': 'MDQ6VXNlcjE5NzE4ODE4',\n 'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4',\n 'gravatar_id': '',\n 'url': 'https://api.github.com/users/bhavitvyamalik',\n 'html_url': 'https://github.com/bhavitvyamalik',\n 'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers',\n 'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}',\n 'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}',\n 'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}',\n 'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions',\n 'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs',\n 'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos',\n 'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}',\n 'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events',\n 'type': 'User',\n 'site_admin': False},\n 'created_at': '2021-08-12T12:21:52Z',\n 'updated_at': '2021-08-12T12:31:17Z',\n 'author_association': 'CONTRIBUTOR',\n 'body': \"@albertvillanova my tests are failing here:\\r\\n```\\r\\ndataset_name = 'gooaq'\\r\\n\\r\\n def test_load_dataset(self, dataset_name):\\r\\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\\r\\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\\r\\n\\r\\ntests/test_dataset_common.py:234: \\r\\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \\r\\ntests/test_dataset_common.py:187: in check_load_dataset\\r\\n self.parent.assertTrue(len(dataset[split]) > 0)\\r\\nE AssertionError: False is not true\\r\\n```\\r\\nWhen I try loading dataset on local machine it works fine. 
Any suggestions on how can I avoid this error?\",\n 'performed_via_github_app': None}]```\n\nWe can see that the comment is stored in the `body` field, so let’s write a simple function that returns all the comments associated with an issue by picking out the `body` contents for each element in `response.json()`:\n\n```\ndef get_comments(issue_number):\n url = f\"https://api.github.com/repos/huggingface/datasets/issues/{issue_number}/comments\"\n response = requests.get(url, headers=headers)\n return [r[\"body\"] for r in response.json()]\n\n\n\nget_comments(2792)```\n\n```\n[\"@albertvillanova my tests are failing here:\\r\\n```\\r\\ndataset_name = 'gooaq'\\r\\n\\r\\n def test_load_dataset(self, dataset_name):\\r\\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\\r\\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\\r\\n\\r\\ntests/test_dataset_common.py:234: \\r\\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \\r\\ntests/test_dataset_common.py:187: in check_load_dataset\\r\\n self.parent.assertTrue(len(dataset[split]) > 0)\\r\\nE AssertionError: False is not true\\r\\n```\\r\\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?\"]```\n\nThis looks good, so let’s use `Dataset.map()` to add a new `comments` column to each issue in our dataset:\n\n```\nissues_with_comments_dataset = issues_dataset.map(\n lambda x: {\"comments\": get_comments(x[\"number\"])}\n)```\n\nThe final step is to push our dataset to the Hub. Let’s take a look at how we can do that.\n\n## [](#uploading-the-dataset-to-the-hugging-face-hub)Uploading the dataset to the Hugging Face Hub\n\nNow that we have our augmented dataset, it’s time to push it to the Hub so we can share it with the community! Uploading a dataset is very simple: just like models and tokenizers from 🤗 Transformers, we can use a `push_to_hub()` method to push a dataset. To do that we need an authentication token, which can be obtained by first logging into the Hugging Face Hub with the `notebook_login()` function:\n\n```\nfrom huggingface_hub import notebook_login\n\nnotebook_login()```\n\nThis will create a widget where you can enter your username and password, and an API token will be saved in _~/.huggingface/token_. If you’re running the code in a terminal, you can log in via the CLI instead:\n\nOnce we’ve done this, we can upload our dataset by running:\n\n```\nissues_with_comments_dataset.push_to_hub(\"github-issues\")```\n\nFrom here, anyone can download the dataset by simply providing `load_dataset()` with the repository ID as the `path` argument:\n\n```\nremote_dataset = load_dataset(\"lewtun/github-issues\", split=\"train\")\nremote_dataset```\n\n```\nDataset({\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'performed_via_github_app', 'is_pull_request'],\n num_rows: 2855\n})```\n\nCool, we’ve pushed our dataset to the Hub and it’s available for others to use! 
There’s just one important thing left to do: adding a _dataset card_ that explains how the corpus was created and provides other useful information for the community.\n\n💡 You can also upload a dataset to the Hugging Face Hub directly from the terminal by using `huggingface-cli` and a bit of Git magic. See the [🤗 Datasets guide](https://huggingface.co/docs/datasets/share.html#add-a-community-dataset) for details on how to do this.\n\n## [](#creating-a-dataset-card)Creating a dataset card\n\nWell-documented datasets are more likely to be useful to others (including your future self!), as they provide the context to enable users to decide whether the dataset is relevant to their task and to evaluate any potential biases in or risks associated with using the dataset.\n\nOn the Hugging Face Hub, this information is stored in each dataset repository’s _README.md_ file. There are two main steps you should take before creating this file:\n\n1. Use the [`datasets-tagging` application](https://huggingface.co/datasets/tagging/) to create metadata tags in YAML format. These tags are used for a variety of search features on the Hugging Face Hub and ensure your dataset can be easily found by members of the community. Since we have created a custom dataset here, you’ll need to clone the `datasets-tagging` repository and run the application locally. Here’s what the interface looks like:\n\n![The `datasets-tagging` interface.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter5/datasets-tagger.png)\n\n2. Read the [🤗 Datasets guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md) on creating informative dataset cards and use it as a template.\n\nYou can create the _README.md_ file directly on the Hub, and you can find a template dataset card in the `lewtun/github-issues` dataset repository. A screenshot of the filled-out dataset card is shown below.\n\n![A dataset card.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter5/dataset-card.png)\n\n✏️ **Try it out!** Use the `dataset-tagging` application and [🤗 Datasets guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md) to complete the _README.md_ file for your GitHub issues dataset.\n\nThat’s it! We’ve seen in this section that creating a good dataset can be quite involved, but fortunately uploading it and sharing it with the community is not. In the next section we’ll use our new dataset to create a semantic search engine with 🤗 Datasets that can match questions to the most relevant issues and comments.\n\n✏️ **Try it out!** Go through the steps we took in this section to create a dataset of GitHub issues for your favorite open source library (pick something other than 🤗 Datasets, of course!). For bonus points, fine-tune a multilabel classifier to predict the tags present in the `labels` field.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tCreating your own dataset - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
Creating your own dataset


Sometimes the dataset that you need to build an NLP application doesn’t exist, so you’ll need to create it yourself. In this section we’ll show you how to create a corpus of GitHub issues, which are commonly used to track bugs or features in GitHub repositories. This corpus could be used for various purposes, including:

  • Exploring how long it takes to close open issues or pull requests
  • Training a multilabel classifier that can tag issues with metadata based on the issue’s description (e.g., “bug,” “enhancement,” or “question”)
  • Creating a semantic search engine to find which issues match a user’s query

Here we’ll focus on creating the corpus, and in the next section we’ll tackle the semantic search application. To keep things meta, we’ll use the GitHub issues associated with a popular open source project: 🤗 Datasets! Let’s take a look at how to get the data and explore the information contained in these issues.

Getting the data

You can find all the issues in 🤗 Datasets by navigating to the repository’s Issues tab. As shown in the following screenshot, at the time of writing there were 331 open issues and 668 closed ones.

Screenshot: the GitHub issues associated with 🤗 Datasets.

If you click on one of these issues you’ll find it contains a title, a description, and a set of labels that characterize the issue. An example is shown in the screenshot below.

Screenshot: a typical GitHub issue in the 🤗 Datasets repository.

To download all the repository’s issues, we’ll use the GitHub REST API to poll the Issues endpoint. This endpoint returns a list of JSON objects, with each object containing a large number of fields that include the title and description as well as metadata about the status of the issue and so on.

A convenient way to download the issues is via the requests library, which is the standard way of making HTTP requests in Python. You can install the library by running:

!pip install requests

Once the library is installed, you can make GET requests to the Issues endpoint by invoking the requests.get() function. For example, you can run the following command to retrieve the first issue on the first page:

import requests\n\nurl = \"https://api.github.com/repos/huggingface/datasets/issues?page=1&per_page=1\"\nresponse = requests.get(url)

The response object contains a lot of useful information about the request, including the HTTP status code:

response.status_code
200

where a 200 status means the request was successful (you can find a list of possible HTTP status codes here). What we are really interested in, though, is the payload, which can be accessed in various formats like bytes, strings, or JSON. Since we know our issues are in JSON format, let’s inspect the payload as follows:

response.json()
[{'url': 'https://api.github.com/repos/huggingface/datasets/issues/2792',\n  'repository_url': 'https://api.github.com/repos/huggingface/datasets',\n  'labels_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792/labels{/name}',\n  'comments_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792/comments',\n  'events_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792/events',\n  'html_url': 'https://github.com/huggingface/datasets/pull/2792',\n  'id': 968650274,\n  'node_id': 'MDExOlB1bGxSZXF1ZXN0NzEwNzUyMjc0',\n  'number': 2792,\n  'title': 'Update GooAQ',\n  'user': {'login': 'bhavitvyamalik',\n   'id': 19718818,\n   'node_id': 'MDQ6VXNlcjE5NzE4ODE4',\n   'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4',\n   'gravatar_id': '',\n   'url': 'https://api.github.com/users/bhavitvyamalik',\n   'html_url': 'https://github.com/bhavitvyamalik',\n   'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers',\n   'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}',\n   'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}',\n   'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}',\n   'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions',\n   'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs',\n   'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos',\n   'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}',\n   'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events',\n   'type': 'User',\n   'site_admin': False},\n  'labels': [],\n  'state': 'open',\n  'locked': False,\n  'assignee': None,\n  'assignees': [],\n  'milestone': None,\n  'comments': 1,\n  'created_at': '2021-08-12T11:40:18Z',\n  'updated_at': '2021-08-12T12:31:17Z',\n  'closed_at': None,\n  'author_association': 'CONTRIBUTOR',\n  'active_lock_reason': None,\n  'pull_request': {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/2792',\n   'html_url': 'https://github.com/huggingface/datasets/pull/2792',\n   'diff_url': 'https://github.com/huggingface/datasets/pull/2792.diff',\n   'patch_url': 'https://github.com/huggingface/datasets/pull/2792.patch'},\n  'body': '[GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well.',\n  'performed_via_github_app': None}]

Whoa, that’s a lot of information! We can see useful fields like title, body, and number that describe the issue, as well as information about the GitHub user who opened the issue.

✏️ Try it out! Click on a few of the URLs in the JSON payload above to get a feel for what type of information each GitHub issue is linked to.

As described in the GitHub documentation, unauthenticated requests are limited to 60 requests per hour. Although you can increase the per_page query parameter to reduce the number of requests you make, you will still hit the rate limit on any repository that has more than a few thousand issues. So instead, you should follow GitHub’s instructions on creating a personal access token so that you can boost the rate limit to 5,000 requests per hour. Once you have your token, you can include it as part of the request header:

GITHUB_TOKEN = "xxx"  # Copy your GitHub token here
headers = {"Authorization": f"token {GITHUB_TOKEN}"}

⚠️ Do not share a notebook with your GITHUB_TOKEN pasted in it. We recommend you delete the last cell once you have executed it to avoid leaking this information accidentally. Even better, store the token in a .env file and use the python-dotenv library to load it automatically for you as an environment variable.
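As a sketch of that safer approach, assuming a .env file containing a single line like GITHUB_TOKEN=... :

import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file and puts its variables into os.environ
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
headers = {"Authorization": f"token {GITHUB_TOKEN}"}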

Now that we have our access token, let’s create a function that can download all the issues from a GitHub repository:

import time
import math
from pathlib import Path
import pandas as pd
from tqdm.notebook import tqdm


def fetch_issues(
    owner="huggingface",
    repo="datasets",
    num_issues=10_000,
    rate_limit=5_000,
    issues_path=Path("."),
):
    if not issues_path.is_dir():
        issues_path.mkdir(exist_ok=True)

    batch = []
    all_issues = []
    per_page = 100  # Number of issues to return per page
    num_pages = math.ceil(num_issues / per_page)
    base_url = "https://api.github.com/repos"

    for page in tqdm(range(num_pages)):
        # Query with state=all to get both open and closed issues
        query = f"issues?page={page}&per_page={per_page}&state=all"
        issues = requests.get(f"{base_url}/{owner}/{repo}/{query}", headers=headers)
        batch.extend(issues.json())

        if len(batch) > rate_limit and len(all_issues) < num_issues:
            all_issues.extend(batch)
            batch = []  # Flush batch for next time period
            print(f"Reached GitHub rate limit. Sleeping for one hour ...")
            time.sleep(60 * 60 + 1)

    all_issues.extend(batch)
    df = pd.DataFrame.from_records(all_issues)
    df.to_json(f"{issues_path}/{repo}-issues.jsonl", orient="records", lines=True)
    print(
        f"Downloaded all the issues for {repo}! Dataset stored at {issues_path}/{repo}-issues.jsonl"
    )

Now when we call fetch_issues() it will download all the issues in batches to avoid exceeding GitHub’s limit on the number of requests per hour; the result will be stored in a repository_name-issues.jsonl file, where each line is a JSON object that represents an issue. Let’s use this function to grab all the issues from 🤗 Datasets:

# Depending on your internet connection, this can take several minutes to run...\nfetch_issues()

Once the issues are downloaded we can load them locally using our newfound skills from section 2:

issues_dataset = load_dataset(\"json\", data_files=\"datasets-issues.jsonl\", split=\"train\")\nissues_dataset
Dataset({\n    features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app'],\n    num_rows: 3019\n})

Great, we’ve created our first dataset from scratch! But why are there several thousand issues when the Issues tab of the 🤗 Datasets repository only shows around 1,000 issues in total 🤔? As described in the GitHub documentation, that’s because we’ve downloaded all the pull requests as well:

GitHub’s REST API v3 considers every pull request an issue, but not every issue is a pull request. For this reason, “Issues” endpoints may return both issues and pull requests in the response. You can identify pull requests by the pull_request key. Be aware that the id of a pull request returned from “Issues” endpoints will be an issue id.

Since the contents of issues and pull requests are quite different, let’s do some minor preprocessing to enable us to distinguish between them.

Cleaning up the data

The above snippet from GitHub’s documentation tells us that the pull_request column can be used to differentiate between issues and pull requests. Let’s look at a random sample to see what the difference is. As we did in section 3, we’ll chain Dataset.shuffle() and Dataset.select() to create a random sample and then zip the html_url and pull_request columns so we can compare the various URLs:

sample = issues_dataset.shuffle(seed=666).select(range(3))\n\n# Print out the URL and pull request entries\nfor url, pr in zip(sample[\"html_url\"], sample[\"pull_request\"]):\n    print(f\">> URL: {url}\")\n    print(f\">> Pull request: {pr}\\n\")
>> URL: https://github.com/huggingface/datasets/pull/850\n>> Pull request: {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/850', 'html_url': 'https://github.com/huggingface/datasets/pull/850', 'diff_url': 'https://github.com/huggingface/datasets/pull/850.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/850.patch'}\n\n>> URL: https://github.com/huggingface/datasets/issues/2773\n>> Pull request: None\n\n>> URL: https://github.com/huggingface/datasets/pull/783\n>> Pull request: {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/783', 'html_url': 'https://github.com/huggingface/datasets/pull/783', 'diff_url': 'https://github.com/huggingface/datasets/pull/783.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/783.patch'}

Here we can see that each pull request is associated with various URLs, while ordinary issues have a None entry. We can use this distinction to create a new is_pull_request column that checks whether the pull_request field is None or not:

issues_dataset = issues_dataset.map(\n    lambda x: {\"is_pull_request\": False if x[\"pull_request\"] is None else True}\n)

✏️ Try it out! Calculate the average time it takes to close issues in 🤗 Datasets. You may find the Dataset.filter() function useful to filter out the pull requests and open issues, and you can use the Dataset.set_format() function to convert the dataset to a DataFrame so you can easily manipulate the created_at and closed_at timestamps. For bonus points, calculate the average time it takes to close pull requests.
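One rough way to approach that exercise is sketched below, under the assumption that the created_at and closed_at fields can be parsed by pandas as timestamps:

import pandas as pd

# Keep only genuine issues that have actually been closed
closed_issues = issues_dataset.filter(
    lambda x: not x["is_pull_request"] and x["closed_at"] is not None
)
closed_issues.set_format("pandas")
df = closed_issues[:]

close_times = pd.to_datetime(df["closed_at"]) - pd.to_datetime(df["created_at"])
print(f"Average time to close an issue: {close_times.mean()}")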

Although we could proceed to further clean up the dataset by dropping or renaming some columns, it is generally a good practice to keep the dataset as “raw” as possible at this stage so that it can be easily used in multiple applications.
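If you did decide to trim the dataset, the corresponding 🤗 Datasets methods would look something like this (the column choices here are just examples):

# Drop a few metadata URLs and give the issue text a friendlier name
slim_dataset = issues_dataset.remove_columns(["labels_url", "comments_url", "events_url"])
slim_dataset = slim_dataset.rename_column("body", "description")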

Before we push our dataset to the Hugging Face Hub, let’s deal with one thing that’s missing from it: the comments associated with each issue and pull request. We’ll add them next with — you guessed it — the GitHub REST API!

Augmenting the dataset

As shown in the following screenshot, the comments associated with an issue or pull request provide a rich source of information, especially if we’re interested in building a search engine to answer user queries about the library.

Screenshot: comments associated with an issue about 🤗 Datasets.

The GitHub REST API provides a Comments endpoint that returns all the comments associated with an issue number. Let’s test the endpoint to see what it returns:

issue_number = 2792\nurl = f\"https://api.github.com/repos/huggingface/datasets/issues/{issue_number}/comments\"\nresponse = requests.get(url, headers=headers)\nresponse.json()
[{'url': 'https://api.github.com/repos/huggingface/datasets/issues/comments/897594128',\n  'html_url': 'https://github.com/huggingface/datasets/pull/2792#issuecomment-897594128',\n  'issue_url': 'https://api.github.com/repos/huggingface/datasets/issues/2792',\n  'id': 897594128,\n  'node_id': 'IC_kwDODunzps41gDMQ',\n  'user': {'login': 'bhavitvyamalik',\n   'id': 19718818,\n   'node_id': 'MDQ6VXNlcjE5NzE4ODE4',\n   'avatar_url': 'https://avatars.githubusercontent.com/u/19718818?v=4',\n   'gravatar_id': '',\n   'url': 'https://api.github.com/users/bhavitvyamalik',\n   'html_url': 'https://github.com/bhavitvyamalik',\n   'followers_url': 'https://api.github.com/users/bhavitvyamalik/followers',\n   'following_url': 'https://api.github.com/users/bhavitvyamalik/following{/other_user}',\n   'gists_url': 'https://api.github.com/users/bhavitvyamalik/gists{/gist_id}',\n   'starred_url': 'https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}',\n   'subscriptions_url': 'https://api.github.com/users/bhavitvyamalik/subscriptions',\n   'organizations_url': 'https://api.github.com/users/bhavitvyamalik/orgs',\n   'repos_url': 'https://api.github.com/users/bhavitvyamalik/repos',\n   'events_url': 'https://api.github.com/users/bhavitvyamalik/events{/privacy}',\n   'received_events_url': 'https://api.github.com/users/bhavitvyamalik/received_events',\n   'type': 'User',\n   'site_admin': False},\n  'created_at': '2021-08-12T12:21:52Z',\n  'updated_at': '2021-08-12T12:31:17Z',\n  'author_association': 'CONTRIBUTOR',\n  'body': \"@albertvillanova my tests are failing here:\\r\\n```\\r\\ndataset_name = 'gooaq'\\r\\n\\r\\n    def test_load_dataset(self, dataset_name):\\r\\n        configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\\r\\n>       self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\\r\\n\\r\\ntests/test_dataset_common.py:234: \\r\\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \\r\\ntests/test_dataset_common.py:187: in check_load_dataset\\r\\n    self.parent.assertTrue(len(dataset[split]) > 0)\\r\\nE   AssertionError: False is not true\\r\\n```\\r\\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?\",\n  'performed_via_github_app': None}]

We can see that the comment is stored in the body field, so let’s write a simple function that returns all the comments associated with an issue by picking out the body contents for each element in response.json():

def get_comments(issue_number):\n    url = f\"https://api.github.com/repos/huggingface/datasets/issues/{issue_number}/comments\"\n    response = requests.get(url, headers=headers)\n    return [r[\"body\"] for r in response.json()]\n\n\n# Test our function works as expected\nget_comments(2792)
[\"@albertvillanova my tests are failing here:\\r\\n```\\r\\ndataset_name = 'gooaq'\\r\\n\\r\\n    def test_load_dataset(self, dataset_name):\\r\\n        configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\\r\\n>       self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\\r\\n\\r\\ntests/test_dataset_common.py:234: \\r\\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \\r\\ntests/test_dataset_common.py:187: in check_load_dataset\\r\\n    self.parent.assertTrue(len(dataset[split]) > 0)\\r\\nE   AssertionError: False is not true\\r\\n```\\r\\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?\"]

This looks good, so let’s use Dataset.map() to add a new comments column to each issue in our dataset:

# Depending on your internet connection, this can take a few minutes...\nissues_with_comments_dataset = issues_dataset.map(\n    lambda x: {\"comments\": get_comments(x[\"number\"])}\n)

The final step is to push our dataset to the Hub. Let’s take a look at how we can do that.

Uploading the dataset to the Hugging Face Hub

Now that we have our augmented dataset, it’s time to push it to the Hub so we can share it with the community! Uploading a dataset is very simple: just like models and tokenizers from 🤗 Transformers, we can use a push_to_hub() method to push a dataset. To do that we need an authentication token, which can be obtained by first logging into the Hugging Face Hub with the notebook_login() function:

from huggingface_hub import notebook_login\n\nnotebook_login()

This will create a widget where you can enter your username and password, and an API token will be saved in ~/.huggingface/token. If you’re running the code in a terminal, you can log in via the CLI instead:

huggingface-cli login

Once we’ve done this, we can upload our dataset by running:

issues_with_comments_dataset.push_to_hub(\"github-issues\")

From here, anyone can download the dataset by simply providing load_dataset() with the repository ID as the path argument:

remote_dataset = load_dataset(\"lewtun/github-issues\", split=\"train\")\nremote_dataset
Dataset({\n    features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'performed_via_github_app', 'is_pull_request'],\n    num_rows: 2855\n})

Cool, we’ve pushed our dataset to the Hub and it’s available for others to use! There’s just one important thing left to do: adding a dataset card that explains how the corpus was created and provides other useful information for the community.

💡 You can also upload a dataset to the Hugging Face Hub directly from the terminal by using huggingface-cli and a bit of Git magic. See the 🤗 Datasets guide for details on how to do this.

Creating a dataset card

Well-documented datasets are more likely to be useful to others (including your future self!), as they provide the context to enable users to decide whether the dataset is relevant to their task and to evaluate any potential biases in or risks associated with using the dataset.

On the Hugging Face Hub, this information is stored in each dataset repository’s README.md file. There are two main steps you should take before creating this file:

  1. Use the datasets-tagging application to create metadata tags in YAML format. These tags are used for a variety of search features on the Hugging Face Hub and ensure your dataset can be easily found by members of the community. Since we have created a custom dataset here, you’ll need to clone the datasets-tagging repository and run the application locally. Here’s what the interface looks like:
\"The
  2. Read the 🤗 Datasets guide on creating informative dataset cards and use it as a template.

You can create the README.md file directly on the Hub, and you can find a template dataset card in the lewtun/github-issues dataset repository. A screenshot of the filled-out dataset card is shown below.

\"A

✏️ Try it out! Use the datasets-tagging application and the 🤗 Datasets guide to complete the README.md file for your GitHub issues dataset.

That’s it! We’ve seen in this section that creating a good dataset can be quite involved, but fortunately uploading it and sharing it with the community is not. In the next section we’ll use our new dataset to create a semantic search engine with 🤗 Datasets that can match questions to the most relevant issues and comments.

✏️ Try it out! Go through the steps we took in this section to create a dataset of GitHub issues for your favorite open source library (pick something other than 🤗 Datasets, of course!). For bonus points, fine-tune a multilabel classifier to predict the tags present in the labels field.
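
As a starting point for the bonus exercise, the sketch below shows one way to turn the labels field (a list of label dictionaries, as returned by the GitHub API) into multi-hot vectors that a multilabel classifier can be trained on; the column name label_ids is just an illustrative choice:

# Collect every label name that appears in the dataset
label_names = sorted(
    {label["name"] for row in issues_dataset for label in row["labels"]}
)


def to_multi_hot(example):
    present = {label["name"] for label in example["labels"]}
    return {"label_ids": [1.0 if name in present else 0.0 for name in label_names]}


issues_dataset = issues_dataset.map(to_multi_hot)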

🤗 Datasets, check!

\"Ask

Well, that was quite a tour through the 🤗 Datasets library — congratulations on making it this far! With the knowledge that you’ve gained from this chapter, you should be able to:

  • Load datasets from anywhere, be it the Hugging Face Hub, your laptop, or a remote server at your company.
  • Wrangle your data using a mix of the Dataset.map() and Dataset.filter() functions.
  • Quickly switch between data formats like Pandas and NumPy using Dataset.set_format().
  • Create your very own dataset and push it to the Hugging Face Hub.
  • Embed your documents using a Transformer model and build a semantic search engine using FAISS.

In Chapter 7, we’ll put all of this to good use as we take a deep dive into the core NLP tasks that Transformer models are great for. Before jumping ahead, though, put your knowledge of 🤗 Datasets to the test with a quick quiz!

Semantic search with FAISS

\"Ask \"Open \"Open

In section 5, we created a dataset of GitHub issues and comments from the 🤗 Datasets repository. In this section we’ll use this information to build a search engine that can help us find answers to our most pressing questions about the library!

Using embeddings for semantic search

As we saw in Chapter 1, Transformer-based language models represent each token in a span of text as an embedding vector. It turns out that one can “pool” the individual embeddings to create a vector representation for whole sentences, paragraphs, or (in some cases) documents. These embeddings can then be used to find similar documents in the corpus by computing the dot-product similarity (or some other similarity metric) between each embedding and returning the documents with the greatest overlap.
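
As a toy illustration of the idea (with made-up three-dimensional vectors rather than real embeddings), ranking documents against a query with the dot product looks like this:

import numpy as np

query = np.array([0.2, 0.9, 0.1])
docs = np.array(
    [
        [0.1, 0.8, 0.3],  # points in roughly the same direction as the query
        [0.9, 0.1, 0.0],  # points in a very different direction
    ]
)
scores = docs @ query          # one dot product per document
print(scores)                  # [0.77 0.27]
print(scores.argsort()[::-1])  # document indices, best match first: [0 1]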

In this section we’ll use embeddings to develop a semantic search engine. These search engines offer several advantages over conventional approaches that are based on matching keywords in a query with the documents.

\"Semantic \"Semantic

Loading and preparing the dataset

The first thing we need to do is download our dataset of GitHub issues, so let’s use the load_dataset() function as usual:

from datasets import load_dataset\n\nissues_dataset = load_dataset(\"lewtun/github-issues\", split=\"train\")\nissues_dataset
Dataset({\n    features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'performed_via_github_app', 'is_pull_request'],\n    num_rows: 2855\n})

Here we’ve specified the default train split in load_dataset(), so it returns a Dataset instead of a DatasetDict. The first order of business is to filter out the pull requests, as these are rarely used for answering user queries and will introduce noise in our search engine. As should be familiar by now, we can use the Dataset.filter() function to exclude these rows in our dataset. While we’re at it, let’s also filter out rows with no comments, since these provide no answers to user queries:

issues_dataset = issues_dataset.filter(\n    lambda x: (x[\"is_pull_request\"] == False and len(x[\"comments\"]) > 0)\n)\nissues_dataset
Dataset({\n    features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'performed_via_github_app', 'is_pull_request'],\n    num_rows: 771\n})

We can see that there are a lot of columns in our dataset, most of which we don’t need to build our search engine. From a search perspective, the most informative columns are title, body, and comments, while html_url provides us with a link back to the source issue. Let’s use the Dataset.remove_columns() function to drop the rest:

columns = issues_dataset.column_names\ncolumns_to_keep = [\"title\", \"body\", \"html_url\", \"comments\"]\ncolumns_to_remove = set(columns_to_keep).symmetric_difference(columns)\nissues_dataset = issues_dataset.remove_columns(columns_to_remove)\nissues_dataset
Dataset({\n    features: ['html_url', 'title', 'comments', 'body'],\n    num_rows: 771\n})

To create our embeddings we’ll augment each comment with the issue’s title and body, since these fields often include useful contextual information. Because our comments column is currently a list of comments for each issue, we need to “explode” the column so that each row consists of an (html_url, title, body, comment) tuple. In Pandas we can do this with the DataFrame.explode() function, which creates a new row for each element in a list-like column, while replicating all the other column values. To see this in action, let’s first switch to the Pandas DataFrame format:

issues_dataset.set_format(\"pandas\")\ndf = issues_dataset[:]

If we inspect the first row in this DataFrame we can see there are four comments associated with this issue:

df[\"comments\"][0].tolist()
['the bug code locate in :\\r\\n    if data_args.task_name is not None:\\r\\n        # Downloading and loading a dataset from the hub.\\r\\n        datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)',\n 'Hi @jinec,\\r\\n\\r\\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com\\r\\n\\r\\nNormally, it should work if you wait a little and then retry.\\r\\n\\r\\nCould you please confirm if the problem persists?',\n 'cannot connect,even by Web browser,please check that  there is some  problems。',\n 'I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...']

When we explode df, we expect to get one row for each of these comments. Let’s check if that’s the case:

comments_df = df.explode(\"comments\", ignore_index=True)\ncomments_df.head(4)
| | html_url | title | comments | body |
| --- | --- | --- | --- | --- |
| 0 | https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | the bug code locate in :\\r\\n if data_args.task_name is not None... | Hello,\\r\\nI am trying to run run_glue.py and it gives me this error... |
| 1 | https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | Hi @jinec,\\r\\n\\r\\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com... | Hello,\\r\\nI am trying to run run_glue.py and it gives me this error... |
| 2 | https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | cannot connect,even by Web browser,please check that there is some problems。 | Hello,\\r\\nI am trying to run run_glue.py and it gives me this error... |
| 3 | https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem... | Hello,\\r\\nI am trying to run run_glue.py and it gives me this error... |

Great, we can see the rows have been replicated, with the comments column containing the individual comments! Now that we’re finished with Pandas, we can quickly switch back to a Dataset by loading the DataFrame in memory:

from datasets import Dataset\n\ncomments_dataset = Dataset.from_pandas(comments_df)\ncomments_dataset
Dataset({\n    features: ['html_url', 'title', 'comments', 'body'],\n    num_rows: 2842\n})

Okay, this has given us a few thousand comments to work with!

✏️ Try it out! See if you can use Dataset.map() to explode the comments column of issues_dataset without resorting to the use of Pandas. This is a little tricky; you might find the “Batch mapping” section of the 🤗 Datasets documentation useful for this task.
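
If you get stuck, here is one possible solution sketch (certainly not the only one): a batched Dataset.map() call may return more rows than it receives, as long as all the returned columns have the same length, so we can rebuild every column with one row per comment:

def explode_comments(batch):
    exploded = {col: [] for col in batch}
    for i, comments in enumerate(batch["comments"]):
        for comment in comments:
            for col in batch:
                exploded[col].append(comment if col == "comments" else batch[col][i])
    return exploded


comments_dataset = issues_dataset.map(
    explode_comments, batched=True, remove_columns=issues_dataset.column_names
)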

Now that we have one comment per row, let’s create a new comment_length column that contains the number of words per comment:

comments_dataset = comments_dataset.map(\n    lambda x: {\"comment_length\": len(x[\"comments\"].split())}\n)

We can use this new column to filter out short comments, which typically include things like “cc @lewtun” or “Thanks!” that are not relevant for our search engine. There’s no precise number to select for the filter, but around 15 words seems like a good start:

comments_dataset = comments_dataset.filter(lambda x: x[\"comment_length\"] > 15)\ncomments_dataset
Dataset({\n    features: ['html_url', 'title', 'comments', 'body', 'comment_length'],\n    num_rows: 2098\n})

Having cleaned up our dataset a bit, let’s concatenate the issue title, description, and comments together in a new text column. As usual, we’ll write a simple function that we can pass to Dataset.map():

def concatenate_text(examples):\n    return {\n        \"text\": examples[\"title\"]\n        + \" \\n \"\n        + examples[\"body\"]\n        + \" \\n \"\n        + examples[\"comments\"]\n    }\n\n\ncomments_dataset = comments_dataset.map(concatenate_text)

We’re finally ready to create some embeddings! Let’s take a look.

Creating text embeddings

We saw in Chapter 2 that we can obtain token embeddings by using the AutoModel class. All we need to do is pick a suitable checkpoint to load the model from. Fortunately, there’s a library called sentence-transformers that is dedicated to creating embeddings. As described in the library’s documentation, our use case is an example of asymmetric semantic search because we have a short query whose answer we’d like to find in a longer document, like an issue comment. The handy model overview table in the documentation indicates that the multi-qa-mpnet-base-dot-v1 checkpoint has the best performance for semantic search, so we’ll use that for our application. We’ll also load the tokenizer using the same checkpoint:

from transformers import AutoTokenizer, AutoModel\n\nmodel_ckpt = \"sentence-transformers/multi-qa-mpnet-base-dot-v1\"\ntokenizer = AutoTokenizer.from_pretrained(model_ckpt)\nmodel = AutoModel.from_pretrained(model_ckpt)

To speed up the embedding process, it helps to place the model and inputs on a GPU device, so let’s do that now:

import torch\n\ndevice = torch.device(\"cuda\")\nmodel.to(device)
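
If you don’t have access to a GPU, a small variation on the snippet above falls back to the CPU (embedding the corpus will just take longer):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)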

As we mentioned earlier, we’d like to represent each entry in our GitHub issues corpus as a single vector, so we need to “pool” or average our token embeddings in some way. One popular approach is to perform CLS pooling on our model’s outputs, where we simply collect the last hidden state for the special [CLS] token. The following function does the trick for us:

def cls_pooling(model_output):\n    return model_output.last_hidden_state[:, 0]
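
For comparison, another common choice (not the one used in this section) is mean pooling, which averages the token embeddings while ignoring padding tokens via the attention mask:

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state
    mask = attention_mask.unsqueeze(-1).float()  # shape (batch, seq_len, 1)
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)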

Next, we’ll create a helper function that will tokenize a list of documents, place the tensors on the GPU, feed them to the model, and finally apply CLS pooling to the outputs:

def get_embeddings(text_list):\n    encoded_input = tokenizer(\n        text_list, padding=True, truncation=True, return_tensors=\"pt\"\n    )\n    encoded_input = {k: v.to(device) for k, v in encoded_input.items()}\n    model_output = model(**encoded_input)\n    return cls_pooling(model_output)

We can test the function works by feeding it the first text entry in our corpus and inspecting the output shape:

embedding = get_embeddings(comments_dataset[\"text\"][0])\nembedding.shape
torch.Size([1, 768])

Great, we’ve converted the first entry in our corpus into a 768-dimensional vector! We can use Dataset.map() to apply our get_embeddings() function to each row in our corpus, so let’s create a new embeddings column as follows:

embeddings_dataset = comments_dataset.map(\n    lambda x: {\"embeddings\": get_embeddings(x[\"text\"]).detach().cpu().numpy()[0]}\n)

Notice that we’ve converted the embeddings to NumPy arrays — that’s because 🤗 Datasets requires this format when we try to index them with FAISS, which we’ll do next.

Using FAISS for efficient similarity search

Now that we have a dataset of embeddings, we need some way to search over them. To do this, we’ll use a special data structure in 🤗 Datasets called a FAISS index. FAISS (short for Facebook AI Similarity Search) is a library that provides efficient algorithms to quickly search and cluster embedding vectors.

The basic idea behind FAISS is to create a special data structure called an index that allows one to find which embeddings are similar to an input embedding. Creating a FAISS index in 🤗 Datasets is simple — we use the Dataset.add_faiss_index() function and specify which column of our dataset we’d like to index:

embeddings_dataset.add_faiss_index(column=\"embeddings\")

We can now perform queries on this index by doing a nearest neighbor lookup with the Dataset.get_nearest_examples() function. Let’s test this out by first embedding a question as follows:

question = \"How can I load a dataset offline?\"\nquestion_embedding = get_embeddings([question]).cpu().detach().numpy()\nquestion_embedding.shape
torch.Size([1, 768])

Just like with the documents, we now have a 768-dimensional vector representing the query, which we can compare against the whole corpus to find the most similar embeddings:

scores, samples = embeddings_dataset.get_nearest_examples(\n    \"embeddings\", question_embedding, k=5\n)

The Dataset.get_nearest_examples() function returns a tuple of scores that rank the overlap between the query and the document, and a corresponding set of samples (here, the 5 best matches). Let’s collect these in a pandas.DataFrame so we can easily sort them:

import pandas as pd\n\nsamples_df = pd.DataFrame.from_dict(samples)\nsamples_df[\"scores\"] = scores\nsamples_df.sort_values(\"scores\", ascending=False, inplace=True)

Now we can iterate over the first few rows to see how well our query matched the available comments:

for _, row in samples_df.iterrows():\n    print(f\"COMMENT: {row.comments}\")\n    print(f\"SCORE: {row.scores}\")\n    print(f\"TITLE: {row.title}\")\n    print(f\"URL: {row.html_url}\")\n    print(\"=\" * 50)\n    print()
\"\"\"\nCOMMENT: Requiring online connection is a deal breaker in some cases unfortunately so it'd be great if offline mode is added similar to how `transformers` loads models offline fine.\n\n@mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?\nSCORE: 25.505046844482422\nTITLE: Discussion using datasets in offline mode\nURL: https://github.com/huggingface/datasets/issues/824\n==================================================\n\nCOMMENT: The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\nYou can now use them offline\n\\`\\`\\`python\ndatasets = load_dataset(\"text\", data_files=data_files)\n\\`\\`\\`\n\nWe'll do a new release soon\nSCORE: 24.555509567260742\nTITLE: Discussion using datasets in offline mode\nURL: https://github.com/huggingface/datasets/issues/824\n==================================================\n\nCOMMENT: I opened a PR that allows to reload modules that have already been loaded once even if there's no internet.\n\nLet me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :)\n\nI already note the \"freeze\" modules option, to prevent local modules updates. It would be a cool feature.\n\n----------\n\n> @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?\n\nIndeed `load_dataset` allows to load remote dataset script (squad, glue, etc.) but also you own local ones.\nFor example if you have a dataset script at `./my_dataset/my_dataset.py` then you can do\n\\`\\`\\`python\nload_dataset(\"./my_dataset\")\n\\`\\`\\`\nand the dataset script will generate your dataset once and for all.\n\n----------\n\nAbout I'm looking into having `csv`, `json`, `text`, `pandas` dataset builders already included in the `datasets` package, so that they are available offline by default, as opposed to the other datasets that require the script to be downloaded.\ncf #1724\nSCORE: 24.14896583557129\nTITLE: Discussion using datasets in offline mode\nURL: https://github.com/huggingface/datasets/issues/824\n==================================================\n\nCOMMENT: > here is my way to load a dataset offline, but it **requires** an online machine\n>\n> 1. (online machine)\n>\n> ```\n>\n> import datasets\n>\n> data = datasets.load_dataset(...)\n>\n> data.save_to_disk(/YOUR/DATASET/DIR)\n>\n> ```\n>\n> 2. copy the dir from online to the offline machine\n>\n> 3. (offline machine)\n>\n> ```\n>\n> import datasets\n>\n> data = datasets.load_from_disk(/SAVED/DATA/DIR)\n>\n> ```\n>\n>\n>\n> HTH.\n\n\nSCORE: 22.893993377685547\nTITLE: Discussion using datasets in offline mode\nURL: https://github.com/huggingface/datasets/issues/824\n==================================================\n\nCOMMENT: here is my way to load a dataset offline, but it **requires** an online machine\n1. (online machine)\n\\`\\`\\`\nimport datasets\ndata = datasets.load_dataset(...)\ndata.save_to_disk(/YOUR/DATASET/DIR)\n\\`\\`\\`\n2. copy the dir from online to the offline machine\n3. 
(offline machine)\n\\`\\`\\`\nimport datasets\ndata = datasets.load_from_disk(/SAVED/DATA/DIR)\n\\`\\`\\`\n\nHTH.\nSCORE: 22.406635284423828\nTITLE: Discussion using datasets in offline mode\nURL: https://github.com/huggingface/datasets/issues/824\n==================================================\n\"\"\"

Not bad! Our second hit seems to match the query.

✏️ Try it out! Create your own query and see whether you can find an answer in the retrieved documents. You might have to increase the k parameter in Dataset.get_nearest_examples() to broaden the search.
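
For example, with a made-up query and a wider search, reusing the helpers and FAISS index defined above:

question = "How do I stream a dataset without downloading all of it?"
question_embedding = get_embeddings([question]).cpu().detach().numpy()

scores, samples = embeddings_dataset.get_nearest_examples(
    "embeddings", question_embedding, k=10
)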

End-of-chapter quiz

\"Ask

This chapter covered a lot of ground! Don’t worry if you didn’t grasp all the details; the next chapters will help you understand how things work under the hood.

Before moving on, though, let’s test what you learned in this chapter.

1. The load_dataset() function in 🤗 Datasets allows you to load a dataset from which of the following locations?

2. Suppose you load one of the GLUE tasks as follows:

from datasets import load_dataset\n\ndataset = load_dataset(\"glue\", \"mrpc\", split=\"train\")

Which of the following commands will produce a random sample of 50 elements from dataset?

3. Suppose you have a dataset about household pets called pets_dataset, which has a name column that denotes the name of each pet. Which of the following approaches would allow you to filter the dataset for all pets whose names start with the letter “L”?

4. What is memory mapping?

5. Which of the following are the main benefits of memory mapping?

6. Why does the following code fail?

from datasets import load_dataset\n\ndataset = load_dataset(\"allocine\", streaming=True, split=\"train\")\ndataset[0]

7. Which of the following are the main benefits of creating a dataset card?

8. What is semantic search?

9. For asymmetric semantic search, you usually have:

10. Can I use 🤗 Datasets to load data for use in other domains, like speech processing?

\n\t\t\t\t
🤗 Datasets, check!\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\tNext chapter
\n\t\t
\n\t
\n\t
## Introduction

\"Ask

In Chapter 3, we looked at how to fine-tune a model on a given task. When we do that, we use the same tokenizer that the model was pretrained with — but what do we do when we want to train a model from scratch? In these cases, using a tokenizer that was pretrained on a corpus from another domain or language is typically suboptimal. For example, a tokenizer that’s trained on an English corpus will perform poorly on a corpus of Japanese texts because the use of spaces and punctuation is very different in the two languages.

In this chapter, you will learn how to train a brand new tokenizer on a corpus of texts, so it can then be used to pretrain a language model. This will all be done with the help of the 🤗 Tokenizers library, which provides the “fast” tokenizers in the 🤗 Transformers library. We’ll take a close look at the features that this library provides, and explore how the fast tokenizers differ from the “slow” versions.

Topics we will cover include:

- How to train a new tokenizer similar to the one used by a given checkpoint on a new corpus of texts
- The special features of fast tokenizers
- The differences between the three main subword tokenization algorithms used in NLP today
- How to build a tokenizer from scratch with the 🤗 Tokenizers library and train it on some data

The techniques introduced in this chapter will prepare you for the section in Chapter 7 where we look at creating a language model for Python source code. Let’s start by looking at what it means to “train” a tokenizer in the first place.

## Fast tokenizers in the QA pipeline

\"Ask \"Open \"Open

We will now dive into the question-answering pipeline and see how to leverage the offsets to grab the answer to the question at hand from the context, a bit like we did for the grouped entities in the previous section. Then we will see how we can deal with very long contexts that end up being truncated. You can skip this section if you’re not interested in the question answering task.

## Using the `question-answering` pipeline

As we saw in Chapter 1, we can use the question-answering pipeline like this to get the answer to a question:

```
from transformers import pipeline

question_answerer = pipeline("question-answering")
context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch, and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer(question=question, context=context)
```

```
{'score': 0.97773,
 'start': 78,
 'end': 105,
 'answer': 'Jax, PyTorch and TensorFlow'}
```

Unlike the other pipelines, which can’t truncate and split texts that are longer than the maximum length accepted by the model (and thus may miss information at the end of a document), this pipeline can deal with very long contexts and will return the answer to the question even if it’s at the end:

long_context = \"\"\"\n🤗 Transformers: State of the Art NLP\n\n🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction,\nquestion answering, summarization, translation, text generation and more in over 100 languages.\nIts aim is to make cutting-edge NLP easier to use for everyone.\n\n🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and\nthen share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and\ncan be modified to enable quick research experiments.\n\nWhy should I use transformers?\n\n1. Easy-to-use state-of-the-art models:\n  - High performance on NLU and NLG tasks.\n  - Low barrier to entry for educators and practitioners.\n  - Few user-facing abstractions with just three classes to learn.\n  - A unified API for using all our pretrained models.\n  - Lower compute costs, smaller carbon footprint:\n\n2. Researchers can share trained models instead of always retraining.\n  - Practitioners can reduce compute time and production costs.\n  - Dozens of architectures with over 10,000 pretrained models, some in more than 100 languages.\n\n3. Choose the right framework for every part of a model's lifetime:\n  - Train state-of-the-art models in 3 lines of code.\n  - Move a single model between TF2.0/PyTorch frameworks at will.\n  - Seamlessly pick the right framework for training, evaluation and production.\n\n4. Easily customize a model or an example to your needs:\n  - We provide examples for each architecture to reproduce the results published by its original authors.\n  - Model internals are exposed as consistently as possible.\n  - Model files can be used independently of the library for quick experiments.\n\n🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration\nbetween them. It's straightforward to train your models with one before loading them for inference with the other.\n\"\"\"\nquestion_answerer(question=question, context=long_context)
{'score': 0.97149,\n 'start': 1892,\n 'end': 1919,\n 'answer': 'Jax, PyTorch and TensorFlow'}

Let’s see how it does all of this!

## Using a model for question answering

Like with any other pipeline, we start by tokenizing our input and then send it through the model. The checkpoint used by default for the question-answering pipeline is distilbert-base-cased-distilled-squad (the “squad” in the name comes from the dataset on which the model was fine-tuned; we’ll talk more about the SQuAD dataset in Chapter 7):

```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_checkpoint = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)

inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)
```

Note that we tokenize the question and the context as a pair, with the question first.
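If you want to see this pairing concretely, you can decode the encoded sequence (a quick sanity check we are adding here, not part of the original example):

```
# Added check: decode the pair to see the [CLS] question [SEP] context [SEP] layout
print(tokenizer.decode(inputs["input_ids"][0]))
```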

\"An \"An

Models for question answering work a little differently from the models we’ve seen up to now. Using the picture above as an example, the model has been trained to predict the index of the token starting the answer (here 21) and the index of the token where the answer ends (here 24). This is why those models don’t return one tensor of logits but two: one for the logits corresponding to the start token of the answer, and one for the logits corresponding to the end token of the answer. Since in this case we have only one input containing 66 tokens, we get:

```
start_logits = outputs.start_logits
end_logits = outputs.end_logits
print(start_logits.shape, end_logits.shape)
```

```
torch.Size([1, 66]) torch.Size([1, 66])
```

To convert those logits into probabilities, we will apply a softmax function — but before that, we need to make sure we mask the indices that are not part of the context. Our input is `[CLS] question [SEP] context [SEP]`, so we need to mask the tokens of the question as well as the `[SEP]` token. We’ll keep the `[CLS]` token, however, as some models use it to indicate that the answer is not in the context.

Since we will apply a softmax afterward, we just need to replace the logits we want to mask with a large negative number. Here, we use `-10000`:

```
import torch

sequence_ids = inputs.sequence_ids()
# Mask everything apart from the tokens of the context
mask = [i != 1 for i in sequence_ids]
# Unmask the [CLS] token
mask[0] = False
mask = torch.tensor(mask)[None]

start_logits[mask] = -10000
end_logits[mask] = -10000
```

Now that we have properly masked the logits corresponding to positions we don’t want to predict, we can apply the softmax:

```
start_probabilities = torch.nn.functional.softmax(start_logits, dim=-1)[0]
end_probabilities = torch.nn.functional.softmax(end_logits, dim=-1)[0]
```

At this stage, we could take the argmax of the start and end probabilities — but we might end up with a start index that is greater than the end index, so we need to take a few more precautions. We will compute the probabilities of each possible start_index and end_index where start_index <= end_index, then take the tuple (start_index, end_index) with the highest probability.

Assuming the events “The answer starts at `start_index`” and “The answer ends at `end_index`” to be independent, the probability that the answer starts at `start_index` and ends at `end_index` is:

$$\mathrm{start\_probabilities}[\mathrm{start\_index}] \times \mathrm{end\_probabilities}[\mathrm{end\_index}]$$

So, to compute all the scores, we just need to compute all the products $\mathrm{start\_probabilities}[\mathrm{start\_index}] \times \mathrm{end\_probabilities}[\mathrm{end\_index}]$ where `start_index <= end_index`.

First let’s compute all the possible products:

```
scores = start_probabilities[:, None] * end_probabilities[None, :]
```

Then we’ll mask the values where start_index > end_index by setting them to 0 (the other probabilities are all positive numbers). The torch.triu() function returns the upper triangular part of the 2D tensor passed as an argument, so it will do that masking for us:

```
scores = torch.triu(scores)
```
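If `torch.triu()` is new to you, here is a tiny standalone illustration (just a demo on a 3×3 matrix, unrelated to the QA example):

```
import torch

# torch.triu() keeps the upper triangle (including the diagonal) and zeroes the rest
demo = torch.ones(3, 3)
print(torch.triu(demo))
# tensor([[1., 1., 1.],
#         [0., 1., 1.],
#         [0., 0., 1.]])
```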

Now we just have to get the index of the maximum. Since PyTorch will return the index in the flattened tensor, we need to use the floor division // and modulus % operations to get the start_index and end_index:

```
max_index = scores.argmax().item()
start_index = max_index // scores.shape[1]
end_index = max_index % scores.shape[1]
print(scores[start_index, end_index])
```

We’re not quite done yet, but at least we already have the correct score for the answer (you can check this by comparing it to the first result in the previous section):

```
0.97773
```

✏️ Try it out! Compute the start and end indices for the five most likely answers.
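If you would like a starting point for this exercise, one possible sketch (not necessarily the solution the authors had in mind) is to take the top 5 entries of the flattened `scores` matrix with `torch.topk()`:

```
# One possible sketch for the exercise: top 5 (start, end) pairs by score
top_scores, flat_indices = torch.topk(scores.flatten(), k=5)
for score, flat_idx in zip(top_scores, flat_indices):
    start_idx = flat_idx.item() // scores.shape[1]
    end_idx = flat_idx.item() % scores.shape[1]
    print(start_idx, end_idx, score.item())
```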

We have the start_index and end_index of the answer in terms of tokens, so now we just need to convert to the character indices in the context. This is where the offsets will be super useful. We can grab them and use them like we did in the token classification task:

```
inputs_with_offsets = tokenizer(question, context, return_offsets_mapping=True)
offsets = inputs_with_offsets["offset_mapping"]

start_char, _ = offsets[start_index]
_, end_char = offsets[end_index]
answer = context[start_char:end_char]
```

Now we just have to format everything to get our result:

```
result = {
    "answer": answer,
    "start": start_char,
    "end": end_char,
    "score": scores[start_index, end_index],
}
print(result)
```

```
{'answer': 'Jax, PyTorch and TensorFlow',
 'start': 78,
 'end': 105,
 'score': 0.97773}
```

Great! That’s the same as in our first example!

✏️ Try it out! Use the best scores you computed earlier to show the five most likely answers. To check your results, go back to the first pipeline and pass in top_k=5 when calling it.
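For reference, that check would look something like this, assuming the `question_answerer` pipeline, `question`, and `context` defined at the start of this section:

```
# Compare your top 5 spans with the pipeline's own top 5 predictions
question_answerer(question=question, context=context, top_k=5)
```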

## Handling long contexts

If we try to tokenize the question and long context we used as an example previously, we’ll get a number of tokens higher than the maximum length used in the question-answering pipeline (which is 384):

```
inputs = tokenizer(question, long_context)
print(len(inputs["input_ids"]))
```

```
461
```

So, we’ll need to truncate our inputs at that maximum length. There are several ways we can do this, but we don’t want to truncate the question, only the context. Since the context is the second sentence, we’ll use the \"only_second\" truncation strategy. The problem that arises then is that the answer to the question may not be in the truncated context. Here, for instance, we picked a question where the answer is toward the end of the context, and when we truncate it that answer is not present:

```
inputs = tokenizer(question, long_context, max_length=384, truncation="only_second")
print(tokenizer.decode(inputs["input_ids"]))
```
\"\"\"\n[CLS] Which deep learning libraries back [UNK] Transformers? [SEP] [UNK] Transformers : State of the Art NLP\n\n[UNK] Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction,\nquestion answering, summarization, translation, text generation and more in over 100 languages.\nIts aim is to make cutting-edge NLP easier to use for everyone.\n\n[UNK] Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and\nthen share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and\ncan be modified to enable quick research experiments.\n\nWhy should I use transformers?\n\n1. Easy-to-use state-of-the-art models:\n  - High performance on NLU and NLG tasks.\n  - Low barrier to entry for educators and practitioners.\n  - Few user-facing abstractions with just three classes to learn.\n  - A unified API for using all our pretrained models.\n  - Lower compute costs, smaller carbon footprint:\n\n2. Researchers can share trained models instead of always retraining.\n  - Practitioners can reduce compute time and production costs.\n  - Dozens of architectures with over 10,000 pretrained models, some in more than 100 languages.\n\n3. Choose the right framework for every part of a model's lifetime:\n  - Train state-of-the-art models in 3 lines of code.\n  - Move a single model between TF2.0/PyTorch frameworks at will.\n  - Seamlessly pick the right framework for training, evaluation and production.\n\n4. Easily customize a model or an example to your needs:\n  - We provide examples for each architecture to reproduce the results published by its original authors.\n  - Model internal [SEP]\n\"\"\"

This means the model will have a hard time picking the correct answer. To fix this, the question-answering pipeline allows us to split the context into smaller chunks, specifying the maximum length. To make sure we don’t split the context at exactly the wrong place to make it possible to find the answer, it also includes some overlap between the chunks.

We can have the tokenizer (fast or slow) do this for us by adding return_overflowing_tokens=True, and we can specify the overlap we want with the stride argument. Here is an example, using a smaller sentence:

sentence = \"This sentence is not too long but we are going to split it anyway.\"\ninputs = tokenizer(\n    sentence, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2\n)\n\nfor ids in inputs[\"input_ids\"]:\n    print(tokenizer.decode(ids))
'[CLS] This sentence is not [SEP]'\n'[CLS] is not too long [SEP]'\n'[CLS] too long but we [SEP]'\n'[CLS] but we are going [SEP]'\n'[CLS] are going to split [SEP]'\n'[CLS] to split it anyway [SEP]'\n'[CLS] it anyway. [SEP]'

As we can see, the sentence has been split into chunks in such a way that each entry in inputs[\"input_ids\"] has at most 6 tokens (we would need to add padding to have the last entry be the same size as the others) and there is an overlap of 2 tokens between each of the entries.
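As an aside (our addition, not from the course), if you did want the last chunk padded to the same length as the others, you could pass `padding=True` to the same call; we use a different variable name here so we don't overwrite `inputs`:

```
# Hypothetical aside: same chunking call, with the last chunk padded to 6 tokens
padded_inputs = tokenizer(
    sentence,
    truncation=True,
    return_overflowing_tokens=True,
    max_length=6,
    stride=2,
    padding=True,
)
```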

Let’s take a closer look at the result of the tokenization:

```
print(inputs.keys())
```

```
dict_keys(['input_ids', 'attention_mask', 'overflow_to_sample_mapping'])
```

As expected, we get input IDs and an attention mask. The last key, overflow_to_sample_mapping, is a map that tells us which sentence each of the results corresponds to — here we have 7 results that all come from the (only) sentence we passed the tokenizer:

print(inputs[\"overflow_to_sample_mapping\"])
[0, 0, 0, 0, 0, 0, 0]

This is more useful when we tokenize several sentences together. For instance, this:

```
sentences = [
    "This sentence is not too long but we are going to split it anyway.",
    "This sentence is shorter but will still get split.",
]
inputs = tokenizer(
    sentences, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2
)

print(inputs["overflow_to_sample_mapping"])
```

gets us:

```
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
```

which means the first sentence is split into 7 chunks as before, and the next 4 chunks come from the second sentence.
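To make that mapping tangible, here is a small sketch (a helper of our own, not from the course) that groups the decoded chunks back by the sentence they came from:

```
# Small sketch: group chunks by the sentence they came from
from collections import defaultdict

chunks_per_sentence = defaultdict(list)
for ids, sample_idx in zip(inputs["input_ids"], inputs["overflow_to_sample_mapping"]):
    chunks_per_sentence[sample_idx].append(tokenizer.decode(ids))

print(len(chunks_per_sentence[0]), len(chunks_per_sentence[1]))  # 7 4
```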

Now let’s go back to our long context. By default the question-answering pipeline uses a maximum length of 384, as we mentioned earlier, and a stride of 128, which correspond to the way the model was fine-tuned (you can adjust those parameters by passing max_seq_len and stride arguments when calling the pipeline). We will thus use those parameters when tokenizing. We’ll also add padding (to have samples of the same length, so we can build tensors) as well as ask for the offsets:

```
inputs = tokenizer(
    question,
    long_context,
    stride=128,
    max_length=384,
    padding="longest",
    truncation="only_second",
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)
```

Those `inputs` will contain the input IDs and attention masks the model expects, as well as the offsets and the `overflow_to_sample_mapping` we just talked about. Since those two are not inputs the model accepts, we’ll pop them out of `inputs` (and we won’t store the map, since it’s not useful here) before converting the rest to tensors:

```
_ = inputs.pop("overflow_to_sample_mapping")
offsets = inputs.pop("offset_mapping")

inputs = inputs.convert_to_tensors("pt")
print(inputs["input_ids"].shape)
```

```
torch.Size([2, 384])
```

Our long context was split in two, which means that after it goes through our model, we will have two sets of start and end logits:

```
outputs = model(**inputs)

start_logits = outputs.start_logits
end_logits = outputs.end_logits
print(start_logits.shape, end_logits.shape)
```

```
torch.Size([2, 384]) torch.Size([2, 384])
```

Like before, we first mask the tokens that are not part of the context before taking the softmax. We also mask all the padding tokens (as flagged by the attention mask):

```
sequence_ids = inputs.sequence_ids()
# Mask everything apart from the tokens of the context
mask = [i != 1 for i in sequence_ids]
# Unmask the [CLS] token
mask[0] = False
# Mask all the [PAD] tokens
mask = torch.logical_or(torch.tensor(mask)[None], (inputs["attention_mask"] == 0))

start_logits[mask] = -10000
end_logits[mask] = -10000
```

Then we can use the softmax to convert our logits to probabilities:

```
start_probabilities = torch.nn.functional.softmax(start_logits, dim=-1)
end_probabilities = torch.nn.functional.softmax(end_logits, dim=-1)
```

The next step is similar to what we did for the small context, but we repeat it for each of our two chunks. We attribute a score to all possible spans of answer, then take the span with the best score:

```
candidates = []
for start_probs, end_probs in zip(start_probabilities, end_probabilities):
    scores = start_probs[:, None] * end_probs[None, :]
    idx = torch.triu(scores).argmax().item()

    start_idx = idx // scores.shape[1]
    end_idx = idx % scores.shape[1]
    score = scores[start_idx, end_idx].item()
    candidates.append((start_idx, end_idx, score))

print(candidates)
```

```
[(0, 18, 0.33867), (173, 184, 0.97149)]
```

Those two candidates correspond to the best answers the model was able to find in each chunk. The model is way more confident the right answer is in the second part (which is a good sign!). Now we just have to map those two token spans to spans of characters in the context (we only need to map the second one to have our answer, but it’s interesting to see what the model has picked in the first chunk).

✏️ Try it out! Adapt the code above to return the scores and spans for the five most likely answers (in total, not per chunk).
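As a possible starting point for this exercise (one approach among several, not the official solution), you could collect the top 5 candidates per chunk and then keep the 5 best overall:

```
# Sketch: top 5 answers over all chunks combined
all_candidates = []
for chunk_idx, (start_probs, end_probs) in enumerate(
    zip(start_probabilities, end_probabilities)
):
    chunk_scores = torch.triu(start_probs[:, None] * end_probs[None, :])
    top_scores, flat_indices = torch.topk(chunk_scores.flatten(), k=5)
    for score, flat_idx in zip(top_scores, flat_indices):
        start_idx = flat_idx.item() // chunk_scores.shape[1]
        end_idx = flat_idx.item() % chunk_scores.shape[1]
        all_candidates.append((chunk_idx, start_idx, end_idx, score.item()))

# Keep the 5 best (chunk, start, end, score) tuples overall
all_candidates = sorted(all_candidates, key=lambda c: c[-1], reverse=True)[:5]
print(all_candidates)
```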

The `offsets` we grabbed earlier is actually a list of lists of offsets, with one list per chunk of text:

```
for candidate, offset in zip(candidates, offsets):
    start_token, end_token, score = candidate
    start_char, _ = offset[start_token]
    _, end_char = offset[end_token]
    answer = long_context[start_char:end_char]
    result = {"answer": answer, "start": start_char, "end": end_char, "score": score}
    print(result)
```

```
{'answer': '\n🤗 Transformers: State of the Art NLP', 'start': 0, 'end': 37, 'score': 0.33867}
{'answer': 'Jax, PyTorch and TensorFlow', 'start': 1892, 'end': 1919, 'score': 0.97149}
```

If we ignore the first result, we get the same result as our pipeline for this long context — yay!
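To mimic what the pipeline returns by default, you could also collect these results and keep only the best-scoring one, a small variation (of our own) on the loop above:

```
# Variation on the loop above: keep only the best-scoring span across chunks
results = []
for candidate, offset in zip(candidates, offsets):
    start_token, end_token, score = candidate
    start_char, _ = offset[start_token]
    _, end_char = offset[end_token]
    results.append(
        {
            "answer": long_context[start_char:end_char],
            "start": start_char,
            "end": end_char,
            "score": score,
        }
    )

print(max(results, key=lambda r: r["score"]))
```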

✏️ Try it out! Use the best scores you computed before to show the five most likely answers (for the whole context, not each chunk). To check your results, go back to the first pipeline and pass in top_k=5 when calling it.

This concludes our deep dive into the tokenizer’s capabilities. We will put all of this in practice again in the next chapter, when we show you how to fine-tune a model on a range of common NLP tasks.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:21.876Z"} {"title":"Training a new tokenizer from an old one - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter6/2?fw=pt","markdown":"## [](#training-a-new-tokenizer-from-an-old-one)Training a new tokenizer from an old one\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-6-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb)\n\nIf a language model is not available in the language you are interested in, or if your corpus is very different from the one your language model was trained on, you will most likely want to retrain the model from scratch using a tokenizer adapted to your data. That will require training a new tokenizer on your dataset. But what exactly does that mean? When we first looked at tokenizers in [Chapter 2](/course/chapter2), we saw that most Transformer models use a _subword tokenization algorithm_. To identify which subwords are of interest and occur most frequently in the corpus at hand, the tokenizer needs to take a hard look at all the texts in the corpus — a process we call _training_. The exact rules that govern this training depend on the type of tokenizer used, and we’ll go over the three main algorithms later in this chapter.\n\n⚠️ Training a tokenizer is not the same as training a model! Model training uses stochastic gradient descent to make the loss a little bit smaller for each batch. It’s randomized by nature (meaning you have to set some seeds to get the same results when doing the same training twice). Training a tokenizer is a statistical process that tries to identify which subwords are the best to pick for a given corpus, and the exact rules used to pick them depend on the tokenization algorithm. 
It’s deterministic, meaning you always get the same results when training with the same algorithm on the same corpus.\n\n## [](#assembling-a-corpus)Assembling a corpus\n\nThere’s a very simple API in 🤗 Transformers that you can use to train a new tokenizer with the same characteristics as an existing one: `AutoTokenizer.train_new_from_iterator()`. To see this in action, let’s say we want to train GPT-2 from scratch, but in a language other than English. Our first task will be to gather lots of data in that language in a training corpus. To provide examples everyone will be able to understand, we won’t use a language like Russian or Chinese here, but rather a specialized English language: Python code.\n\nThe [🤗 Datasets](https://github.com/huggingface/datasets) library can help us assemble a corpus of Python source code. We’ll use the usual `load_dataset()` function to download and cache the [CodeSearchNet](https://huggingface.co/datasets/code_search_net) dataset. This dataset was created for the [CodeSearchNet challenge](https://wandb.ai/github/CodeSearchNet/benchmark) and contains millions of functions from open source libraries on GitHub in several programming languages. Here, we will load the Python part of this dataset:\n\n```\nfrom datasets import load_dataset\n\n\nraw_datasets = load_dataset(\"code_search_net\", \"python\")```\n\nWe can have a look at the training split to see which columns we have access to:\n\n```\nDataset({\n features: ['repository_name', 'func_path_in_repository', 'func_name', 'whole_func_string', 'language', \n 'func_code_string', 'func_code_tokens', 'func_documentation_string', 'func_documentation_tokens', 'split_name', \n 'func_code_url'\n ],\n num_rows: 412178\n})```\n\nWe can see the dataset separates docstrings from code and suggests a tokenization of both. Here. we’ll just use the `whole_func_string` column to train our tokenizer. We can look at an example of one these functions by indexing into the `train` split:\n\n```\nprint(raw_datasets[\"train\"][123456][\"whole_func_string\"])```\n\nwhich should print the following:\n\n```\ndef handle_simple_responses(\n self, timeout_ms=None, info_cb=DEFAULT_MESSAGE_CALLBACK):\n \"\"\"Accepts normal responses from the device.\n\n Args:\n timeout_ms: Timeout in milliseconds to wait for each response.\n info_cb: Optional callback for text sent from the bootloader.\n\n Returns:\n OKAY packet's message.\n \"\"\"\n return self._accept_responses('OKAY', info_cb, timeout_ms=timeout_ms)```\n\nThe first thing we need to do is transform the dataset into an _iterator_ of lists of texts — for instance, a list of list of texts. Using lists of texts will enable our tokenizer to go faster (training on batches of texts instead of processing individual texts one by one), and it should be an iterator if we want to avoid having everything in memory at once. If your corpus is huge, you will want to take advantage of the fact that 🤗 Datasets does not load everything into RAM but stores the elements of the dataset on disk.\n\nDoing the following would create a list of lists of 1,000 texts each, but would load everything in memory:\n\nUsing a Python generator, we can avoid Python loading anything into memory until it’s actually necessary. 
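For reference, the eager version that the text alludes to would look something like this (reconstructed from the generator shown next; it builds the whole list of lists in memory at once):

```
# Eager version (for reference only): loads everything into memory
training_corpus = [
    raw_datasets["train"][i : i + 1000]["whole_func_string"]
    for i in range(0, len(raw_datasets["train"]), 1000)
]
```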
## Training a new tokenizer from an old one
If a language model is not available in the language you are interested in, or if your corpus is very different from the one your language model was trained on, you will most likely want to retrain the model from scratch using a tokenizer adapted to your data. That will require training a new tokenizer on your dataset. But what exactly does that mean? When we first looked at tokenizers in Chapter 2, we saw that most Transformer models use a subword tokenization algorithm. To identify which subwords are of interest and occur most frequently in the corpus at hand, the tokenizer needs to take a hard look at all the texts in the corpus — a process we call training. The exact rules that govern this training depend on the type of tokenizer used, and we’ll go over the three main algorithms later in this chapter.

⚠️ Training a tokenizer is not the same as training a model! Model training uses stochastic gradient descent to make the loss a little bit smaller for each batch. It’s randomized by nature (meaning you have to set some seeds to get the same results when doing the same training twice). Training a tokenizer is a statistical process that tries to identify which subwords are the best to pick for a given corpus, and the exact rules used to pick them depend on the tokenization algorithm. It’s deterministic, meaning you always get the same results when training with the same algorithm on the same corpus.

## Assembling a corpus

There’s a very simple API in 🤗 Transformers that you can use to train a new tokenizer with the same characteristics as an existing one: AutoTokenizer.train_new_from_iterator(). To see this in action, let’s say we want to train GPT-2 from scratch, but in a language other than English. Our first task will be to gather lots of data in that language in a training corpus. To provide examples everyone will be able to understand, we won’t use a language like Russian or Chinese here, but rather a specialized English language: Python code.

The 🤗 Datasets library can help us assemble a corpus of Python source code. We’ll use the usual load_dataset() function to download and cache the CodeSearchNet dataset. This dataset was created for the CodeSearchNet challenge and contains millions of functions from open source libraries on GitHub in several programming languages. Here, we will load the Python part of this dataset:

from datasets import load_dataset\n\n# This can take a few minutes to load, so grab a coffee or tea while you wait!\nraw_datasets = load_dataset(\"code_search_net\", \"python\")

We can have a look at the training split to see which columns we have access to:

raw_datasets[\"train\"]
Dataset({\n    features: ['repository_name', 'func_path_in_repository', 'func_name', 'whole_func_string', 'language', \n      'func_code_string', 'func_code_tokens', 'func_documentation_string', 'func_documentation_tokens', 'split_name', \n      'func_code_url'\n    ],\n    num_rows: 412178\n})

We can see the dataset separates docstrings from code and suggests a tokenization of both. Here, we’ll just use the whole_func_string column to train our tokenizer. We can look at an example of one of these functions by indexing into the train split:

print(raw_datasets[\"train\"][123456][\"whole_func_string\"])

which should print the following:

def handle_simple_responses(\n      self, timeout_ms=None, info_cb=DEFAULT_MESSAGE_CALLBACK):\n    \"\"\"Accepts normal responses from the device.\n\n    Args:\n      timeout_ms: Timeout in milliseconds to wait for each response.\n      info_cb: Optional callback for text sent from the bootloader.\n\n    Returns:\n      OKAY packet's message.\n    \"\"\"\n    return self._accept_responses('OKAY', info_cb, timeout_ms=timeout_ms)

The first thing we need to do is transform the dataset into an iterator of lists of texts — for instance, a list of lists of texts. Using lists of texts will enable our tokenizer to go faster (training on batches of texts instead of processing individual texts one by one), and it should be an iterator if we want to avoid having everything in memory at once. If your corpus is huge, you will want to take advantage of the fact that 🤗 Datasets does not load everything into RAM but stores the elements of the dataset on disk.

Doing the following would create a list of lists of 1,000 texts each, but would load everything in memory:

# Don't uncomment the following line unless your dataset is small!\n# training_corpus = [raw_datasets[\"train\"][i: i + 1000][\"whole_func_string\"] for i in range(0, len(raw_datasets[\"train\"]), 1000)]

Using a Python generator, we can avoid Python loading anything into memory until it’s actually necessary. To create such a generator, you just need to replace the brackets with parentheses:

training_corpus = (\n    raw_datasets[\"train\"][i : i + 1000][\"whole_func_string\"]\n    for i in range(0, len(raw_datasets[\"train\"]), 1000)\n)

This line of code doesn’t fetch any elements of the dataset; it just creates an object you can use in a Python for loop. The texts will only be loaded when you need them (that is, when you’re at the step of the for loop that requires them), and only 1,000 texts at a time will be loaded. This way you won’t exhaust all your memory even if you are processing a huge dataset.
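If you want to convince yourself that nothing has been loaded up front, you can pull a single batch out of the generator — a quick sanity check, assuming the dataset and generator were created as above (note that this consumes that first batch, which leads directly into the caveat below):

```python
# Pull the first batch of 1,000 texts out of the generator
first_batch = next(iter(training_corpus))

print(type(first_batch), len(first_batch))  # <class 'list'> 1000
print(first_batch[0][:75])                  # start of the first function in the batch
```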

The problem with a generator object is that it can only be used once. So, instead of this giving us the list of the first 10 digits twice:

gen = (i for i in range(10))\nprint(list(gen))\nprint(list(gen))

we get them once and then an empty list:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n[]

That’s why we define a function that returns a generator instead:

def get_training_corpus():\n    return (\n        raw_datasets[\"train\"][i : i + 1000][\"whole_func_string\"]\n        for i in range(0, len(raw_datasets[\"train\"]), 1000)\n    )\n\n\ntraining_corpus = get_training_corpus()

You can also define your generator inside a for loop by using the yield statement:

def get_training_corpus():\n    dataset = raw_datasets[\"train\"]\n    for start_idx in range(0, len(dataset), 1000):\n        samples = dataset[start_idx : start_idx + 1000]\n        yield samples[\"whole_func_string\"]

which will produce the exact same generator as before, but allows you to use more complex logic than you can in a list comprehension.
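For example, if you wanted to skip very short functions while building the batches, you could add a filter inside the loop. Here is a hypothetical variant (the 50-character threshold is made up purely for illustration):

```python
def get_training_corpus():
    dataset = raw_datasets["train"]
    for start_idx in range(0, len(dataset), 1000):
        samples = dataset[start_idx : start_idx + 1000]
        # Only keep functions longer than 50 characters (arbitrary threshold)
        yield [func for func in samples["whole_func_string"] if len(func) > 50]
```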

## Training a new tokenizer

Now that we have our corpus in the form of an iterator of batches of texts, we are ready to train a new tokenizer. To do this, we first need to load the tokenizer we want to pair with our model (here, GPT-2):

from transformers import AutoTokenizer\n\nold_tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")

Even though we are going to train a new tokenizer, it’s a good idea to do this to avoid starting entirely from scratch. This way, we won’t have to specify anything about the tokenization algorithm or the special tokens we want to use; our new tokenizer will be exactly the same as GPT-2, and the only thing that will change is the vocabulary, which will be determined by the training on our corpus.
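If you’re curious about what exactly carries over, you can inspect the original tokenizer before training; the vocabulary size and special tokens shown in the comments are what we’d expect for GPT-2, though it’s worth checking on your own version:

```python
print(len(old_tokenizer))                # 50257, the original GPT-2 vocabulary size
print(old_tokenizer.special_tokens_map)  # {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>'}
```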

First let’s have a look at how this tokenizer would treat an example function:

example = '''def add_numbers(a, b):\n    \"\"\"Add the two numbers `a` and `b`.\"\"\"\n    return a + b'''\n\ntokens = old_tokenizer.tokenize(example)\ntokens
['def', 'Ġadd', '_', 'n', 'umbers', '(', 'a', ',', 'Ġb', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ\"\"\"', 'Add', 'Ġthe', 'Ġtwo',\n 'Ġnumbers', 'Ġ`', 'a', '`', 'Ġand', 'Ġ`', 'b', '`', '.\"', '\"\"', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġreturn', 'Ġa', 'Ġ+', 'Ġb']

This tokenizer has a few special symbols, like Ġ and Ċ, which denote spaces and newlines, respectively. As we can see, this is not too efficient: the tokenizer returns individual tokens for each space, when it could group together indentation levels (since having sets of four or eight spaces is going to be very common in code). It also split the function name a bit weirdly, not being used to seeing words with the _ character.

Let’s train a new tokenizer and see if it solves those issues. For this, we’ll use the method train_new_from_iterator():

tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 52000)

This command might take a bit of time if your corpus is very large, but for this dataset of 1.6 GB of texts it’s blazing fast (1 minute 16 seconds on an AMD Ryzen 9 3900X CPU with 12 cores).

Note that AutoTokenizer.train_new_from_iterator() only works if the tokenizer you are using is a “fast” tokenizer. As you’ll see in the next section, the 🤗 Transformers library contains two types of tokenizers: some are written purely in Python and others (the fast ones) are backed by the 🤗 Tokenizers library, which is written in the Rust programming language. Python is the language most often used for data science and deep learning applications, but when anything needs to be parallelized to be fast, it has to be written in another language. For instance, the matrix multiplications that are at the core of the model computation are written in CUDA, an optimized C library for GPUs.

Training a brand new tokenizer in pure Python would be excruciatingly slow, which is why we developed the 🤗 Tokenizers library. Note that just as you didn’t have to learn the CUDA language to be able to execute your model on a batch of inputs on a GPU, you won’t need to learn Rust to use a fast tokenizer. The 🤗 Tokenizers library provides Python bindings for many methods that internally call some piece of code in Rust; for example, to parallelize the training of your new tokenizer or, as we saw in Chapter 3, the tokenization of a batch of inputs.

Most of the Transformer models have a fast tokenizer available (there are some exceptions that you can check here), and the AutoTokenizer API always selects the fast tokenizer for you if it’s available. In the next section we’ll take a look at some of the other special features fast tokenizers have, which will be really useful for tasks like token classification and question answering. Before diving into that, however, let’s try our brand new tokenizer on the previous example:

tokens = tokenizer.tokenize(example)\ntokens
['def', 'Ġadd', '_', 'numbers', '(', 'a', ',', 'Ġb', '):', 'ĊĠĠĠ', 'Ġ\"\"\"', 'Add', 'Ġthe', 'Ġtwo', 'Ġnumbers', 'Ġ`',\n 'a', '`', 'Ġand', 'Ġ`', 'b', '`.\"\"\"', 'ĊĠĠĠ', 'Ġreturn', 'Ġa', 'Ġ+', 'Ġb']

Here we again see the special symbols Ġ and Ċ that denote spaces and newlines, but we can also see that our tokenizer learned some tokens that are highly specific to a corpus of Python functions: for example, there is a ĊĠĠĠ token that represents an indentation, and a Ġ\"\"\" token that represents the three quotes that start a docstring. The tokenizer also correctly split the function name on _. This is quite a compact representation; comparatively, using the plain English tokenizer on the same example will give us a longer sentence:

print(len(tokens))\nprint(len(old_tokenizer.tokenize(example)))
27\n36

Let’s look at another example:

example = \"\"\"class LinearLayer():\n    def __init__(self, input_size, output_size):\n        self.weight = torch.randn(input_size, output_size)\n        self.bias = torch.zeros(output_size)\n\n    def __call__(self, x):\n        return x @ self.weights + self.bias\n    \"\"\"\ntokenizer.tokenize(example)
['class', 'ĠLinear', 'Layer', '():', 'ĊĠĠĠ', 'Ġdef', 'Ġ__', 'init', '__(', 'self', ',', 'Ġinput', '_', 'size', ',',\n 'Ġoutput', '_', 'size', '):', 'ĊĠĠĠĠĠĠĠ', 'Ġself', '.', 'weight', 'Ġ=', 'Ġtorch', '.', 'randn', '(', 'input', '_',\n 'size', ',', 'Ġoutput', '_', 'size', ')', 'ĊĠĠĠĠĠĠĠ', 'Ġself', '.', 'bias', 'Ġ=', 'Ġtorch', '.', 'zeros', '(',\n 'output', '_', 'size', ')', 'ĊĊĠĠĠ', 'Ġdef', 'Ġ__', 'call', '__(', 'self', ',', 'Ġx', '):', 'ĊĠĠĠĠĠĠĠ',\n 'Ġreturn', 'Ġx', 'Ġ@', 'Ġself', '.', 'weights', 'Ġ+', 'Ġself', '.', 'bias', 'ĊĠĠĠĠ']

In addition to the token corresponding to an indentation, here we can also see a token for a double indentation: ĊĠĠĠĠĠĠĠ. The special Python words like class, init, call, self, and return are each tokenized as one token, and we can see that as well as splitting on _ and . the tokenizer correctly splits even camel-cased names: LinearLayer is tokenized as [\"ĠLinear\", \"Layer\"].
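One reassuring property of GPT-2-style byte-level tokenizers is that tokenization is lossless, so our retrained tokenizer can always reconstruct the original text. Here is a quick sanity check; it should print True, barring differences in decoding options across library versions:

```python
ids = tokenizer.encode(example)
decoded = tokenizer.decode(ids, clean_up_tokenization_spaces=False)
print(decoded == example)  # True
```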

## Saving the tokenizer

To make sure we can use it later, we need to save our new tokenizer. Like for models, this is done with the save_pretrained() method:

tokenizer.save_pretrained(\"code-search-net-tokenizer\")

This will create a new folder named code-search-net-tokenizer, which will contain all the files the tokenizer needs to be reloaded. If you want to share this tokenizer with your colleagues and friends, you can upload it to the Hub by logging into your account. If you’re working in a notebook, there’s a convenience function to help you with this:
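If you peek inside that folder, you should find the files defining the tokenizer (the exact file list can vary a bit across 🤗 Transformers versions), and you can reload it from the local path just as you would from the Hub:

```python
import os

print(sorted(os.listdir("code-search-net-tokenizer")))
# Typically something like:
# ['merges.txt', 'special_tokens_map.json', 'tokenizer.json', 'tokenizer_config.json', 'vocab.json']

tokenizer = AutoTokenizer.from_pretrained("code-search-net-tokenizer")
```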

from huggingface_hub import notebook_login\n\nnotebook_login()

This will display a widget where you can enter your Hugging Face login credentials. If you aren’t working in a notebook, just type the following line in your terminal:

huggingface-cli login

Once you’ve logged in, you can push your tokenizer by executing the following command:

tokenizer.push_to_hub(\"code-search-net-tokenizer\")

This will create a new repository in your namespace with the name code-search-net-tokenizer, containing the tokenizer file. You can then load the tokenizer from anywhere with the from_pretrained() method:

# Replace \"huggingface-course\" below with your actual namespace to use your own tokenizer\ntokenizer = AutoTokenizer.from_pretrained(\"huggingface-course/code-search-net-tokenizer\")

You’re now all set for training a language model from scratch and fine-tuning it on your task at hand! We’ll get to that in Chapter 7, but first, in the rest of this chapter we’ll take a closer look at fast tokenizers and explore in detail what actually happens when we call the method train_new_from_iterator().

## Fast tokenizers' special powers
In this section we will take a closer look at the capabilities of the tokenizers in 🤗 Transformers. Up to now we have only used them to tokenize inputs or decode IDs back into text, but tokenizers — especially those backed by the 🤗 Tokenizers library — can do a lot more. To illustrate these additional features, we will explore how to reproduce the results of the token-classification (that we called ner) and question-answering pipelines that we first encountered in Chapter 1.

In the following discussion, we will often make the distinction between “slow” and “fast” tokenizers. Slow tokenizers are those written in Python inside the 🤗 Transformers library, while the fast versions are the ones provided by 🤗 Tokenizers, which are written in Rust. If you remember the table from Chapter 5 that reported how long it took a fast and a slow tokenizer to tokenize the Drug Review Dataset, you should have an idea of why we call them fast and slow:

| | Fast tokenizer | Slow tokenizer |
| --- | --- | --- |
| `batched=True` | 10.8s | 4min41s |
| `batched=False` | 59.2s | 5min3s |
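As a rough idea of how numbers like these are produced, here is a sketch of timing a tokenizer over a dataset with `Dataset.map()`. The `dataset` variable and its "text" column are placeholders, not the exact setup behind the table above:

```python
import time
from transformers import AutoTokenizer

slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def time_tokenization(tokenizer, batched):
    start = time.time()
    # `dataset` is any 🤗 Datasets object with a "text" column (placeholder)
    dataset.map(lambda examples: tokenizer(examples["text"], truncation=True), batched=batched)
    return time.time() - start

for name, tok in [("fast", fast_tokenizer), ("slow", slow_tokenizer)]:
    for batched in (True, False):
        print(f"{name}, batched={batched}: {time_tokenization(tok, batched):.1f}s")
```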

⚠️ When tokenizing a single sentence, you won’t always see a difference in speed between the slow and fast versions of the same tokenizer. In fact, the fast version might actually be slower! It’s only when tokenizing lots of texts in parallel at the same time that you will be able to clearly see the difference.

## Batch encoding

The output of a tokenizer isn’t a simple Python dictionary; what we get is actually a special BatchEncoding object. It’s a subclass of a dictionary (which is why we were able to index into that result without any problem before), but with additional methods that are mostly used by fast tokenizers.

Besides their parallelization capabilities, the key functionality of fast tokenizers is that they always keep track of the original span of texts the final tokens come from — a feature we call offset mapping. This in turn unlocks features like mapping each word to the tokens it generated or mapping each character of the original text to the token it’s inside, and vice versa.

Let’s take a look at an example:

from transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\nexample = \"My name is Sylvain and I work at Hugging Face in Brooklyn.\"\nencoding = tokenizer(example)\nprint(type(encoding))

As mentioned previously, we get a BatchEncoding object in the tokenizer’s output:

<class 'transformers.tokenization_utils_base.BatchEncoding'>

Since the AutoTokenizer class picks a fast tokenizer by default, we can use the additional methods this BatchEncoding object provides. We have two ways to check if our tokenizer is a fast or a slow one. We can either check the attribute is_fast of the tokenizer:

tokenizer.is_fast
True

or check the same attribute of our encoding:

encoding.is_fast
True

Let’s see what a fast tokenizer enables us to do. First, we can access the tokens without having to convert the IDs back to tokens:

encoding.tokens()
['[CLS]', 'My', 'name', 'is', 'S', '##yl', '##va', '##in', 'and', 'I', 'work', 'at', 'Hu', '##gging', 'Face', 'in',\n 'Brooklyn', '.', '[SEP]']

In this case the token at index 5 is ##yl, which is part of the word “Sylvain” in the original sentence. We can also use the word_ids() method to get the index of the word each token comes from:

encoding.word_ids()
[None, 0, 1, 2, 3, 3, 3, 3, 4, 5, 6, 7, 8, 8, 9, 10, 11, 12, None]

We can see that the tokenizer’s special tokens [CLS] and [SEP] are mapped to None, and then each token is mapped to the word it originates from. This is especially useful to determine if a token is at the start of a word or if two tokens are in the same word. We could rely on the ## prefix for that, but it only works for BERT-like tokenizers; this method works for any type of tokenizer as long as it’s a fast one. In the next chapter, we’ll see how we can use this capability to apply the labels we have for each word properly to the tokens in tasks like named entity recognition (NER) and part-of-speech (POS) tagging. We can also use it to mask all the tokens coming from the same word in masked language modeling (a technique called whole word masking).
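For instance, here is a small sketch that uses `word_ids()` to group token indices by the word they come from — the kind of bookkeeping needed for whole word masking or for propagating word-level labels to tokens:

```python
from collections import defaultdict

tokens_per_word = defaultdict(list)
for token_idx, word_id in enumerate(encoding.word_ids()):
    if word_id is not None:  # skip the special tokens [CLS] and [SEP]
        tokens_per_word[word_id].append(token_idx)

print(tokens_per_word[3])  # [4, 5, 6, 7] -> the tokens 'S', '##yl', '##va', '##in'
```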

The notion of what a word is is complicated. For instance, does “I’ll” (a contraction of “I will”) count as one or two words? It actually depends on the tokenizer and the pre-tokenization operation it applies. Some tokenizers just split on spaces, so they will consider this as one word. Others use punctuation on top of spaces, so they will consider it two words.
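You can actually look at this pre-tokenization step directly through the Rust backend of a fast tokenizer (we’ll dig into pre-tokenizers properly later in this chapter). With the BERT tokenizer we loaded above, the output should look roughly like the comment below:

```python
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("I'll be there.")
# [('I', (0, 1)), ("'", (1, 2)), ('ll', (2, 4)), ('be', (5, 7)), ('there', (8, 13)), ('.', (13, 14))]
```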

✏️ Try it out! Create a tokenizer from the bert-base-cased and roberta-base checkpoints and tokenize ”81s” with them. What do you observe? What are the word IDs?

Similarly, there is a sentence_ids() method that we can use to map a token to the sentence it came from (though in this case, the token_type_ids returned by the tokenizer can give us the same information).

Lastly, we can map any word or token to characters in the original text, and vice versa, via the word_to_chars() or token_to_chars() and char_to_word() or char_to_token() methods. For instance, the word_ids() method told us that ##yl is part of the word at index 3, but which word is it in the sentence? We can find out like this:

start, end = encoding.word_to_chars(3)\nexample[start:end]
Sylvain
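The mapping also works in the other direction; for example, we can ask which token covers a given character of the original sentence (character 12 is the “y” in “Sylvain”):

```python
encoding.char_to_token(12)  # 5 -> the token '##yl'
```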

As we mentioned previously, this is all powered by the fact that the fast tokenizer keeps track of the span of text each token comes from in a list of offsets. To illustrate their use, next we’ll show you how to replicate the results of the token-classification pipeline manually.

✏️ Try it out! Create your own example text and see if you can understand which tokens are associated with each word ID, and also how to extract the character spans for a single word. For bonus points, try using two sentences as input and see if the sentence IDs make sense to you.

## Inside the `token-classification` pipeline

In Chapter 1 we got our first taste of applying NER — where the task is to identify which parts of the text correspond to entities like persons, locations, or organizations — with the 🤗 Transformers pipeline() function. Then, in Chapter 2, we saw how a pipeline groups together the three stages necessary to get the predictions from a raw text: tokenization, passing the inputs through the model, and post-processing. The first two steps in the token-classification pipeline are the same as in any other pipeline, but the post-processing is a little more complex — let’s see how!

### Getting the base results with the pipeline

First, let’s grab a token classification pipeline so we can get some results to compare manually. The model used by default is dbmdz/bert-large-cased-finetuned-conll03-english; it performs NER on sentences:

from transformers import pipeline\n\ntoken_classifier = pipeline(\"token-classification\")\ntoken_classifier(\"My name is Sylvain and I work at Hugging Face in Brooklyn.\")
[{'entity': 'I-PER', 'score': 0.9993828, 'index': 4, 'word': 'S', 'start': 11, 'end': 12},\n {'entity': 'I-PER', 'score': 0.99815476, 'index': 5, 'word': '##yl', 'start': 12, 'end': 14},\n {'entity': 'I-PER', 'score': 0.99590725, 'index': 6, 'word': '##va', 'start': 14, 'end': 16},\n {'entity': 'I-PER', 'score': 0.9992327, 'index': 7, 'word': '##in', 'start': 16, 'end': 18},\n {'entity': 'I-ORG', 'score': 0.97389334, 'index': 12, 'word': 'Hu', 'start': 33, 'end': 35},\n {'entity': 'I-ORG', 'score': 0.976115, 'index': 13, 'word': '##gging', 'start': 35, 'end': 40},\n {'entity': 'I-ORG', 'score': 0.98879766, 'index': 14, 'word': 'Face', 'start': 41, 'end': 45},\n {'entity': 'I-LOC', 'score': 0.99321055, 'index': 16, 'word': 'Brooklyn', 'start': 49, 'end': 57}]

The model properly identified each token generated by “Sylvain” as a person, each token generated by “Hugging Face” as an organization, and the token “Brooklyn” as a location. We can also ask the pipeline to group together the tokens that correspond to the same entity:

from transformers import pipeline\n\ntoken_classifier = pipeline(\"token-classification\", aggregation_strategy=\"simple\")\ntoken_classifier(\"My name is Sylvain and I work at Hugging Face in Brooklyn.\")
[{'entity_group': 'PER', 'score': 0.9981694, 'word': 'Sylvain', 'start': 11, 'end': 18},\n {'entity_group': 'ORG', 'score': 0.97960204, 'word': 'Hugging Face', 'start': 33, 'end': 45},\n {'entity_group': 'LOC', 'score': 0.99321055, 'word': 'Brooklyn', 'start': 49, 'end': 57}]

The aggregation_strategy picked will change the scores computed for each grouped entity. With \"simple\" the score is just the mean of the scores of each token in the given entity: for instance, the score of “Sylvain” is the mean of the scores we saw in the previous example for the tokens S, ##yl, ##va, and ##in. Other strategies available are:

  • \"first\", where the score of each entity is the score of the first token of that entity (so for “Sylvain” it would be 0.993828, the score of the token S)
  • \"max\", where the score of each entity is the maximum score of the tokens in that entity (so for “Hugging Face” it would be 0.98879766, the score of “Face”)
  • \"average\", where the score of each entity is the average of the scores of the words composing that entity (so for “Sylvain” there would be no difference from the \"simple\" strategy, but “Hugging Face” would have a score of 0.9819, the average of the scores for “Hugging”, 0.975, and “Face”, 0.98879)

Now let’s see how to obtain these results without using the pipeline() function!

### From inputs to predictions

First we need to tokenize our input and pass it through the model. This is done exactly as in Chapter 2; we instantiate the tokenizer and the model using the AutoXxx classes and then use them on our example:

from transformers import AutoTokenizer, AutoModelForTokenClassification\n\nmodel_checkpoint = \"dbmdz/bert-large-cased-finetuned-conll03-english\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\nmodel = AutoModelForTokenClassification.from_pretrained(model_checkpoint)\n\nexample = \"My name is Sylvain and I work at Hugging Face in Brooklyn.\"\ninputs = tokenizer(example, return_tensors=\"pt\")\noutputs = model(**inputs)

Since we’re using AutoModelForTokenClassification here, we get one set of logits for each token in the input sequence:

print(inputs[\"input_ids\"].shape)\nprint(outputs.logits.shape)
torch.Size([1, 19])\ntorch.Size([1, 19, 9])

We have a batch with 1 sequence of 19 tokens and the model has 9 different labels, so the output of the model has a shape of 1 x 19 x 9. Like for the text classification pipeline, we use a softmax function to convert those logits to probabilities, and we take the argmax to get predictions (note that we can take the argmax on the logits because the softmax does not change the order):

import torch\n\nprobabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)[0].tolist()\npredictions = outputs.logits.argmax(dim=-1)[0].tolist()\nprint(predictions)
[0, 0, 0, 0, 4, 4, 4, 4, 0, 0, 0, 0, 6, 6, 6, 0, 8, 0, 0]

The model.config.id2label attribute contains the mapping of indexes to labels that we can use to make sense of the predictions:

model.config.id2label
{0: 'O',\n 1: 'B-MISC',\n 2: 'I-MISC',\n 3: 'B-PER',\n 4: 'I-PER',\n 5: 'B-ORG',\n 6: 'I-ORG',\n 7: 'B-LOC',\n 8: 'I-LOC'}

As we saw earlier, there are 9 labels: O is the label for the tokens that are not in any named entity (it stands for “outside”), and we then have two labels for each type of entity (miscellaneous, person, organization, and location). The label B-XXX indicates the token is at the beginning of an entity XXX and the label I-XXX indicates the token is inside the entity XXX. For instance, in the current example we would expect our model to classify the token S as B-PER (beginning of a person entity) and the tokens ##yl, ##va and ##in as I-PER (inside a person entity).
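A quick way to see these labels on our example is to print each token next to its prediction, reusing the objects we already have in hand:

```python
for token, pred in zip(inputs.tokens(), predictions):
    print(f"{token:12} {model.config.id2label[pred]}")
```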

You might think the model was wrong in this case as it gave the label I-PER to all four of these tokens, but that’s not entirely true. There are actually two formats for those B- and I- labels: IOB1 and IOB2. The IOB2 format (in pink below) is the one we introduced, whereas in the IOB1 format (in blue), the labels beginning with B- are only ever used to separate two adjacent entities of the same type. The model we are using was fine-tuned on a dataset using the IOB1 format, which is why it assigns the label I-PER to the S token.

\"IOB1 \"IOB1

With this map, we are ready to reproduce (almost entirely) the results of the first pipeline — we can just grab the score and label of each token that was not classified as O:

results = []\ntokens = inputs.tokens()\n\nfor idx, pred in enumerate(predictions):\n    label = model.config.id2label[pred]\n    if label != \"O\":\n        results.append(\n            {\"entity\": label, \"score\": probabilities[idx][pred], \"word\": tokens[idx]}\n        )\n\nprint(results)
[{'entity': 'I-PER', 'score': 0.9993828, 'index': 4, 'word': 'S'},\n {'entity': 'I-PER', 'score': 0.99815476, 'index': 5, 'word': '##yl'},\n {'entity': 'I-PER', 'score': 0.99590725, 'index': 6, 'word': '##va'},\n {'entity': 'I-PER', 'score': 0.9992327, 'index': 7, 'word': '##in'},\n {'entity': 'I-ORG', 'score': 0.97389334, 'index': 12, 'word': 'Hu'},\n {'entity': 'I-ORG', 'score': 0.976115, 'index': 13, 'word': '##gging'},\n {'entity': 'I-ORG', 'score': 0.98879766, 'index': 14, 'word': 'Face'},\n {'entity': 'I-LOC', 'score': 0.99321055, 'index': 16, 'word': 'Brooklyn'}]

This is very similar to what we had before, with one exception: the pipeline also gave us information about the start and end of each entity in the original sentence. This is where our offset mapping will come into play. To get the offsets, we just have to set return_offsets_mapping=True when we apply the tokenizer to our inputs:

inputs_with_offsets = tokenizer(example, return_offsets_mapping=True)\ninputs_with_offsets[\"offset_mapping\"]
[(0, 0), (0, 2), (3, 7), (8, 10), (11, 12), (12, 14), (14, 16), (16, 18), (19, 22), (23, 24), (25, 29), (30, 32),\n (33, 35), (35, 40), (41, 45), (46, 48), (49, 57), (57, 58), (0, 0)]

Each tuple is the span of text corresponding to each token, where (0, 0) is reserved for the special tokens. We saw before that the token at index 5 is ##yl, which has (12, 14) as offsets here. If we grab the corresponding slice in our example:

example[12:14]

we get the proper span of text without the ##:

yl

Using this, we can now complete the previous results:

results = []\ninputs_with_offsets = tokenizer(example, return_offsets_mapping=True)\ntokens = inputs_with_offsets.tokens()\noffsets = inputs_with_offsets[\"offset_mapping\"]\n\nfor idx, pred in enumerate(predictions):\n    label = model.config.id2label[pred]\n    if label != \"O\":\n        start, end = offsets[idx]\n        results.append(\n            {\n                \"entity\": label,\n                \"score\": probabilities[idx][pred],\n                \"word\": tokens[idx],\n                \"start\": start,\n                \"end\": end,\n            }\n        )\n\nprint(results)
[{'entity': 'I-PER', 'score': 0.9993828, 'index': 4, 'word': 'S', 'start': 11, 'end': 12},\n {'entity': 'I-PER', 'score': 0.99815476, 'index': 5, 'word': '##yl', 'start': 12, 'end': 14},\n {'entity': 'I-PER', 'score': 0.99590725, 'index': 6, 'word': '##va', 'start': 14, 'end': 16},\n {'entity': 'I-PER', 'score': 0.9992327, 'index': 7, 'word': '##in', 'start': 16, 'end': 18},\n {'entity': 'I-ORG', 'score': 0.97389334, 'index': 12, 'word': 'Hu', 'start': 33, 'end': 35},\n {'entity': 'I-ORG', 'score': 0.976115, 'index': 13, 'word': '##gging', 'start': 35, 'end': 40},\n {'entity': 'I-ORG', 'score': 0.98879766, 'index': 14, 'word': 'Face', 'start': 41, 'end': 45},\n {'entity': 'I-LOC', 'score': 0.99321055, 'index': 16, 'word': 'Brooklyn', 'start': 49, 'end': 57}]

This is the same as what we got from the first pipeline!

### Grouping entities

Using the offsets to determine the start and end keys for each entity is handy, but that information isn’t strictly necessary. When we want to group the entities together, however, the offsets will save us a lot of messy code. For example, if we wanted to group together the tokens Hu, ##gging, and Face, we could make special rules that say the first two should be attached while removing the ##, and the Face should be added with a space since it does not begin with ## — but that would only work for this particular type of tokenizer. We would have to write another set of rules for a SentencePiece or a Byte-Pair-Encoding tokenizer (discussed later in this chapter).

With the offsets, all that custom code goes away: we can just take the span in the original text that begins with the first token and ends with the last token. So, in the case of the tokens Hu, ##gging, and Face, we should start at character 33 (the beginning of Hu) and end before character 45 (the end of Face):

example[33:45]
Hugging Face

To write the code that post-processes the predictions while grouping entities, we will group together entities that are consecutive and labeled with I-XXX, except for the first one, which can be labeled as B-XXX or I-XXX (so, we stop grouping an entity when we get a O, a new type of entity, or a B-XXX that tells us an entity of the same type is starting):

import numpy as np

results = []
inputs_with_offsets = tokenizer(example, return_offsets_mapping=True)
tokens = inputs_with_offsets.tokens()
offsets = inputs_with_offsets["offset_mapping"]

idx = 0
while idx < len(predictions):
    pred = predictions[idx]
    label = model.config.id2label[pred]
    if label != "O":
        # Remove the B- or I-
        label = label[2:]
        start, _ = offsets[idx]

        # Grab all the tokens labeled with I-label
        all_scores = []
        while (
            idx < len(predictions)
            and model.config.id2label[predictions[idx]] == f"I-{label}"
        ):
            all_scores.append(probabilities[idx][pred])
            _, end = offsets[idx]
            idx += 1

        # The score is the mean of all the scores of the tokens in that grouped entity
        score = np.mean(all_scores).item()
        word = example[start:end]
        results.append(
            {
                "entity_group": label,
                "score": score,
                "word": word,
                "start": start,
                "end": end,
            }
        )
    idx += 1

print(results)

And we get the same results as with our second pipeline!

[{'entity_group': 'PER', 'score': 0.9981694, 'word': 'Sylvain', 'start': 11, 'end': 18},
 {'entity_group': 'ORG', 'score': 0.97960204, 'word': 'Hugging Face', 'start': 33, 'end': 45},
 {'entity_group': 'LOC', 'score': 0.99321055, 'word': 'Brooklyn', 'start': 49, 'end': 57}]
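
As a sanity check, you can compare this output against the grouped pipeline directly. Here is a minimal sketch, assuming the dbmdz/bert-large-cased-finetuned-conll03-english checkpoint that produced the predictions above:

from transformers import pipeline

token_classifier = pipeline(
    "token-classification",
    model="dbmdz/bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="simple",
)
token_classifier("My name is Sylvain and I work at Hugging Face in Brooklyn.")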

Another example of a task where these offsets are extremely useful is question answering. Diving into that pipeline, which we’ll do in the next section, will also enable us to take a look at one last feature of the tokenizers in the 🤗 Transformers library: dealing with overflowing tokens when we truncate an input to a given length.

Normalization and pre-tokenization

\"Ask \"Open \"Open

Before we dive more deeply into the three most common subword tokenization algorithms used with Transformer models (Byte-Pair Encoding [BPE], WordPiece, and Unigram), we’ll first take a look at the preprocessing that each tokenizer applies to text. Here’s a high-level overview of the steps in the tokenization pipeline:

\"The \"The

Before splitting a text into subtokens (according to its model), the tokenizer performs two steps: normalization and pre-tokenization.

Normalization

The normalization step involves some general cleanup, such as removing needless whitespace, lowercasing, and/or removing accents. If you’re familiar with Unicode normalization (such as NFC or NFKC), this is also something the tokenizer may apply.

The 🤗 Transformers tokenizer has an attribute called backend_tokenizer that provides access to the underlying tokenizer from the 🤗 Tokenizers library:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(type(tokenizer.backend_tokenizer))
<class 'tokenizers.Tokenizer'>

The normalizer attribute of the tokenizer object has a normalize_str() method that we can use to see how the normalization is performed:

print(tokenizer.backend_tokenizer.normalizer.normalize_str("Héllò hôw are ü?"))
'hello how are u?'

In this example, since we picked the bert-base-uncased checkpoint, the normalization applied lowercasing and removed the accents.

✏️ Try it out! Load a tokenizer from the bert-base-cased checkpoint and pass the same example to it. What are the main differences you can see between the cased and uncased versions of the tokenizer?
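
If you want to assemble this kind of normalization by hand, the 🤗 Tokenizers library lets you compose normalizers yourself. Here is a small sketch of a roughly equivalent pipeline (the real BERT normalizer also handles things like control characters and Chinese characters, so this is only an approximation):

from tokenizers import normalizers

# Roughly mimic the uncased BERT cleanup: decompose, lowercase, strip accents
normalizer = normalizers.Sequence(
    [normalizers.NFD(), normalizers.Lowercase(), normalizers.StripAccents()]
)
print(normalizer.normalize_str("Héllò hôw are ü?"))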

Pre-tokenization

As we will see in the next sections, a tokenizer cannot be trained on raw text alone. Instead, we first need to split the texts into small entities, like words. That’s where the pre-tokenization step comes in. As we saw in Chapter 2, a word-based tokenizer can simply split a raw text into words on whitespace and punctuation. Those words will be the boundaries of the subtokens the tokenizer can learn during its training.

To see how a fast tokenizer performs pre-tokenization, we can use the pre_tokenize_str() method of the pre_tokenizer attribute of the tokenizer object:

tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are  you?")
[('Hello', (0, 5)), (',', (5, 6)), ('how', (7, 10)), ('are', (11, 14)), ('you', (16, 19)), ('?', (19, 20))]

Notice how the tokenizer is already keeping track of the offsets, which is how it can give us the offset mapping we used in the previous section. Here the tokenizer ignores the two spaces and replaces them with just one, but the offset jumps between are and you to account for that.
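
You can check this yourself: each offset pair indexes into the original string, so slicing the input with it gives back the corresponding word. A quick sketch with the same tokenizer:

text = "Hello, how are  you?"
for word, (start, end) in tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text):
    # The offsets point into the original text, double space and all
    print(word, "->", repr(text[start:end]))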

Since we’re using a BERT tokenizer, the pre-tokenization involves splitting on whitespace and punctuation. Other tokenizers can have different rules for this step. For example, if we use the GPT-2 tokenizer:

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are  you?")

it will split on whitespace and punctuation as well, but it will keep the spaces and replace them with a Ġ symbol, enabling it to recover the original spaces if we decode the tokens:

[('Hello', (0, 5)), (',', (5, 6)), ('Ġhow', (6, 10)), ('Ġare', (10, 14)), ('Ġ', (14, 15)), ('Ġyou', (15, 19)),
 ('?', (19, 20))]

Also note that unlike the BERT tokenizer, this tokenizer does not ignore the double space.
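
Because the spaces are kept as Ġ symbols, the original text (double space included) can be rebuilt from the pre-tokenized words alone; a small sketch:

pre_tokenized = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are  you?")
print("".join(word for word, offsets in pre_tokenized).replace("Ġ", " "))
# Hello, how are  you?  (both spaces are recovered)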

For a last example, let’s have a look at the T5 tokenizer, which is based on the SentencePiece algorithm:

tokenizer = AutoTokenizer.from_pretrained("t5-small")
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are  you?")
[('▁Hello,', (0, 6)), ('▁how', (7, 10)), ('▁are', (11, 14)), ('▁you?', (16, 20))]

Like the GPT-2 tokenizer, this one keeps spaces and replaces them with a specific token (▁), but the T5 tokenizer only splits on whitespace, not punctuation. Also note that it added a space by default at the beginning of the sentence (before Hello) and ignored the double space between are and you.

Now that we’ve seen a little of how some different tokenizers process text, we can start to explore the underlying algorithms themselves. We’ll begin with a quick look at the broadly applicable SentencePiece; then, over the next three sections, we’ll examine how the three main algorithms used for subword tokenization work.

SentencePiece

SentencePiece is a tokenization algorithm for the preprocessing of text that you can use with any of the models we will see in the next three sections. It considers the text as a sequence of Unicode characters, and replaces spaces with a special character, ▁. Used in conjunction with the Unigram algorithm (see section 7), it doesn’t even require a pre-tokenization step, which is very useful for languages where the space character is not used (like Chinese or Japanese).

The other main feature of SentencePiece is reversible tokenization: since there is no special treatment of spaces, decoding the tokens is done simply by concatenating them and replacing the ▁s with spaces — this results in the normalized text. As we saw earlier, the BERT tokenizer removes repeating spaces, so its tokenization is not reversible.
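
You can see the difference by round-tripping a sentence through the BERT tokenizer: decoding gives back the normalized text, not the original string. A minimal sketch (the exact output may vary slightly with the decoding options):

bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoding = bert_tokenizer("Héllò hôw are  ü?")
print(bert_tokenizer.decode(encoding["input_ids"], skip_special_tokens=True))
# Something like 'hello how are u?': the casing, accents, and double space are gone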

Algorithm overview

In the following sections, we’ll dive into the three main subword tokenization algorithms: BPE (used by GPT-2 and others), WordPiece (used for example by BERT), and Unigram (used by T5 and others). Before we get started, here’s a quick overview of how they each work. Don’t hesitate to come back to this table after reading each of the next sections if it doesn’t make sense to you yet.

| Model | BPE | WordPiece | Unigram |
| --- | --- | --- | --- |
| Training | Starts from a small vocabulary and learns rules to merge tokens | Starts from a small vocabulary and learns rules to merge tokens | Starts from a large vocabulary and learns rules to remove tokens |
| Training step | Merges the tokens corresponding to the most common pair | Merges the tokens corresponding to the pair with the best score based on the frequency of the pair, privileging pairs where each individual token is less frequent | Removes all the tokens in the vocabulary that will minimize the loss computed on the whole corpus |
| Learns | Merge rules and a vocabulary | Just a vocabulary | A vocabulary with a score for each token |
| Encoding | Splits a word into characters and applies the merges learned during training | Finds the longest subword starting from the beginning that is in the vocabulary, then does the same for the rest of the word | Finds the most likely split into tokens, using the scores learned during training |

Now let’s dive into BPE!

Byte-Pair Encoding tokenization

\"Ask \"Open \"Open

Byte-Pair Encoding (BPE) was initially developed as an algorithm to compress texts, and then used by OpenAI for tokenization when pretraining the GPT model. It’s used by a lot of Transformer models, including GPT, GPT-2, RoBERTa, BART, and DeBERTa.

💡 This section covers BPE in depth, going as far as showing a full implementation. You can skip to the end if you just want a general overview of the tokenization algorithm.

Training algorithm

BPE training starts by computing the unique set of words used in the corpus (after the normalization and pre-tokenization steps are completed), then building the vocabulary by taking all the symbols used to write those words. As a very simple example, let’s say our corpus uses these five words:

\"hug\", \"pug\", \"pun\", \"bun\", \"hugs\"

The base vocabulary will then be ["b", "g", "h", "n", "p", "s", "u"]. For real-world cases, that base vocabulary will contain all the ASCII characters, at the very least, and probably some Unicode characters as well. If an example you are tokenizing uses a character that is not in the training corpus, that character will be converted to the unknown token. That’s one reason why lots of NLP models are very bad at analyzing content with emojis, for instance.

The GPT-2 and RoBERTa tokenizers (which are pretty similar) have a clever way to deal with this: they don’t look at words as being written with Unicode characters, but with bytes. This way the base vocabulary has a small size (256), but every character you can think of will still be included and not end up being converted to the unknown token. This trick is called byte-level BPE.
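
The idea is easy to check in plain Python: any string, emoji included, can be written as a sequence of byte values between 0 and 255, so a 256-symbol base vocabulary covers everything:

print(list("🤗 hug".encode("utf-8")))
# [240, 159, 164, 151, 32, 104, 117, 103] -- the emoji is just four bytes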

After getting this base vocabulary, we add new tokens until the desired vocabulary size is reached by learning merges, which are rules to merge two elements of the existing vocabulary together into a new one. So, at the beginning these merges will create tokens with two characters, and then, as training progresses, longer subwords.

At any step during the tokenizer training, the BPE algorithm will search for the most frequent pair of existing tokens (by “pair,” here we mean two consecutive tokens in a word). That most frequent pair is the one that will be merged, and we rinse and repeat for the next step.

Going back to our previous example, let’s assume the words had the following frequencies:

(\"hug\", 10), (\"pug\", 5), (\"pun\", 12), (\"bun\", 4), (\"hugs\", 5)

meaning \"hug\" was present 10 times in the corpus, \"pug\" 5 times, \"pun\" 12 times, \"bun\" 4 times, and \"hugs\" 5 times. We start the training by splitting each word into characters (the ones that form our initial vocabulary) so we can see each word as a list of tokens:

(\"h\" \"u\" \"g\", 10), (\"p\" \"u\" \"g\", 5), (\"p\" \"u\" \"n\", 12), (\"b\" \"u\" \"n\", 4), (\"h\" \"u\" \"g\" \"s\", 5)

Then we look at pairs. The pair ("h", "u") is present in the words "hug" and "hugs", so 15 times total in the corpus. It’s not the most frequent pair, though: that honor belongs to ("u", "g"), which is present in "hug", "pug", and "hugs", for a grand total of 20 times in the corpus.
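
These counts are easy to verify with a few lines of Python; here is a small sketch over the toy word frequencies:

from collections import Counter

word_freqs = {"hug": 10, "pug": 5, "pun": 12, "bun": 4, "hugs": 5}
pair_freqs = Counter()
for word, freq in word_freqs.items():
    for first, second in zip(word, word[1:]):
        pair_freqs[(first, second)] += freq

print(pair_freqs[("h", "u")], pair_freqs[("u", "g")])  # 15 20
print(pair_freqs.most_common(1))  # [(('u', 'g'), 20)]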

Thus, the first merge rule learned by the tokenizer is ("u", "g") -> "ug", which means that "ug" will be added to the vocabulary, and the pair should be merged in all the words of the corpus. At the end of this stage, the vocabulary and corpus look like this:

Vocabulary: [\"b\", \"g\", \"h\", \"n\", \"p\", \"s\", \"u\", \"ug\"]\nCorpus: (\"h\" \"ug\", 10), (\"p\" \"ug\", 5), (\"p\" \"u\" \"n\", 12), (\"b\" \"u\" \"n\", 4), (\"h\" \"ug\" \"s\", 5)

Now we have some pairs that result in a token longer than two characters: the pair ("h", "ug"), for instance (present 15 times in the corpus). The most frequent pair at this stage is ("u", "n"), however, present 16 times in the corpus, so the second merge rule learned is ("u", "n") -> "un". Adding that to the vocabulary and merging all existing occurrences leads us to:

Vocabulary: [\"b\", \"g\", \"h\", \"n\", \"p\", \"s\", \"u\", \"ug\", \"un\"]\nCorpus: (\"h\" \"ug\", 10), (\"p\" \"ug\", 5), (\"p\" \"un\", 12), (\"b\" \"un\", 4), (\"h\" \"ug\" \"s\", 5)

Now the most frequent pair is ("h", "ug"), so we learn the merge rule ("h", "ug") -> "hug", which gives us our first three-letter token. After the merge, the corpus looks like this:

Vocabulary: [\"b\", \"g\", \"h\", \"n\", \"p\", \"s\", \"u\", \"ug\", \"un\", \"hug\"]\nCorpus: (\"hug\", 10), (\"p\" \"ug\", 5), (\"p\" \"un\", 12), (\"b\" \"un\", 4), (\"hug\" \"s\", 5)

And we continue like this until we reach the desired vocabulary size.

✏️ Now your turn! What do you think the next merge rule will be?

Tokenization algorithm

Tokenization follows the training process closely, in the sense that new inputs are tokenized by applying the following steps:

  1. Normalization
  2. Pre-tokenization
  3. Splitting the words into individual characters
  4. Applying the merge rules learned in order on those splits

Let’s take the example we used during training, with the three merge rules learned:

(\"u\", \"g\") -> \"ug\"\n(\"u\", \"n\") -> \"un\"\n(\"h\", \"ug\") -> \"hug\"

The word \"bug\" will be tokenized as [\"b\", \"ug\"]. \"mug\", however, will be tokenized as [\"[UNK]\", \"ug\"] since the letter \"m\" was not in the base vocabulary. Likewise, the word \"thug\" will be tokenized as [\"[UNK]\", \"hug\"]: the letter \"t\" is not in the base vocabulary, and applying the merge rules results first in \"u\" and \"g\" being merged and then \"hu\" and \"g\" being merged.

✏️ Now your turn! How do you think the word "unhug" will be tokenized?

Implementing BPE

Now let’s take a look at an implementation of the BPE algorithm. This won’t be an optimized version you can actually use on a big corpus; we just want to show you the code so you can understand the algorithm a little bit better.

First we need a corpus, so let’s create a simple one with a few sentences:

corpus = [
    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
]

Next, we need to pre-tokenize that corpus into words. Since we are replicating a BPE tokenizer (like GPT-2), we will use the gpt2 tokenizer for the pre-tokenization:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

Then we compute the frequencies of each word in the corpus as we do the pre-tokenization:

from collections import defaultdict

word_freqs = defaultdict(int)

for text in corpus:
    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)
    new_words = [word for word, offset in words_with_offsets]
    for word in new_words:
        word_freqs[word] += 1

print(word_freqs)
defaultdict(int, {'This': 3, 'Ġis': 2, 'Ġthe': 1, 'ĠHugging': 1, 'ĠFace': 1, 'ĠCourse': 1, '.': 4, 'Ġchapter': 1,
    'Ġabout': 1, 'Ġtokenization': 1, 'Ġsection': 1, 'Ġshows': 1, 'Ġseveral': 1, 'Ġtokenizer': 1, 'Ġalgorithms': 1,
    'Hopefully': 1, ',': 1, 'Ġyou': 1, 'Ġwill': 1, 'Ġbe': 1, 'Ġable': 1, 'Ġto': 1, 'Ġunderstand': 1, 'Ġhow': 1,
    'Ġthey': 1, 'Ġare': 1, 'Ġtrained': 1, 'Ġand': 1, 'Ġgenerate': 1, 'Ġtokens': 1})

The next step is to compute the base vocabulary, formed by all the characters used in the corpus:

alphabet = []

for word in word_freqs.keys():
    for letter in word:
        if letter not in alphabet:
            alphabet.append(letter)
alphabet.sort()

print(alphabet)
[ ',', '.', 'C', 'F', 'H', 'T', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'k', 'l', 'm', 'n', 'o', 'p', 'r', 's',
  't', 'u', 'v', 'w', 'y', 'z', 'Ġ']

We also add the special tokens used by the model at the beginning of that vocabulary. In the case of GPT-2, the only special token is "<|endoftext|>":

vocab = ["<|endoftext|>"] + alphabet.copy()

We now need to split each word into individual characters, to be able to start training:

splits = {word: [c for c in word] for word in word_freqs.keys()}

Now that we are ready for training, let’s write a function that computes the frequency of each pair. We’ll need to use this at each step of the training:

def compute_pair_freqs(splits):
    pair_freqs = defaultdict(int)
    for word, freq in word_freqs.items():
        split = splits[word]
        if len(split) == 1:
            continue
        for i in range(len(split) - 1):
            pair = (split[i], split[i + 1])
            pair_freqs[pair] += freq
    return pair_freqs

Let’s have a look at a part of this dictionary after the initial splits:

pair_freqs = compute_pair_freqs(splits)

for i, key in enumerate(pair_freqs.keys()):
    print(f"{key}: {pair_freqs[key]}")
    if i >= 5:
        break
('T', 'h'): 3
('h', 'i'): 3
('i', 's'): 5
('Ġ', 'i'): 2
('Ġ', 't'): 7
('t', 'h'): 3

Now, finding the most frequent pair only takes a quick loop:

best_pair = ""
max_freq = None

for pair, freq in pair_freqs.items():
    if max_freq is None or max_freq < freq:
        best_pair = pair
        max_freq = freq

print(best_pair, max_freq)
('Ġ', 't') 7
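
The same search can be written more compactly with max(); like the loop above, it keeps the first pair encountered in case of a tie:

best_pair = max(pair_freqs, key=pair_freqs.get)
print(best_pair, pair_freqs[best_pair])  # ('Ġ', 't') 7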

So the first merge to learn is ('Ġ', 't') -> 'Ġt', and we add 'Ġt' to the vocabulary:

merges = {("Ġ", "t"): "Ġt"}
vocab.append("Ġt")

To continue, we need to apply that merge in our splits dictionary. Let’s write another function for this:

def merge_pair(a, b, splits):
    for word in word_freqs:
        split = splits[word]
        if len(split) == 1:
            continue

        i = 0
        while i < len(split) - 1:
            if split[i] == a and split[i + 1] == b:
                split = split[:i] + [a + b] + split[i + 2 :]
            else:
                i += 1
        splits[word] = split
    return splits

And we can have a look at the result of the first merge:

splits = merge_pair("Ġ", "t", splits)
print(splits["Ġtrained"])
['Ġt', 'r', 'a', 'i', 'n', 'e', 'd']

Now we have everything we need to loop until we have learned all the merges we want. Let’s aim for a vocab size of 50:

vocab_size = 50

while len(vocab) < vocab_size:
    pair_freqs = compute_pair_freqs(splits)
    best_pair = ""
    max_freq = None
    for pair, freq in pair_freqs.items():
        if max_freq is None or max_freq < freq:
            best_pair = pair
            max_freq = freq
    splits = merge_pair(*best_pair, splits)
    merges[best_pair] = best_pair[0] + best_pair[1]
    vocab.append(best_pair[0] + best_pair[1])

As a result, we’ve learned 19 merge rules (the initial vocabulary had a size of 31 — 30 characters in the alphabet, plus the special token):

print(merges)
{('Ġ', 't'): 'Ġt', ('i', 's'): 'is', ('e', 'r'): 'er', ('Ġ', 'a'): 'Ġa', ('Ġt', 'o'): 'Ġto', ('e', 'n'): 'en',
 ('T', 'h'): 'Th', ('Th', 'is'): 'This', ('o', 'u'): 'ou', ('s', 'e'): 'se', ('Ġto', 'k'): 'Ġtok',
 ('Ġtok', 'en'): 'Ġtoken', ('n', 'd'): 'nd', ('Ġ', 'is'): 'Ġis', ('Ġt', 'h'): 'Ġth', ('Ġth', 'e'): 'Ġthe',
 ('i', 'n'): 'in', ('Ġa', 'b'): 'Ġab', ('Ġtoken', 'i'): 'Ġtokeni'}

And the vocabulary is composed of the special token, the initial alphabet, and all the results of the merges:

print(vocab)
['<|endoftext|>', ',', '.', 'C', 'F', 'H', 'T', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'k', 'l', 'm', 'n', 'o',
 'p', 'r', 's', 't', 'u', 'v', 'w', 'y', 'z', 'Ġ', 'Ġt', 'is', 'er', 'Ġa', 'Ġto', 'en', 'Th', 'This', 'ou', 'se',
 'Ġtok', 'Ġtoken', 'nd', 'Ġis', 'Ġth', 'Ġthe', 'in', 'Ġab', 'Ġtokeni']

💡 Using train_new_from_iterator() on the same corpus won’t result in the exact same vocabulary. This is because when there is a choice of the most frequent pair, we selected the first one encountered, while the 🤗 Tokenizers library selects the first one based on its inner IDs.
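
For reference, here is a sketch of what that comparison would look like with train_new_from_iterator() on our toy corpus; expect the resulting vocabulary to differ slightly from ours for the reason above:

new_tokenizer = tokenizer.train_new_from_iterator(corpus, vocab_size=50)
print(new_tokenizer.tokenize("This is not a token."))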

To tokenize a new text, we pre-tokenize it, split it, then apply all the merge rules learned:

def tokenize(text):
    pre_tokenize_result = tokenizer._tokenizer.pre_tokenizer.pre_tokenize_str(text)
    pre_tokenized_text = [word for word, offset in pre_tokenize_result]
    splits = [[l for l in word] for word in pre_tokenized_text]
    for pair, merge in merges.items():
        for idx, split in enumerate(splits):
            i = 0
            while i < len(split) - 1:
                if split[i] == pair[0] and split[i + 1] == pair[1]:
                    split = split[:i] + [merge] + split[i + 2 :]
                else:
                    i += 1
            splits[idx] = split

    return sum(splits, [])

We can try this on any text composed of characters in the alphabet:

tokenize("This is not a token.")
['This', 'Ġis', 'Ġ', 'n', 'o', 't', 'Ġa', 'Ġtoken', '.']

⚠️ Our implementation will throw an error if there is an unknown character since we didn’t do anything to handle them. GPT-2 doesn’t actually have an unknown token (it’s impossible to get an unknown character when using byte-level BPE), but this could happen here because we did not include all the possible bytes in the initial vocabulary. This aspect of BPE is beyond the scope of this section, so we’ve left the details out.

That’s it for the BPE algorithm! Next, we’ll have a look at WordPiece.

This is because the 🤗 Tokenizers library does not implement WordPiece for the training (since we are not completely sure of its internals), but uses BPE instead.\n\nTo tokenize a new text, we pre-tokenize it, split it, then apply the tokenization algorithm on each word. That is, we look for the biggest subword starting at the beginning of the first word and split it, then we repeat the process on the second part, and so on for the rest of that word and the following words in the text:\n\n```\ndef encode_word(word):\n tokens = []\n while len(word) > 0:\n i = len(word)\n while i > 0 and word[:i] not in vocab:\n i -= 1\n if i == 0:\n return [\"[UNK]\"]\n tokens.append(word[:i])\n word = word[i:]\n if len(word) > 0:\n word = f\"##{word}\"\n return tokens```\n\nLet’s test it on one word that’s in the vocabulary, and another that isn’t:\n\n```\nprint(encode_word(\"Hugging\"))\nprint(encode_word(\"HOgging\"))```\n\n```\n['Hugg', '##i', '##n', '##g']\n['[UNK]']```\n\nNow, let’s write a function that tokenizes a text:\n\n```\ndef tokenize(text):\n pre_tokenize_result = tokenizer._tokenizer.pre_tokenizer.pre_tokenize_str(text)\n pre_tokenized_text = [word for word, offset in pre_tokenize_result]\n encoded_words = [encode_word(word) for word in pre_tokenized_text]\n return sum(encoded_words, [])```\n\nWe can try it on any text:\n\n```\ntokenize(\"This is the Hugging Face course!\")```\n\n```\n['Th', '##i', '##s', 'is', 'th', '##e', 'Hugg', '##i', '##n', '##g', 'Fac', '##e', 'c', '##o', '##u', '##r', '##s',\n '##e', '[UNK]']```\n\nThat’s it for the WordPiece algorithm! Now let’s take a look at Unigram.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tWordPiece tokenization - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
## [](#wordpiece-tokenization)WordPiece tokenization

\"Ask \"Open \"Open

WordPiece is the tokenization algorithm Google developed to pretrain BERT. It has since been reused in quite a few Transformer models based on BERT, such as DistilBERT, MobileBERT, Funnel Transformers, and MPNET. It’s very similar to BPE in terms of the training, but the actual tokenization is done differently.

💡 This section covers WordPiece in depth, going as far as showing a full implementation. You can skip to the end if you just want a general overview of the tokenization algorithm.

## [](#training-algorithm)Training algorithm

⚠️ Google never open-sourced its implementation of the training algorithm of WordPiece, so what follows is our best guess based on the published literature. It may not be 100% accurate.

Like BPE, WordPiece starts from a small vocabulary including the special tokens used by the model and the initial alphabet. Since it identifies subwords by adding a prefix (like ## for BERT), each word is initially split by adding that prefix to all the characters inside the word. So, for instance, \"word\" gets split like this:

w ##o ##r ##d

Thus, the initial alphabet contains all the characters present at the beginning of a word and the characters present inside a word preceded by the WordPiece prefix.
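For instance, here is a minimal sketch of that initial split (using a hypothetical `initial_split` helper, not part of the library):

```
def initial_split(word, prefix="##"):
    # Keep the first character as-is and prefix every following character
    return [word[0]] + [prefix + c for c in word[1:]]

print(initial_split("word"))  # ['w', '##o', '##r', '##d']
```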

Then, again like BPE, WordPiece learns merge rules. The main difference is the way the pair to be merged is selected. Instead of selecting the most frequent pair, WordPiece computes a score for each pair, using the following formula:

$$\mathrm{score} = \frac{\mathrm{freq\_of\_pair}}{\mathrm{freq\_of\_first\_element} \times \mathrm{freq\_of\_second\_element}}$$

By dividing the frequency of the pair by the product of the frequencies of each of its parts, the algorithm prioritizes the merging of pairs where the individual parts are less frequent in the vocabulary. For instance, it won’t necessarily merge ("un", "##able") even if that pair occurs very frequently in the corpus, because the two parts "un" and "##able" will likely each appear in a lot of other words and have a high frequency. In contrast, a pair like ("hu", "##gging") will probably be merged faster (assuming the word “hugging” appears often in the corpus) since "hu" and "##gging" are likely to be less frequent individually.

Let’s look at the same vocabulary we used in the BPE training example:

(\"hug\", 10), (\"pug\", 5), (\"pun\", 12), (\"bun\", 4), (\"hugs\", 5)

The splits here will be:

(\"h\" \"##u\" \"##g\", 10), (\"p\" \"##u\" \"##g\", 5), (\"p\" \"##u\" \"##n\", 12), (\"b\" \"##u\" \"##n\", 4), (\"h\" \"##u\" \"##g\" \"##s\", 5)

so the initial vocabulary will be [\"b\", \"h\", \"p\", \"##g\", \"##n\", \"##s\", \"##u\"] (if we forget about special tokens for now). The most frequent pair is (\"##u\", \"##g\") (present 20 times), but the individual frequency of \"##u\" is very high, so its score is not the highest (it’s 1 / 36). All pairs with a \"##u\" actually have that same score (1 / 36), so the best score goes to the pair (\"##g\", \"##s\") — the only one without a \"##u\" — at 1 / 20, and the first merge learned is (\"##g\", \"##s\") -> (\"##gs\").

Note that when we merge, we remove the ## between the two tokens, so we add \"##gs\" to the vocabulary and apply the merge in the words of the corpus:

Vocabulary: [\"b\", \"h\", \"p\", \"##g\", \"##n\", \"##s\", \"##u\", \"##gs\"]\nCorpus: (\"h\" \"##u\" \"##g\", 10), (\"p\" \"##u\" \"##g\", 5), (\"p\" \"##u\" \"##n\", 12), (\"b\" \"##u\" \"##n\", 4), (\"h\" \"##u\" \"##gs\", 5)

At this point, \"##u\" is in all the possible pairs, so they all end up with the same score. Let’s say that in this case, the first pair is merged, so (\"h\", \"##u\") -> \"hu\". This takes us to:

Vocabulary: [\"b\", \"h\", \"p\", \"##g\", \"##n\", \"##s\", \"##u\", \"##gs\", \"hu\"]\nCorpus: (\"hu\" \"##g\", 10), (\"p\" \"##u\" \"##g\", 5), (\"p\" \"##u\" \"##n\", 12), (\"b\" \"##u\" \"##n\", 4), (\"hu\" \"##gs\", 5)

Then the next best score is shared by (\"hu\", \"##g\") and (\"hu\", \"##gs\") (with 1/15, compared to 1/21 for all the other pairs), so the first pair with the biggest score is merged:

Vocabulary: [\"b\", \"h\", \"p\", \"##g\", \"##n\", \"##s\", \"##u\", \"##gs\", \"hu\", \"hug\"]\nCorpus: (\"hug\", 10), (\"p\" \"##u\" \"##g\", 5), (\"p\" \"##u\" \"##n\", 12), (\"b\" \"##u\" \"##n\", 4), (\"hu\" \"##gs\", 5)

and we continue like this until we reach the desired vocabulary size.
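To double-check the arithmetic of this toy example, here is a small sketch (hard-coding the word counts above) that computes the pair scores on the initial splits and confirms that ("##g", "##s") has the best one:

```
from collections import Counter

word_freqs = {"hug": 10, "pug": 5, "pun": 12, "bun": 4, "hugs": 5}
splits = {w: [w[0]] + [f"##{c}" for c in w[1:]] for w in word_freqs}

token_freqs, pair_freqs = Counter(), Counter()
for word, freq in word_freqs.items():
    split = splits[word]
    for token in split:
        token_freqs[token] += freq
    for pair in zip(split, split[1:]):
        pair_freqs[pair] += freq

scores = {pair: f / (token_freqs[pair[0]] * token_freqs[pair[1]]) for pair, f in pair_freqs.items()}
print(max(scores, key=scores.get))  # ('##g', '##s'), with score 1/20 = 0.05
```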

✏️ Now your turn! What will the next merge rule be?

## [](#tokenization-algorithm)Tokenization algorithm

Tokenization differs in WordPiece and BPE in that WordPiece only saves the final vocabulary, not the merge rules learned. Starting from the word to tokenize, WordPiece finds the longest subword that is in the vocabulary, then splits on it. For instance, if we use the vocabulary learned in the example above, for the word \"hugs\" the longest subword starting from the beginning that is inside the vocabulary is \"hug\", so we split there and get [\"hug\", \"##s\"]. We then continue with \"##s\", which is in the vocabulary, so the tokenization of \"hugs\" is [\"hug\", \"##s\"].

With BPE, we would have applied the merges learned in order and tokenized this as [\"hu\", \"##gs\"], so the encoding is different.

As another example, let’s see how the word "bugs" would be tokenized. "b" is the longest subword starting at the beginning of the word that is in the vocabulary, so we split there and get ["b", "##ugs"]. Then "##u" is the longest subword starting at the beginning of "##ugs" that is in the vocabulary, so we split there and get ["b", "##u", "##gs"]. Finally, "##gs" is in the vocabulary, so this last list is the tokenization of "bugs".

When the tokenization gets to a stage where it’s not possible to find a subword in the vocabulary, the whole word is tokenized as unknown — so, for instance, "mug" would be tokenized as ["[UNK]"], as would "bum" (even if we can begin with "b" and "##u", "##m" is not in the vocabulary, and the resulting tokenization will just be ["[UNK]"], not ["b", "##u", "[UNK]"]). This is another difference from BPE, which would only classify the individual characters not in the vocabulary as unknown.
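To make this greedy longest-match rule concrete, here is a minimal sketch that applies it with the toy vocabulary from the training example above (`wordpiece_tokenize` is just an illustrative name, not a library function):

```
vocab = ["b", "h", "p", "##g", "##n", "##s", "##u", "##gs", "hu", "hug"]

def wordpiece_tokenize(word, vocab):
    tokens = []
    while word:
        i = len(word)
        # Look for the longest prefix of the remaining word that is in the vocabulary
        while i > 0 and word[:i] not in vocab:
            i -= 1
        if i == 0:
            return ["[UNK]"]  # no prefix found: the whole word is unknown
        tokens.append(word[:i])
        word = word[i:]
        if word:
            word = f"##{word}"
    return tokens

print(wordpiece_tokenize("hugs", vocab))  # ['hug', '##s']
print(wordpiece_tokenize("bugs", vocab))  # ['b', '##u', '##gs']
print(wordpiece_tokenize("mug", vocab))   # ['[UNK]']
```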

✏️ Now your turn! How will the word \"pugs\" be tokenized?

## [](#implementing-wordpiece)Implementing WordPiece

Now let’s take a look at an implementation of the WordPiece algorithm. Like with BPE, this is just pedagogical, and you won’t be able to use it on a big corpus.

We will use the same corpus as in the BPE example:

corpus = [\n    \"This is the Hugging Face Course.\",\n    \"This chapter is about tokenization.\",\n    \"This section shows several tokenizer algorithms.\",\n    \"Hopefully, you will be able to understand how they are trained and generate tokens.\",\n]

First, we need to pre-tokenize the corpus into words. Since we are replicating a WordPiece tokenizer (like BERT), we will use the bert-base-cased tokenizer for the pre-tokenization:

from transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")

Then we compute the frequencies of each word in the corpus as we do the pre-tokenization:

from collections import defaultdict\n\nword_freqs = defaultdict(int)\nfor text in corpus:\n    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)\n    new_words = [word for word, offset in words_with_offsets]\n    for word in new_words:\n        word_freqs[word] += 1\n\nword_freqs
defaultdict(\n    int, {'This': 3, 'is': 2, 'the': 1, 'Hugging': 1, 'Face': 1, 'Course': 1, '.': 4, 'chapter': 1, 'about': 1,\n    'tokenization': 1, 'section': 1, 'shows': 1, 'several': 1, 'tokenizer': 1, 'algorithms': 1, 'Hopefully': 1,\n    ',': 1, 'you': 1, 'will': 1, 'be': 1, 'able': 1, 'to': 1, 'understand': 1, 'how': 1, 'they': 1, 'are': 1,\n    'trained': 1, 'and': 1, 'generate': 1, 'tokens': 1})

As we saw before, the alphabet is the unique set composed of all the first letters of words, and all the other letters that appear in words prefixed by ##:

```
alphabet = []
for word in word_freqs.keys():
    if word[0] not in alphabet:
        alphabet.append(word[0])
    for letter in word[1:]:
        if f"##{letter}" not in alphabet:
            alphabet.append(f"##{letter}")

alphabet.sort()
print(alphabet)
```
['##a', '##b', '##c', '##d', '##e', '##f', '##g', '##h', '##i', '##k', '##l', '##m', '##n', '##o', '##p', '##r', '##s',\n '##t', '##u', '##v', '##w', '##y', '##z', ',', '.', 'C', 'F', 'H', 'T', 'a', 'b', 'c', 'g', 'h', 'i', 's', 't', 'u',\n 'w', 'y']

We also add the special tokens used by the model at the beginning of that vocabulary. In the case of BERT, it’s the list [\"[PAD]\", \"[UNK]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"]:

vocab = [\"[PAD]\", \"[UNK]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"] + alphabet.copy()

Next we need to split each word, with all the letters that are not the first prefixed by ##:

splits = {\n    word: [c if i == 0 else f\"##{c}\" for i, c in enumerate(word)]\n    for word in word_freqs.keys()\n}

Now that we are ready for training, let’s write a function that computes the score of each pair. We’ll need to use this at each step of the training:

def compute_pair_scores(splits):\n    letter_freqs = defaultdict(int)\n    pair_freqs = defaultdict(int)\n    for word, freq in word_freqs.items():\n        split = splits[word]\n        if len(split) == 1:\n            letter_freqs[split[0]] += freq\n            continue\n        for i in range(len(split) - 1):\n            pair = (split[i], split[i + 1])\n            letter_freqs[split[i]] += freq\n            pair_freqs[pair] += freq\n        letter_freqs[split[-1]] += freq\n\n    scores = {\n        pair: freq / (letter_freqs[pair[0]] * letter_freqs[pair[1]])\n        for pair, freq in pair_freqs.items()\n    }\n    return scores

Let’s have a look at a part of this dictionary after the initial splits:

pair_scores = compute_pair_scores(splits)\nfor i, key in enumerate(pair_scores.keys()):\n    print(f\"{key}: {pair_scores[key]}\")\n    if i >= 5:\n        break
('T', '##h'): 0.125\n('##h', '##i'): 0.03409090909090909\n('##i', '##s'): 0.02727272727272727\n('i', '##s'): 0.1\n('t', '##h'): 0.03571428571428571\n('##h', '##e'): 0.011904761904761904

Now, finding the pair with the best score only takes a quick loop:

best_pair = \"\"\nmax_score = None\nfor pair, score in pair_scores.items():\n    if max_score is None or max_score < score:\n        best_pair = pair\n        max_score = score\n\nprint(best_pair, max_score)
('a', '##b') 0.2

So the first merge to learn is ('a', '##b') -> 'ab', and we add 'ab' to the vocabulary:

vocab.append(\"ab\")

To continue, we need to apply that merge in our splits dictionary. Let’s write another function for this:

def merge_pair(a, b, splits):\n    for word in word_freqs:\n        split = splits[word]\n        if len(split) == 1:\n            continue\n        i = 0\n        while i < len(split) - 1:\n            if split[i] == a and split[i + 1] == b:\n                merge = a + b[2:] if b.startswith(\"##\") else a + b\n                split = split[:i] + [merge] + split[i + 2 :]\n            else:\n                i += 1\n        splits[word] = split\n    return splits

And we can have a look at the result of the first merge:

splits = merge_pair(\"a\", \"##b\", splits)\nsplits[\"about\"]
['ab', '##o', '##u', '##t']

Now we have everything we need to loop until we have learned all the merges we want. Let’s aim for a vocab size of 70:

vocab_size = 70\nwhile len(vocab) < vocab_size:\n    scores = compute_pair_scores(splits)\n    best_pair, max_score = \"\", None\n    for pair, score in scores.items():\n        if max_score is None or max_score < score:\n            best_pair = pair\n            max_score = score\n    splits = merge_pair(*best_pair, splits)\n    new_token = (\n        best_pair[0] + best_pair[1][2:]\n        if best_pair[1].startswith(\"##\")\n        else best_pair[0] + best_pair[1]\n    )\n    vocab.append(new_token)

We can then look at the generated vocabulary:

print(vocab)
['[PAD]', '[UNK]', '[CLS]', '[SEP]', '[MASK]', '##a', '##b', '##c', '##d', '##e', '##f', '##g', '##h', '##i', '##k',\n '##l', '##m', '##n', '##o', '##p', '##r', '##s', '##t', '##u', '##v', '##w', '##y', '##z', ',', '.', 'C', 'F', 'H',\n 'T', 'a', 'b', 'c', 'g', 'h', 'i', 's', 't', 'u', 'w', 'y', 'ab', '##fu', 'Fa', 'Fac', '##ct', '##ful', '##full', '##fully',\n 'Th', 'ch', '##hm', 'cha', 'chap', 'chapt', '##thm', 'Hu', 'Hug', 'Hugg', 'sh', 'th', 'is', '##thms', '##za', '##zat',\n '##ut']

As we can see, compared to BPE, this tokenizer learns parts of words as tokens a bit faster.

💡 Using train_new_from_iterator() on the same corpus won’t result in the exact same vocabulary. This is because the 🤗 Tokenizers library does not implement WordPiece for the training (since we are not completely sure of its internals), but uses BPE instead.

To tokenize a new text, we pre-tokenize it, split it, then apply the tokenization algorithm on each word. That is, we look for the biggest subword starting at the beginning of the first word and split it, then we repeat the process on the second part, and so on for the rest of that word and the following words in the text:

def encode_word(word):\n    tokens = []\n    while len(word) > 0:\n        i = len(word)\n        while i > 0 and word[:i] not in vocab:\n            i -= 1\n        if i == 0:\n            return [\"[UNK]\"]\n        tokens.append(word[:i])\n        word = word[i:]\n        if len(word) > 0:\n            word = f\"##{word}\"\n    return tokens

Let’s test it on one word that’s in the vocabulary, and another that isn’t:

print(encode_word(\"Hugging\"))\nprint(encode_word(\"HOgging\"))
['Hugg', '##i', '##n', '##g']\n['[UNK]']

Now, let’s write a function that tokenizes a text:

```
def tokenize(text):
    pre_tokenize_result = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)
    pre_tokenized_text = [word for word, offset in pre_tokenize_result]
    encoded_words = [encode_word(word) for word in pre_tokenized_text]
    return sum(encoded_words, [])
```

We can try it on any text:

tokenize(\"This is the Hugging Face course!\")
['Th', '##i', '##s', 'is', 'th', '##e', 'Hugg', '##i', '##n', '##g', 'Fac', '##e', 'c', '##o', '##u', '##r', '##s',\n '##e', '[UNK]']
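For comparison, you can run the same sentence through the actual bert-base-cased tokenizer loaded earlier; its output will differ from ours because the real model was trained on a much larger corpus and has a much bigger vocabulary:

```
print(tokenizer.tokenize("This is the Hugging Face course!"))
```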

That’s it for the WordPiece algorithm! Now let’s take a look at Unigram.

## [](#unigram-tokenization)Unigram tokenization

\"Ask \"Open \"Open

The Unigram algorithm is often used in SentencePiece, which is the tokenization algorithm used by models like ALBERT, T5, mBART, Big Bird, and XLNet.

💡 This section covers Unigram in depth, going as far as showing a full implementation. You can skip to the end if you just want a general overview of the tokenization algorithm.

## [](#training-algorithm)Training algorithm

Compared to BPE and WordPiece, Unigram works in the other direction: it starts from a big vocabulary and removes tokens from it until it reaches the desired vocabulary size. There are several options to use to build that base vocabulary: we can take the most common substrings in pre-tokenized words, for instance, or apply BPE on the initial corpus with a large vocabulary size.

At each step of the training, the Unigram algorithm computes a loss over the corpus given the current vocabulary. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol was removed, and looks for the symbols that would increase it the least. Those symbols have a lower effect on the overall loss over the corpus, so in a sense they are “less needed” and are the best candidates for removal.

This is all a very costly operation, so we don’t just remove the single symbol associated with the lowest loss increase, but the p percent of the symbols associated with the lowest loss increase, where p is a hyperparameter you can control (usually 10 or 20). This process is then repeated until the vocabulary has reached the desired size.

Note that we never remove the base characters, to make sure any word can be tokenized.

Now, this is still a bit vague: the main part of the algorithm is to compute a loss over the corpus and see how it changes when we remove some tokens from the vocabulary, but we haven’t explained how to do this yet. This step relies on the tokenization algorithm of a Unigram model, so we’ll dive into this next.

We’ll reuse the corpus from the previous examples:

(\"hug\", 10), (\"pug\", 5), (\"pun\", 12), (\"bun\", 4), (\"hugs\", 5)

and for this example, we will take all strict substrings for the initial vocabulary:

[\"h\", \"u\", \"g\", \"hu\", \"ug\", \"p\", \"pu\", \"n\", \"un\", \"b\", \"bu\", \"s\", \"hug\", \"gs\", \"ugs\"]

## [](#tokenization-algorithm)Tokenization algorithm

A Unigram model is a type of language model that considers each token to be independent of the tokens before it. It’s the simplest language model, in the sense that the probability of token X given the previous context is just the probability of token X. So, if we used a Unigram language model to generate text, we would always predict the most common token.

The probability of a given token is its frequency (the number of times we find it) in the original corpus, divided by the sum of all frequencies of all tokens in the vocabulary (to make sure the probabilities sum up to 1). For instance, \"ug\" is present in \"hug\", \"pug\", and \"hugs\", so it has a frequency of 20 in our corpus.

Here are the frequencies of all the possible subwords in the vocabulary:

(\"h\", 15) (\"u\", 36) (\"g\", 20) (\"hu\", 15) (\"ug\", 20) (\"p\", 17) (\"pu\", 17) (\"n\", 16)\n(\"un\", 16) (\"b\", 4) (\"bu\", 4) (\"s\", 5) (\"hug\", 15) (\"gs\", 5) (\"ugs\", 5)

So, the sum of all frequencies is 210, and the probability of the subword \"ug\" is thus 20/210.

✏️ Now your turn! Write the code to compute the frequencies above and double-check that the results shown are correct, as well as the total sum.

Now, to tokenize a given word, we look at all the possible segmentations into tokens and compute the probability of each according to the Unigram model. Since all tokens are considered independent, this probability is just the product of the probability of each token. For instance, the tokenization ["p", "u", "g"] of "pug" has the probability:

$$P([\text{"p"}, \text{"u"}, \text{"g"}]) = P(\text{"p"}) \times P(\text{"u"}) \times P(\text{"g"}) = \frac{17}{210} \times \frac{36}{210} \times \frac{20}{210} = 0.001322$$

Comparatively, the tokenization ["pu", "g"] has the probability:

$$P([\text{"pu"}, \text{"g"}]) = P(\text{"pu"}) \times P(\text{"g"}) = \frac{17}{210} \times \frac{20}{210} = 0.007710$$

so that one is way more likely. In general, tokenizations with the least tokens possible will have the highest probability (because of that division by 210 repeated for each token), which corresponds to what we want intuitively: to split a word into the least number of tokens possible.

The tokenization of a word with the Unigram model is then the tokenization with the highest probability. In the example of \"pug\", here are the probabilities we would get for each possible segmentation:

[\"p\", \"u\", \"g\"] : 0.000389\n[\"p\", \"ug\"] : 0.0022676\n[\"pu\", \"g\"] : 0.0022676

So, \"pug\" would be tokenized as [\"p\", \"ug\"] or [\"pu\", \"g\"], depending on which of those segmentations is encountered first (note that in a larger corpus, equality cases like this will be rare).

In this case, it was easy to find all the possible segmentations and compute their probabilities, but in general it’s going to be a bit harder. There is a classic algorithm used for this, called the Viterbi algorithm. Essentially, we can build a graph to detect the possible segmentations of a given word by saying there is a branch from character a to character b if the subword from a to b is in the vocabulary, and attribute to that branch the probability of the subword.

To find the path in that graph that is going to have the best score, the Viterbi algorithm determines, for each position in the word, the segmentation with the best score that ends at that position. Since we go from the beginning to the end, that best score can be found by looping through all subwords ending at the current position and then using the best tokenization score from the position this subword begins at. Then, we just have to unroll the path taken to arrive at the end.

Let’s take a look at an example using our vocabulary and the word \"unhug\". For each position, the subwords with the best scores ending there are the following:

Character 0 (u): \"u\" (score 0.171429)\nCharacter 1 (n): \"un\" (score 0.076191)\nCharacter 2 (h): \"un\" \"h\" (score 0.005442)\nCharacter 3 (u): \"un\" \"hu\" (score 0.005442)\nCharacter 4 (g): \"un\" \"hug\" (score 0.005442)

Thus \"unhug\" would be tokenized as [\"un\", \"hug\"].

✏️ Now your turn! Determine the tokenization of the word \"huggun\", and its score.

## [](#back-to-training)Back to training

Now that we have seen how the tokenization works, we can dive a little more deeply into the loss used during training. At any given stage, this loss is computed by tokenizing every word in the corpus, using the current vocabulary and the Unigram model determined by the frequencies of each token in the corpus (as seen before).

Each word in the corpus has a score, and the loss is the negative log likelihood of those scores — that is, the sum for all the words in the corpus of all the -log(P(word)).

Let’s go back to our example with the following corpus:

(\"hug\", 10), (\"pug\", 5), (\"pun\", 12), (\"bun\", 4), (\"hugs\", 5)

The tokenization of each word, with its respective score, is:

\"hug\": [\"hug\"] (score 0.071428)\n\"pug\": [\"pu\", \"g\"] (score 0.007710)\n\"pun\": [\"pu\", \"n\"] (score 0.006168)\n\"bun\": [\"bu\", \"n\"] (score 0.001451)\n\"hugs\": [\"hug\", \"s\"] (score 0.001701)

So the loss is:

10 * (-log(0.071428)) + 5 * (-log(0.007710)) + 12 * (-log(0.006168)) + 4 * (-log(0.001451)) + 5 * (-log(0.001701)) = 169.8
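As a sanity check of that arithmetic, here is a short sketch (hard-coding the scores and word counts above, and using the natural logarithm) that recomputes the sum:

```
from math import log

word_freqs = {"hug": 10, "pug": 5, "pun": 12, "bun": 4, "hugs": 5}
word_scores = {"hug": 0.071428, "pug": 0.007710, "pun": 0.006168, "bun": 0.001451, "hugs": 0.001701}

loss = sum(freq * -log(word_scores[word]) for word, freq in word_freqs.items())
print(round(loss, 1))  # 169.8
```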

Now we need to compute how removing each token affects the loss. This is rather tedious, so we’ll just do it for two tokens here and save the whole process for when we have code to help us. In this (very) particular case, we had two equivalent tokenizations of all the words: as we saw earlier, for example, \"pug\" could be tokenized [\"p\", \"ug\"] with the same score. Thus, removing the \"pu\" token from the vocabulary will give the exact same loss.

On the other hand, removing \"hug\" will make the loss worse, because the tokenization of \"hug\" and \"hugs\" will become:

\"hug\": [\"hu\", \"g\"] (score 0.006802)\n\"hugs\": [\"hu\", \"gs\"] (score 0.001701)

These changes will cause the loss to rise by:

- 10 * (-log(0.071428)) + 10 * (-log(0.006802)) = 23.5
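Here too we can verify the difference with a couple of lines (the "hugs" term doesn’t change, since its score stays at 0.001701):

```
from math import log

old = 10 * -log(0.071428)  # "hug" tokenized as ["hug"]
new = 10 * -log(0.006802)  # "hug" tokenized as ["hu", "g"] after removing "hug"
print(round(new - old, 1))  # 23.5
```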

Therefore, the token \"pu\" will probably be removed from the vocabulary, but not \"hug\".

## [](#implementing-unigram)Implementing Unigram

Now let’s implement everything we’ve seen so far in code. Like with BPE and WordPiece, this is not an efficient implementation of the Unigram algorithm (quite the opposite), but it should help you understand it a bit better.

We will use the same corpus as before as an example:

corpus = [\n    \"This is the Hugging Face Course.\",\n    \"This chapter is about tokenization.\",\n    \"This section shows several tokenizer algorithms.\",\n    \"Hopefully, you will be able to understand how they are trained and generate tokens.\",\n]

This time, we will use xlnet-base-cased as our model:

from transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\"xlnet-base-cased\")

Like for BPE and WordPiece, we begin by counting the number of occurrences of each word in the corpus:

from collections import defaultdict\n\nword_freqs = defaultdict(int)\nfor text in corpus:\n    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)\n    new_words = [word for word, offset in words_with_offsets]\n    for word in new_words:\n        word_freqs[word] += 1\n\nword_freqs

Then, we need to initialize our vocabulary to something larger than the vocab size we will want at the end. We have to include all the basic characters (otherwise we won’t be able to tokenize every word), but for the bigger substrings we’ll only keep the most common ones, so we sort them by frequency:

char_freqs = defaultdict(int)\nsubwords_freqs = defaultdict(int)\nfor word, freq in word_freqs.items():\n    for i in range(len(word)):\n        char_freqs[word[i]] += freq\n        # Loop through the subwords of length at least 2\n        for j in range(i + 2, len(word) + 1):\n            subwords_freqs[word[i:j]] += freq\n\n# Sort subwords by frequency\nsorted_subwords = sorted(subwords_freqs.items(), key=lambda x: x[1], reverse=True)\nsorted_subwords[:10]
[('▁t', 7), ('is', 5), ('er', 5), ('▁a', 5), ('▁to', 4), ('to', 4), ('en', 4), ('▁T', 3), ('▁Th', 3), ('▁Thi', 3)]

We group the characters with the best subwords to arrive at an initial vocabulary of size 300:

token_freqs = list(char_freqs.items()) + sorted_subwords[: 300 - len(char_freqs)]\ntoken_freqs = {token: freq for token, freq in token_freqs}

💡 SentencePiece uses a more efficient algorithm called Enhanced Suffix Array (ESA) to create the initial vocabulary.

Next, we compute the sum of all frequencies, to convert the frequencies into probabilities. For our model we will store the logarithms of the probabilities, because it’s more numerically stable to add logarithms than to multiply small numbers, and this will simplify the computation of the loss of the model:

from math import log\n\ntotal_sum = sum([freq for token, freq in token_freqs.items()])\nmodel = {token: -log(freq / total_sum) for token, freq in token_freqs.items()}

Now the main function is the one that tokenizes words using the Viterbi algorithm. As we saw before, that algorithm computes the best segmentation of each substring of the word, which we will store in a variable named best_segmentations. We will store one dictionary per position in the word (from 0 to its total length), with two keys: the index of the start of the last token in the best segmentation, and the score of the best segmentation. With the index of the start of the last token, we will be able to retrieve the full segmentation once the list is completely populated.

Populating the list is done with just two loops: the main loop goes over each start position, and the second loop tries all substrings beginning at that start position. If the substring is in the vocabulary, we have a new segmentation of the word up until that end position, which we compare to what is in best_segmentations.

Once the main loop is finished, we just start from the end and hop from one start position to the next, recording the tokens as we go, until we reach the start of the word:

def encode_word(word, model):\n    best_segmentations = [{\"start\": 0, \"score\": 1}] + [\n        {\"start\": None, \"score\": None} for _ in range(len(word))\n    ]\n    for start_idx in range(len(word)):\n        # This should be properly filled by the previous steps of the loop\n        best_score_at_start = best_segmentations[start_idx][\"score\"]\n        for end_idx in range(start_idx + 1, len(word) + 1):\n            token = word[start_idx:end_idx]\n            if token in model and best_score_at_start is not None:\n                score = model[token] + best_score_at_start\n                # If we have found a better segmentation ending at end_idx, we update\n                if (\n                    best_segmentations[end_idx][\"score\"] is None\n                    or best_segmentations[end_idx][\"score\"] > score\n                ):\n                    best_segmentations[end_idx] = {\"start\": start_idx, \"score\": score}\n\n    segmentation = best_segmentations[-1]\n    if segmentation[\"score\"] is None:\n        # We did not find a tokenization of the word -> unknown\n        return [\"<unk>\"], None\n\n    score = segmentation[\"score\"]\n    start = segmentation[\"start\"]\n    end = len(word)\n    tokens = []\n    while start != 0:\n        tokens.insert(0, word[start:end])\n        next_start = best_segmentations[start][\"start\"]\n        end = start\n        start = next_start\n    tokens.insert(0, word[start:end])\n    return tokens, score

We can already try our initial model on some words:

print(encode_word(\"Hopefully\", model))\nprint(encode_word(\"This\", model))
(['H', 'o', 'p', 'e', 'f', 'u', 'll', 'y'], 41.5157494601402)\n(['This'], 6.288267030694535)

Now it’s easy to compute the loss of the model on the corpus!

def compute_loss(model):\n    loss = 0\n    for word, freq in word_freqs.items():\n        _, word_loss = encode_word(word, model)\n        loss += freq * word_loss\n    return loss

We can check it works on the model we have:

compute_loss(model)
413.10377642940875

Computing the scores for each token is not very hard either; we just have to compute the loss for the models obtained by deleting each token:

import copy\n\n\ndef compute_scores(model):\n    scores = {}\n    model_loss = compute_loss(model)\n    for token, score in model.items():\n        # We always keep tokens of length 1\n        if len(token) == 1:\n            continue\n        model_without_token = copy.deepcopy(model)\n        _ = model_without_token.pop(token)\n        scores[token] = compute_loss(model_without_token) - model_loss\n    return scores

We can try it on a given token:

scores = compute_scores(model)\nprint(scores[\"ll\"])\nprint(scores[\"his\"])

Since \"ll\" is used in the tokenization of \"Hopefully\", and removing it will probably make us use the token \"l\" twice instead, we expect it will have a positive loss. \"his\" is only used inside the word \"This\", which is tokenized as itself, so we expect it to have a zero loss. Here are the results:

6.376412403623874\n0.0

💡 This approach is very inefficient, so SentencePiece uses an approximation of the loss of the model without token X: instead of starting from scratch, it just replaces token X by its segmentation in the vocabulary that is left. This way, all the scores can be computed in a single pass, at the same time as the model loss.
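To make the idea concrete, here is a rough sketch of such an approximation using the helpers defined above. It is not the actual SentencePiece implementation: it simply estimates the cost of removing a token as the extra cost of re-encoding that token with the remaining vocabulary, weighted by how often the token is used (and it ignores the renormalization of the probabilities):

```python
def approximate_scores(model, token_freqs):
    # Sketch only -- estimates each token's score without recomputing the full corpus loss
    scores = {}
    for token, freq in token_freqs.items():
        if len(token) == 1:
            continue  # we always keep tokens of length 1
        # Best segmentation of the token once it is removed from the vocabulary
        model_without_token = {t: s for t, s in model.items() if t != token}
        _, segmentation_score = encode_word(token, model_without_token)
        if segmentation_score is None:
            continue  # the token cannot be segmented without itself
        # Extra cost of replacing every occurrence of the token by that segmentation
        scores[token] = freq * (segmentation_score - model[token])
    return scores
```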

With all of this in place, the last thing we need to do is add the special tokens used by the model to the vocabulary, then loop until we have pruned enough tokens from the vocabulary to reach our desired size:

percent_to_remove = 0.1\nwhile len(model) > 100:\n    scores = compute_scores(model)\n    sorted_scores = sorted(scores.items(), key=lambda x: x[1])\n    # Remove percent_to_remove tokens with the lowest scores.\n    for i in range(int(len(model) * percent_to_remove)):\n        _ = token_freqs.pop(sorted_scores[i][0])\n\n    total_sum = sum([freq for token, freq in token_freqs.items()])\n    model = {token: -log(freq / total_sum) for token, freq in token_freqs.items()}

Then, to tokenize some text, we just need to apply the pre-tokenization and then use our encode_word() function:

def tokenize(text, model):\n    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)\n    pre_tokenized_text = [word for word, offset in words_with_offsets]\n    encoded_words = [encode_word(word, model)[0] for word in pre_tokenized_text]\n    return sum(encoded_words, [])\n\n\ntokenize(\"This is the Hugging Face course.\", model)
['▁This', '▁is', '▁the', '▁Hugging', '▁Face', '▁', 'c', 'ou', 'r', 's', 'e', '.']

That’s it for Unigram! Hopefully by now you’re feeling like an expert in all things tokenizer. In the next section, we will delve into the building blocks of the 🤗 Tokenizers library, and show you how you can use them to build your own tokenizer.

Building a tokenizer, block by block

\"Ask \"Open \"Open

As we’ve seen in the previous sections, tokenization comprises several steps:

  • Normalization (any cleanup of the text that is deemed necessary, such as removing spaces or accents, Unicode normalization, etc.)
  • Pre-tokenization (splitting the input into words)
  • Running the input through the model (using the pre-tokenized words to produce a sequence of tokens)
  • Post-processing (adding the special tokens of the tokenizer, generating the attention mask and token type IDs)

As a reminder, here’s another look at the overall process:

\"The \"The

The 🤗 Tokenizers library has been built to provide several options for each of those steps, which you can mix and match together. In this section we’ll see how we can build a tokenizer from scratch, as opposed to training a new tokenizer from an old one as we did in section 2. You’ll then be able to build any kind of tokenizer you can think of!

More precisely, the library is built around a central Tokenizer class with the building blocks regrouped in submodules:

  • normalizers contains all the possible types of Normalizer you can use (complete list here).
  • pre_tokenizers contains all the possible types of PreTokenizer you can use (complete list here).
  • models contains the various types of Model you can use, like BPE, WordPiece, and Unigram (complete list here).
  • trainers contains all the different types of Trainer you can use to train your model on a corpus (one per type of model; complete list here).
  • post_processors contains the various types of PostProcessor you can use (complete list here).
  • decoders contains the various types of Decoder you can use to decode the outputs of tokenization (complete list here).

You can find the whole list of building blocks here.

Acquiring a corpus

To train our new tokenizer, we will use a small corpus of text (so the examples run fast). The steps for acquiring the corpus are similar to the ones we took at the beginning of this chapter, but this time we’ll use the WikiText-2 dataset:

from datasets import load_dataset\n\ndataset = load_dataset(\"wikitext\", name=\"wikitext-2-raw-v1\", split=\"train\")\n\n\ndef get_training_corpus():\n    for i in range(0, len(dataset), 1000):\n        yield dataset[i : i + 1000][\"text\"]

The function get_training_corpus() is a generator that will yield batches of 1,000 texts, which we will use to train the tokenizer.

🤗 Tokenizers can also be trained on text files directly. Here’s how we can generate a text file containing all the texts/inputs from WikiText-2 that we can use locally:

with open(\"wikitext-2.txt\", \"w\", encoding=\"utf-8\") as f:\n    for i in range(len(dataset)):\n        f.write(dataset[i][\"text\"] + \"\\n\")

Next we’ll show you how to build your own BERT, GPT-2, and XLNet tokenizers, block by block. That will give us an example of each of the three main tokenization algorithms: WordPiece, BPE, and Unigram. Let’s start with BERT!

Building a WordPiece tokenizer from scratch

To build a tokenizer with the 🤗 Tokenizers library, we start by instantiating a Tokenizer object with a model, then set its normalizer, pre_tokenizer, post_processor, and decoder attributes to the values we want.

For this example, we’ll create a Tokenizer with a WordPiece model:

from tokenizers import (\n    decoders,\n    models,\n    normalizers,\n    pre_tokenizers,\n    processors,\n    trainers,\n    Tokenizer,\n)\n\ntokenizer = Tokenizer(models.WordPiece(unk_token=\"[UNK]\"))

We have to specify the unk_token so the model knows what to return when it encounters characters it hasn’t seen before. Other arguments we can set here include the vocab of our model (we’re going to train the model, so we don’t need to set this) and max_input_chars_per_word, which specifies a maximum length for each word (words longer than the value passed will be split).
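Just for illustration, here is the same instantiation with the optional maximum word length spelled out (we are about to train the model, so we still don't pass a vocab):

```python
tokenizer = Tokenizer(
    models.WordPiece(
        unk_token="[UNK]",
        max_input_chars_per_word=100,  # example value for the maximum word length
    )
)
```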

The first step of tokenization is normalization, so let’s begin with that. Since BERT is widely used, there is a BertNormalizer with the classic options we can set for BERT: lowercase and strip_accents, which are self-explanatory; clean_text to remove all control characters and replace repeating spaces with a single one; and handle_chinese_chars, which places spaces around Chinese characters. To replicate the bert-base-uncased tokenizer, we can just set this normalizer:

tokenizer.normalizer = normalizers.BertNormalizer(lowercase=True)

Generally speaking, however, when building a new tokenizer you won’t have access to such a handy normalizer already implemented in the 🤗 Tokenizers library — so let’s see how to create the BERT normalizer by hand. The library provides a Lowercase normalizer and a StripAccents normalizer, and you can compose several normalizers using a Sequence:

tokenizer.normalizer = normalizers.Sequence(\n    [normalizers.NFD(), normalizers.Lowercase(), normalizers.StripAccents()]\n)

We’re also using an NFD Unicode normalizer, as otherwise the StripAccents normalizer won’t properly recognize the accented characters and thus won’t strip them out.

As we’ve seen before, we can use the normalize_str() method of the normalizer to check out the effects it has on a given text:

print(tokenizer.normalizer.normalize_str(\"Héllò hôw are ü?\"))
hello how are u?

To go further If you test the two versions of the previous normalizers on a string containing the Unicode character u\"\\u0085\" you will surely notice that these two normalizers are not exactly equivalent. To avoid over-complicating the version with normalizers.Sequence, we haven't included the Regex replacements that the BertNormalizer requires when the clean_text argument is set to True (which is the default behavior). But don't worry: it is possible to get exactly the same normalization without using the handy BertNormalizer by adding two normalizers.Replace's to the normalizers sequence.
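As a rough sketch of that idea (the two Replace rules below are approximations for illustration, not the exact patterns BertNormalizer uses), the sequence could be extended like this:

```python
from tokenizers import Regex

tokenizer.normalizer = normalizers.Sequence(
    [
        # Rough stand-ins for clean_text: map tabs and line breaks to spaces...
        normalizers.Replace(Regex("[\t\n\r]"), " "),
        # ...then collapse runs of spaces into a single one
        normalizers.Replace(Regex(" {2,}"), " "),
        normalizers.NFD(),
        normalizers.Lowercase(),
        normalizers.StripAccents(),
    ]
)
```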

Next is the pre-tokenization step. Again, there is a prebuilt BertPreTokenizer that we can use:

tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()

Or we can build it from scratch:

tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

Note that the Whitespace pre-tokenizer splits on whitespace and all characters that are not letters, digits, or the underscore character, so it technically splits on whitespace and punctuation:

tokenizer.pre_tokenizer.pre_tokenize_str(\"Let's test my pre-tokenizer.\")
[('Let', (0, 3)), (\"'\", (3, 4)), ('s', (4, 5)), ('test', (6, 10)), ('my', (11, 13)), ('pre', (14, 17)),\n ('-', (17, 18)), ('tokenizer', (18, 27)), ('.', (27, 28))]

If you only want to split on whitespace, you should use the WhitespaceSplit pre-tokenizer instead:

pre_tokenizer = pre_tokenizers.WhitespaceSplit()\npre_tokenizer.pre_tokenize_str(\"Let's test my pre-tokenizer.\")
[(\"Let's\", (0, 5)), ('test', (6, 10)), ('my', (11, 13)), ('pre-tokenizer.', (14, 28))]

Like with normalizers, you can use a Sequence to compose several pre-tokenizers:

pre_tokenizer = pre_tokenizers.Sequence(\n    [pre_tokenizers.WhitespaceSplit(), pre_tokenizers.Punctuation()]\n)\npre_tokenizer.pre_tokenize_str(\"Let's test my pre-tokenizer.\")
[('Let', (0, 3)), (\"'\", (3, 4)), ('s', (4, 5)), ('test', (6, 10)), ('my', (11, 13)), ('pre', (14, 17)),\n ('-', (17, 18)), ('tokenizer', (18, 27)), ('.', (27, 28))]

The next step in the tokenization pipeline is running the inputs through the model. We already specified our model in the initialization, but we still need to train it, which will require a WordPieceTrainer. The main thing to remember when instantiating a trainer in 🤗 Tokenizers is that you need to pass it all the special tokens you intend to use — otherwise it won’t add them to the vocabulary, since they are not in the training corpus:

special_tokens = [\"[UNK]\", \"[PAD]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"]\ntrainer = trainers.WordPieceTrainer(vocab_size=25000, special_tokens=special_tokens)

As well as specifying the vocab_size and special_tokens, we can set the min_frequency (the number of times a token must appear to be included in the vocabulary) or change the continuing_subword_prefix (if we want to use something different from ##).
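Spelled out, that could look like this (the values are just examples, and we give the trainer a different name so it doesn't replace the one we'll actually use):

```python
trainer_with_options = trainers.WordPieceTrainer(
    vocab_size=25000,
    special_tokens=special_tokens,
    min_frequency=2,                 # example: ignore tokens seen only once
    continuing_subword_prefix="##",  # "##" is also the default prefix
)
```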

To train our model using the iterator we defined earlier, we just have to execute this command:

tokenizer.train_from_iterator(get_training_corpus(), trainer=trainer)

We can also use text files to train our tokenizer, which would look like this (we reinitialize the model with an empty WordPiece beforehand):

tokenizer.model = models.WordPiece(unk_token=\"[UNK]\")\ntokenizer.train([\"wikitext-2.txt\"], trainer=trainer)

In both cases, we can then test the tokenizer on a text by calling the encode() method:

encoding = tokenizer.encode(\"Let's test this tokenizer.\")\nprint(encoding.tokens)
['let', \"'\", 's', 'test', 'this', 'tok', '##eni', '##zer', '.']

The encoding obtained is an Encoding, which contains all the necessary outputs of the tokenizer in its various attributes: ids, type_ids, tokens, offsets, attention_mask, special_tokens_mask, and overflowing.
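You can inspect a few of those attributes directly (the exact values depend on the vocabulary you trained):

```python
print(encoding.ids)
print(encoding.type_ids)
print(encoding.offsets)
```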

The last step in the tokenization pipeline is post-processing. We need to add the [CLS] token at the beginning and the [SEP] token at the end (or after each sentence, if we have a pair of sentences). We will use a TemplateProcessing post-processor for this, but first we need to know the IDs of the [CLS] and [SEP] tokens in the vocabulary:

cls_token_id = tokenizer.token_to_id(\"[CLS]\")\nsep_token_id = tokenizer.token_to_id(\"[SEP]\")\nprint(cls_token_id, sep_token_id)
2 3

To write the template for the TemplateProcessing post-processor, we have to specify how to treat a single sentence and a pair of sentences. For both, we write the special tokens we want to use; the first (or single) sentence is represented by $A, while the second sentence (if encoding a pair) is represented by $B. For each of these (special tokens and sentences), we also specify the corresponding token type ID after a colon.

The classic BERT template is thus defined as follows:

tokenizer.post_processor = processors.TemplateProcessing(\n    single=f\"[CLS]:0 $A:0 [SEP]:0\",\n    pair=f\"[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1\",\n    special_tokens=[(\"[CLS]\", cls_token_id), (\"[SEP]\", sep_token_id)],\n)

Note that we need to pass along the IDs of the special tokens, so the tokenizer can properly convert them to their IDs.

Once this is added, going back to our previous example will give:

encoding = tokenizer.encode(\"Let's test this tokenizer.\")\nprint(encoding.tokens)
['[CLS]', 'let', \"'\", 's', 'test', 'this', 'tok', '##eni', '##zer', '.', '[SEP]']

And on a pair of sentences, we get the proper result:

encoding = tokenizer.encode(\"Let's test this tokenizer...\", \"on a pair of sentences.\")\nprint(encoding.tokens)\nprint(encoding.type_ids)
['[CLS]', 'let', \"'\", 's', 'test', 'this', 'tok', '##eni', '##zer', '...', '[SEP]', 'on', 'a', 'pair', 'of', 'sentences', '.', '[SEP]']\n[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

We’ve almost finished building this tokenizer from scratch — the last step is to include a decoder:

tokenizer.decoder = decoders.WordPiece(prefix=\"##\")

Let’s test it on our previous encoding:

tokenizer.decode(encoding.ids)
\"let's test this tokenizer... on a pair of sentences.\"

Great! We can save our tokenizer in a single JSON file like this:

tokenizer.save(\"tokenizer.json\")

We can then reload that file in a Tokenizer object with the from_file() method:

new_tokenizer = Tokenizer.from_file(\"tokenizer.json\")

To use this tokenizer in 🤗 Transformers, we have to wrap it in a PreTrainedTokenizerFast. We can either use the generic class or, if our tokenizer corresponds to an existing model, use that class (here, BertTokenizerFast). If you apply this lesson to build a brand new tokenizer, you will have to use the first option.

To wrap the tokenizer in a PreTrainedTokenizerFast, we can either pass the tokenizer we built as a tokenizer_object or pass the tokenizer file we saved as tokenizer_file. The key thing to remember is that we have to manually set all the special tokens, since that class can’t infer from the tokenizer object which token is the mask token, the [CLS] token, etc.:

from transformers import PreTrainedTokenizerFast\n\nwrapped_tokenizer = PreTrainedTokenizerFast(\n    tokenizer_object=tokenizer,\n    # tokenizer_file=\"tokenizer.json\", # You can load from the tokenizer file, alternatively\n    unk_token=\"[UNK]\",\n    pad_token=\"[PAD]\",\n    cls_token=\"[CLS]\",\n    sep_token=\"[SEP]\",\n    mask_token=\"[MASK]\",\n)

If you are using a specific tokenizer class (like BertTokenizerFast), you will only need to specify the special tokens that are different from the default ones (here, none):

from transformers import BertTokenizerFast\n\nwrapped_tokenizer = BertTokenizerFast(tokenizer_object=tokenizer)

You can then use this tokenizer like any other 🤗 Transformers tokenizer. You can save it with the save_pretrained() method, or upload it to the Hub with the push_to_hub() method.
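For example (the directory and repository names below are just placeholders):

```python
wrapped_tokenizer.save_pretrained("my-wordpiece-tokenizer")
# Requires being logged in to the Hugging Face Hub:
# wrapped_tokenizer.push_to_hub("my-wordpiece-tokenizer")
```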

Now that we’ve seen how to build a WordPiece tokenizer, let’s do the same for a BPE tokenizer. We’ll go a bit faster since you know all the steps, and only highlight the differences.

Building a BPE tokenizer from scratch

Let’s now build a GPT-2 tokenizer. Like for the BERT tokenizer, we start by initializing a Tokenizer with a BPE model:

tokenizer = Tokenizer(models.BPE())

Also like for BERT, we could initialize this model with a vocabulary if we had one (we would need to pass the vocab and merges in this case), but since we will train from scratch, we don’t need to do that. We also don’t need to specify an unk_token because GPT-2 uses byte-level BPE, which doesn’t require it.
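For reference, loading such an existing vocabulary would look something like this (the file names are placeholders, and we use a separate variable so we don't overwrite the tokenizer we're building):

```python
# Only relevant if we already had a trained BPE vocabulary saved on disk
pretrained_bpe_tokenizer = Tokenizer(models.BPE.from_file("vocab.json", "merges.txt"))
```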

GPT-2 does not use a normalizer, so we skip that step and go directly to the pre-tokenization:

tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

The option we added to ByteLevel here is to not add a space at the beginning of a sentence (which is the default otherwise). We can have a look at the pre-tokenization of an example text like before:

tokenizer.pre_tokenizer.pre_tokenize_str(\"Let's test pre-tokenization!\")
[('Let', (0, 3)), (\"'s\", (3, 5)), ('Ġtest', (5, 10)), ('Ġpre', (10, 14)), ('-', (14, 15)),\n ('tokenization', (15, 27)), ('!', (27, 28))]

Next is the model, which needs training. For GPT-2, the only special token is the end-of-text token:

trainer = trainers.BpeTrainer(vocab_size=25000, special_tokens=[\"<|endoftext|>\"])\ntokenizer.train_from_iterator(get_training_corpus(), trainer=trainer)

Like with the WordPieceTrainer, as well as the vocab_size and special_tokens, we can specify the min_frequency if we want to, or if we have an end-of-word suffix (like </w>), we can set it with end_of_word_suffix.
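As an illustration (the values are arbitrary, and again we use a separate variable name so the trainer above is left untouched):

```python
bpe_trainer_with_options = trainers.BpeTrainer(
    vocab_size=25000,
    special_tokens=["<|endoftext|>"],
    min_frequency=2,            # example value
    end_of_word_suffix="</w>",  # only if you want an explicit end-of-word marker
)
```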

This tokenizer can also be trained on text files:

tokenizer.model = models.BPE()\ntokenizer.train([\"wikitext-2.txt\"], trainer=trainer)

Let’s have a look at the tokenization of a sample text:

encoding = tokenizer.encode(\"Let's test this tokenizer.\")\nprint(encoding.tokens)
['L', 'et', \"'\", 's', 'Ġtest', 'Ġthis', 'Ġto', 'ken', 'izer', '.']

We apply the byte-level post-processing for the GPT-2 tokenizer as follows:

tokenizer.post_processor = processors.ByteLevel(trim_offsets=False)

The trim_offsets = False option indicates to the post-processor that we should leave the offsets of tokens that begin with ‘Ġ’ as they are: this way the start of the offsets will point to the space before the word, not the first character of the word (since the space is technically part of the token). Let’s have a look at the result with the text we just encoded, where 'Ġtest' is the token at index 4:

sentence = \"Let's test this tokenizer.\"\nencoding = tokenizer.encode(sentence)\nstart, end = encoding.offsets[4]\nsentence[start:end]
' test'

Finally, we add a byte-level decoder:

tokenizer.decoder = decoders.ByteLevel()

and we can double-check it works properly:

tokenizer.decode(encoding.ids)
\"Let's test this tokenizer.\"

Great! Now that we’re done, we can save the tokenizer like before, and wrap it in a PreTrainedTokenizerFast or GPT2TokenizerFast if we want to use it in 🤗 Transformers:

from transformers import PreTrainedTokenizerFast\n\nwrapped_tokenizer = PreTrainedTokenizerFast(\n    tokenizer_object=tokenizer,\n    bos_token=\"<|endoftext|>\",\n    eos_token=\"<|endoftext|>\",\n)

or:

from transformers import GPT2TokenizerFast\n\nwrapped_tokenizer = GPT2TokenizerFast(tokenizer_object=tokenizer)

As the last example, we’ll show you how to build a Unigram tokenizer from scratch.

Building a Unigram tokenizer from scratch

Let’s now build an XLNet tokenizer. Like for the previous tokenizers, we start by initializing a Tokenizer with a Unigram model:

tokenizer = Tokenizer(models.Unigram())

Again, we could initialize this model with a vocabulary if we had one.

For the normalization, XLNet uses a few replacements (which come from SentencePiece):

from tokenizers import Regex\n\ntokenizer.normalizer = normalizers.Sequence(\n    [\n        normalizers.Replace(\"``\", '\"'),\n        normalizers.Replace(\"''\", '\"'),\n        normalizers.NFKD(),\n        normalizers.StripAccents(),\n        normalizers.Replace(Regex(\" {2,}\"), \" \"),\n    ]\n)

This replaces a pair of backticks or a pair of single quotes with a regular double quote (") and collapses any sequence of two or more spaces into a single space, as well as removing the accents in the texts to tokenize.

The pre-tokenizer to use for any SentencePiece tokenizer is Metaspace:

tokenizer.pre_tokenizer = pre_tokenizers.Metaspace()

We can have a look at the pre-tokenization of an example text like before:

tokenizer.pre_tokenizer.pre_tokenize_str(\"Let's test the pre-tokenizer!\")
[(\"▁Let's\", (0, 5)), ('▁test', (5, 10)), ('▁the', (10, 14)), ('▁pre-tokenizer!', (14, 29))]

Next is the model, which needs training. XLNet has quite a few special tokens:

special_tokens = [\"<cls>\", \"<sep>\", \"<unk>\", \"<pad>\", \"<mask>\", \"<s>\", \"</s>\"]\ntrainer = trainers.UnigramTrainer(\n    vocab_size=25000, special_tokens=special_tokens, unk_token=\"<unk>\"\n)\ntokenizer.train_from_iterator(get_training_corpus(), trainer=trainer)

A very important argument not to forget for the UnigramTrainer is the unk_token. We can also pass along other arguments specific to the Unigram algorithm, such as the shrinking_factor for each step where we remove tokens (defaults to 0.75) or the max_piece_length to specify the maximum length of a given token (defaults to 16).
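Spelled out with those defaults made explicit, the trainer would look like this (for illustration only):

```python
trainer_with_defaults = trainers.UnigramTrainer(
    vocab_size=25000,
    special_tokens=special_tokens,
    unk_token="<unk>",
    shrinking_factor=0.75,  # fraction of the vocabulary kept at each pruning step
    max_piece_length=16,    # maximum length of a single token
)
```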

This tokenizer can also be trained on text files:

tokenizer.model = models.Unigram()\ntokenizer.train([\"wikitext-2.txt\"], trainer=trainer)

Let’s have a look at the tokenization of a sample text:

encoding = tokenizer.encode(\"Let's test this tokenizer.\")\nprint(encoding.tokens)
['▁Let', \"'\", 's', '▁test', '▁this', '▁to', 'ken', 'izer', '.']

A peculiarity of XLNet is that it puts the <cls> token at the end of the sentence, with a type ID of 2 (to distinguish it from the other tokens). As a result, the model is padded on the left. We can deal with all the special tokens and token type IDs with a template, like for BERT, but first we have to get the IDs of the <cls> and <sep> tokens:

cls_token_id = tokenizer.token_to_id(\"<cls>\")\nsep_token_id = tokenizer.token_to_id(\"<sep>\")\nprint(cls_token_id, sep_token_id)
0 1

The template looks like this:

tokenizer.post_processor = processors.TemplateProcessing(\n    single=\"$A:0 <sep>:0 <cls>:2\",\n    pair=\"$A:0 <sep>:0 $B:1 <sep>:1 <cls>:2\",\n    special_tokens=[(\"<sep>\", sep_token_id), (\"<cls>\", cls_token_id)],\n)

And we can test it works by encoding a pair of sentences:

encoding = tokenizer.encode(\"Let's test this tokenizer...\", \"on a pair of sentences!\")\nprint(encoding.tokens)\nprint(encoding.type_ids)
['▁Let', \"'\", 's', '▁test', '▁this', '▁to', 'ken', 'izer', '.', '.', '.', '<sep>', '▁', 'on', '▁', 'a', '▁pair', \n  '▁of', '▁sentence', 's', '!', '<sep>', '<cls>']\n[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2]

Finally, we add a Metaspace decoder:

tokenizer.decoder = decoders.Metaspace()

and we’re done with this tokenizer! We can save the tokenizer like before, and wrap it in a PreTrainedTokenizerFast or XLNetTokenizerFast if we want to use it in 🤗 Transformers. One thing to note when using PreTrainedTokenizerFast is that on top of the special tokens, we need to tell the 🤗 Transformers library to pad on the left:

from transformers import PreTrainedTokenizerFast\n\nwrapped_tokenizer = PreTrainedTokenizerFast(\n    tokenizer_object=tokenizer,\n    bos_token=\"<s>\",\n    eos_token=\"</s>\",\n    unk_token=\"<unk>\",\n    pad_token=\"<pad>\",\n    cls_token=\"<cls>\",\n    sep_token=\"<sep>\",\n    mask_token=\"<mask>\",\n    padding_side=\"left\",\n)

Or alternatively:

from transformers import XLNetTokenizerFast\n\nwrapped_tokenizer = XLNetTokenizerFast(tokenizer_object=tokenizer)

Now that you have seen how the various building blocks are used to build existing tokenizers, you should be able to write any tokenizer you want with the 🤗 Tokenizers library and be able to use it in 🤗 Transformers.

Tokenizers, check!

\"Ask

Great job finishing this chapter!

After this deep dive into tokenizers, you should:

  • Be able to train a new tokenizer using an old one as a template
  • Understand how to use offsets to map tokens’ positions to their original span of text
  • Know the differences between BPE, WordPiece, and Unigram
  • Be able to mix and match the blocks provided by the 🤗 Tokenizers library to build your own tokenizer
  • Be able to use that tokenizer inside the 🤗 Transformers library
End-of-chapter quiz

\"Ask

Let’s test what you learned in this chapter!

1. When should you train a new tokenizer?

train_new_from_iterator()?\" class=\"header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full\" href=\"#2.-what-is-the-advantage-of-using-a-generator-of-lists-of-texts-compared-to-a-list-of-lists-of-texts-when-using-train_new_from_iterator()?\"> 2. What is the advantage of using a generator of lists of texts compared to a list of lists of texts when using train_new_from_iterator()?

3. What are the advantages of using a “fast” tokenizer?

4. How does the token-classification pipeline handle entities that span over several tokens?

5. How does the question-answering pipeline handle long contexts?

6. What is normalization?

7. What is pre-tokenization for a subword tokenizer?

8. Select the sentences that apply to the BPE model of tokenization.

9. Select the sentences that apply to the WordPiece model of tokenization.

10. Select the sentences that apply to the Unigram model of tokenization.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:24.947Z"} {"title":"Introduction - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#introduction)Introduction\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-7-questions)\n\nIn [Chapter 3](/course/chapter3), you saw how to fine-tune a model for text classification. In this chapter, we will tackle the following common NLP tasks:\n\n- Token classification\n- Masked language modeling (like BERT)\n- Summarization\n- Translation\n- Causal language modeling pretraining (like GPT-2)\n- Question answering\n\nTo do this, you’ll need to leverage everything you learned about the `Trainer` API and the 🤗 Accelerate library in [Chapter 3](/course/chapter3), the 🤗 Datasets library in [Chapter 5](/course/chapter5), and the 🤗 Tokenizers library in [Chapter 6](/course/chapter6). We’ll also upload our results to the Model Hub, like we did in [Chapter 4](/course/chapter4), so this is really the chapter where everything comes together!\n\nEach section can be read independently and will show you how to train a model with the `Trainer` API or with your own training loop, using 🤗 Accelerate. Feel free to skip either part and focus on the one that interests you the most: the `Trainer` API is great for fine-tuning or training your model without worrying about what’s going on behind the scenes, while the training loop with `Accelerate` will let you customize any part you want more easily.\n\nIf you read the sections in sequence, you will notice that they have quite a bit of code and prose in common. The repetition is intentional, to allow you to dip in (or come back later) to any task that interests you and find a complete working example.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tIntroduction - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

Introduction

\"Ask

In Chapter 3, you saw how to fine-tune a model for text classification. In this chapter, we will tackle the following common NLP tasks:

  • Token classification
  • Masked language modeling (like BERT)
  • Summarization
  • Translation
  • Causal language modeling pretraining (like GPT-2)
  • Question answering

To do this, you’ll need to leverage everything you learned about the Trainer API and the 🤗 Accelerate library in Chapter 3, the 🤗 Datasets library in Chapter 5, and the 🤗 Tokenizers library in Chapter 6. We’ll also upload our results to the Model Hub, like we did in Chapter 4, so this is really the chapter where everything comes together!

Each section can be read independently and will show you how to train a model with the Trainer API or with your own training loop, using 🤗 Accelerate. Feel free to skip either part and focus on the one that interests you the most: the Trainer API is great for fine-tuning or training your model without worrying about what’s going on behind the scenes, while the training loop with Accelerate will let you customize any part you want more easily.

If you read the sections in sequence, you will notice that they have quite a bit of code and prose in common. The repetition is intentional, to allow you to dip in (or come back later) to any task that interests you and find a complete working example.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:26.397Z"} {"title":"Token classification - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter7/2?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#token-classification)Token classification\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-7-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter7/section2_pt.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter7/section2_pt.ipynb)\n\nThe first application we’ll explore is token classification. This generic task encompasses any problem that can be formulated as “attributing a label to each token in a sentence,” such as:\n\n- **Named entity recognition (NER)**: Find the entities (such as persons, locations, or organizations) in a sentence. This can be formulated as attributing a label to each token by having one class per entity and one class for “no entity.”\n- **Part-of-speech tagging (POS)**: Mark each word in a sentence as corresponding to a particular part of speech (such as noun, verb, adjective, etc.).\n- **Chunking**: Find the tokens that belong to the same entity. This task (which can be combined with POS or NER) can be formulated as attributing one label (usually `B-`) to any tokens that are at the beginning of a chunk, another label (usually `I-`) to tokens that are inside a chunk, and a third label (usually `O`) to tokens that don’t belong to any chunk.\n\nOf course, there are many other types of token classification problem; those are just a few representative examples. 
In this section, we will fine-tune a model (BERT) on a NER task, which will then be able to compute predictions like this one:\n\n [![One-hot encoded labels for question answering.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/model-eval-bert-finetuned-ner.png) ![One-hot encoded labels for question answering.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/model-eval-bert-finetuned-ner-dark.png)](/huggingface-course/bert-finetuned-ner) \n\nYou can find the model we’ll train and upload to the Hub and double-check its predictions [here](https://huggingface.co/huggingface-course/bert-finetuned-ner?text=My+name+is+Sylvain+and+I+work+at+Hugging+Face+in+Brooklyn).\n\n## [](#preparing-the-data)Preparing the data\n\nFirst things first, we need a dataset suitable for token classification. In this section we will use the [CoNLL-2003 dataset](https://huggingface.co/datasets/conll2003), which contains news stories from Reuters.\n\n💡 As long as your dataset consists of texts split into words with their corresponding labels, you will be able to adapt the data processing procedures described here to your own dataset. Refer back to [Chapter 5](/course/chapter5) if you need a refresher on how to load your own custom data in a `Dataset`.\n\n### [](#the-conll-2003-dataset)The CoNLL-2003 dataset\n\nTo load the CoNLL-2003 dataset, we use the `load_dataset()` method from the 🤗 Datasets library:\n\n```\nfrom datasets import load_dataset\n\nraw_datasets = load_dataset(\"conll2003\")```\n\nThis will download and cache the dataset, like we saw in [Chapter 3](/course/chapter3) for the GLUE MRPC dataset. Inspecting this object shows us the columns present and the split between the training, validation, and test sets:\n\n```\nDatasetDict({\n train: Dataset({\n features: ['chunk_tags', 'id', 'ner_tags', 'pos_tags', 'tokens'],\n num_rows: 14041\n })\n validation: Dataset({\n features: ['chunk_tags', 'id', 'ner_tags', 'pos_tags', 'tokens'],\n num_rows: 3250\n })\n test: Dataset({\n features: ['chunk_tags', 'id', 'ner_tags', 'pos_tags', 'tokens'],\n num_rows: 3453\n })\n})```\n\nIn particular, we can see the dataset contains labels for the three tasks we mentioned earlier: NER, POS, and chunking. A big difference from other datasets is that the input texts are not presented as sentences or documents, but lists of words (the last column is called `tokens`, but it contains words in the sense that these are pre-tokenized inputs that still need to go through the tokenizer for subword tokenization).\n\nLet’s have a look at the first element of the training set:\n\n```\nraw_datasets[\"train\"][0][\"tokens\"]```\n\n```\n['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']```\n\nSince we want to perform named entity recognition, we will look at the NER tags:\n\n```\nraw_datasets[\"train\"][0][\"ner_tags\"]```\n\n```\n[3, 0, 7, 0, 0, 0, 7, 0, 0]```\n\nThose are the labels as integers ready for training, but they’re not necessarily useful when we want to inspect the data. 
Like for text classification, we can access the correspondence between those integers and the label names by looking at the `features` attribute of our dataset:\n\n```\nner_feature = raw_datasets[\"train\"].features[\"ner_tags\"]\nner_feature```\n\n```\nSequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], names_file=None, id=None), length=-1, id=None)```\n\nSo this column contains elements that are sequences of `ClassLabel`s. The type of the elements of the sequence is in the `feature` attribute of this `ner_feature`, and we can access the list of names by looking at the `names` attribute of that `feature`:\n\n```\nlabel_names = ner_feature.feature.names\nlabel_names```\n\n```\n['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']```\n\nWe already saw these labels when digging into the `token-classification` pipeline in [Chapter 6](/course/chapter6/3), but for a quick refresher:\n\n- `O` means the word doesn’t correspond to any entity.\n- `B-PER`/`I-PER` means the word corresponds to the beginning of/is inside a _person_ entity.\n- `B-ORG`/`I-ORG` means the word corresponds to the beginning of/is inside an _organization_ entity.\n- `B-LOC`/`I-LOC` means the word corresponds to the beginning of/is inside a _location_ entity.\n- `B-MISC`/`I-MISC` means the word corresponds to the beginning of/is inside a _miscellaneous_ entity.\n\nNow decoding the labels we saw earlier gives us this:\n\n```\nwords = raw_datasets[\"train\"][0][\"tokens\"]\nlabels = raw_datasets[\"train\"][0][\"ner_tags\"]\nline1 = \"\"\nline2 = \"\"\nfor word, label in zip(words, labels):\n full_label = label_names[label]\n max_length = max(len(word), len(full_label))\n line1 += word + \" \" * (max_length - len(word) + 1)\n line2 += full_label + \" \" * (max_length - len(full_label) + 1)\n\nprint(line1)\nprint(line2)```\n\n```\n'EU rejects German call to boycott British lamb .'\n'B-ORG O B-MISC O O O B-MISC O O'```\n\nAnd for an example mixing `B-` and `I-` labels, here’s what the same code gives us on the element of the training set at index 4:\n\n```\n'Germany \\'s representative to the European Union \\'s veterinary committee Werner Zwingmann said on Wednesday consumers should buy sheepmeat from countries other than Britain until the scientific advice was clearer .'\n'B-LOC O O O O B-ORG I-ORG O O O B-PER I-PER O O O O O O O O O O O B-LOC O O O O O O O'```\n\nAs we can see, entities spanning two words, like “European Union” and “Werner Zwingmann,” are attributed a `B-` label for the first word and an `I-` label for the second.\n\n✏️ **Your turn!** Print the same two sentences with their POS or chunking labels.\n\n### [](#processing-the-data)Processing the data\n\nAs usual, our texts need to be converted to token IDs before the model can make sense of them. As we saw in [Chapter 6](/course/chapter6/), a big difference in the case of token classification tasks is that we have pre-tokenized inputs. Fortunately, the tokenizer API can deal with that pretty easily; we just need to warn the `tokenizer` with a special flag.\n\nTo begin, let’s create our `tokenizer` object. 
As we said before, we will be using a BERT pretrained model, so we’ll start by downloading and caching the associated tokenizer:\n\n```\nfrom transformers import AutoTokenizer\n\nmodel_checkpoint = \"bert-base-cased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)```\n\nYou can replace the `model_checkpoint` with any other model you prefer from the [Hub](https://huggingface.co/models), or with a local folder in which you’ve saved a pretrained model and a tokenizer. The only constraint is that the tokenizer needs to be backed by the 🤗 Tokenizers library, so there’s a “fast” version available. You can see all the architectures that come with a fast version in [this big table](https://huggingface.co/transformers/#supported-frameworks), and to check that the `tokenizer` object you’re using is indeed backed by 🤗 Tokenizers you can look at its `is_fast` attribute:\n\nTo tokenize a pre-tokenized input, we can use our `tokenizer` as usual and just add `is_split_into_words=True`:\n\n```\ninputs = tokenizer(raw_datasets[\"train\"][0][\"tokens\"], is_split_into_words=True)\ninputs.tokens()```\n\n```\n['[CLS]', 'EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'la', '##mb', '.', '[SEP]']```\n\nAs we can see, the tokenizer added the special tokens used by the model (`[CLS]` at the beginning and `[SEP]` at the end) and left most of the words untouched. The word `lamb`, however, was tokenized into two subwords, `la` and `##mb`. This introduces a mismatch between our inputs and the labels: the list of labels has only 9 elements, whereas our input now has 12 tokens. Accounting for the special tokens is easy (we know they are at the beginning and the end), but we also need to make sure we align all the labels with the proper words.\n\nFortunately, because we’re using a fast tokenizer we have access to the 🤗 Tokenizers superpowers, which means we can easily map each token to its corresponding word (as seen in [Chapter 6](/course/chapter6/3)):\n\n```\n[None, 0, 1, 2, 3, 4, 5, 6, 7, 7, 8, None]```\n\nWith a tiny bit of work, we can then expand our label list to match the tokens. The first rule we’ll apply is that special tokens get a label of `-100`. This is because by default `-100` is an index that is ignored in the loss function we will use (cross entropy). Then, each token gets the same label as the token that started the word it’s inside, since they are part of the same entity. 
For tokens inside a word but not at the beginning, we replace the `B-` with `I-` (since the token does not begin the entity):\n\n```\ndef align_labels_with_tokens(labels, word_ids):\n new_labels = []\n current_word = None\n for word_id in word_ids:\n if word_id != current_word:\n \n current_word = word_id\n label = -100 if word_id is None else labels[word_id]\n new_labels.append(label)\n elif word_id is None:\n \n new_labels.append(-100)\n else:\n \n label = labels[word_id]\n \n if label % 2 == 1:\n label += 1\n new_labels.append(label)\n\n return new_labels```\n\nLet’s try it out on our first sentence:\n\n```\nlabels = raw_datasets[\"train\"][0][\"ner_tags\"]\nword_ids = inputs.word_ids()\nprint(labels)\nprint(align_labels_with_tokens(labels, word_ids))```\n\n```\n[3, 0, 7, 0, 0, 0, 7, 0, 0]\n[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, 0, -100]```\n\nAs we can see, our function added the `-100` for the two special tokens at the beginning and the end, and a new `0` for our word that was split into two tokens.\n\n✏️ **Your turn!** Some researchers prefer to attribute only one label per word, and assign `-100` to the other subtokens in a given word. This is to avoid long words that split into lots of subtokens contributing heavily to the loss. Change the previous function to align labels with input IDs by following this rule.\n\nTo preprocess our whole dataset, we need to tokenize all the inputs and apply `align_labels_with_tokens()` on all the labels. To take advantage of the speed of our fast tokenizer, it’s best to tokenize lots of texts at the same time, so we’ll write a function that processes a list of examples and use the `Dataset.map()` method with the option `batched=True`. The only thing that is different from our previous example is that the `word_ids()` function needs to get the index of the example we want the word IDs of when the inputs to the tokenizer are lists of texts (or in our case, list of lists of words), so we add that too:\n\n```\ndef tokenize_and_align_labels(examples):\n tokenized_inputs = tokenizer(\n examples[\"tokens\"], truncation=True, is_split_into_words=True\n )\n all_labels = examples[\"ner_tags\"]\n new_labels = []\n for i, labels in enumerate(all_labels):\n word_ids = tokenized_inputs.word_ids(i)\n new_labels.append(align_labels_with_tokens(labels, word_ids))\n\n tokenized_inputs[\"labels\"] = new_labels\n return tokenized_inputs```\n\nNote that we haven’t padded our inputs yet; we’ll do that later, when creating the batches with a data collator.\n\nWe can now apply all that preprocessing in one go on the other splits of our dataset:\n\n```\ntokenized_datasets = raw_datasets.map(\n tokenize_and_align_labels,\n batched=True,\n remove_columns=raw_datasets[\"train\"].column_names,\n)```\n\nWe’ve done the hardest part! Now that the data has been preprocessed, the actual training will look a lot like what we did in [Chapter 3](/course/chapter3).\n\n## [](#fine-tuning-the-model-with-the-trainer-api)Fine-tuning the model with the `Trainer` API\n\nThe actual code using the `Trainer` will be the same as before; the only changes are the way the data is collated into a batch and the metric computation function.\n\n### [](#data-collation)Data collation\n\nWe can’t just use a `DataCollatorWithPadding` like in [Chapter 3](/course/chapter3) because that only pads the inputs (input IDs, attention mask, and token type IDs). 
Here our labels should be padded the exact same way as the inputs so that they stay the same size, using `-100` as a value so that the corresponding predictions are ignored in the loss computation.\n\nThis is all done by a [`DataCollatorForTokenClassification`](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorfortokenclassification). Like the `DataCollatorWithPadding`, it takes the `tokenizer` used to preprocess the inputs:\n\n```\nfrom transformers import DataCollatorForTokenClassification\n\ndata_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)```\n\nTo test this on a few samples, we can just call it on a list of examples from our tokenized training set:\n\n```\nbatch = data_collator([tokenized_datasets[\"train\"][i] for i in range(2)])\nbatch[\"labels\"]```\n\n```\ntensor([[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, 0, -100],\n [-100, 1, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100]])```\n\nLet’s compare this to the labels for the first and second elements in our dataset:\n\n```\nfor i in range(2):\n print(tokenized_datasets[\"train\"][i][\"labels\"])```\n\n```\n[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, 0, -100]\n[-100, 1, 2, -100]```\n\nAs we can see, the second set of labels has been padded to the length of the first one using `-100`s.\n\n### [](#metrics)Metrics\n\nTo have the `Trainer` compute a metric every epoch, we will need to define a `compute_metrics()` function that takes the arrays of predictions and labels, and returns a dictionary with the metric names and values.\n\nThe traditional framework used to evaluate token classification prediction is [_seqeval_](https://github.com/chakki-works/seqeval). To use this metric, we first need to install the _seqeval_ library:\n\nWe can then load it via the `evaluate.load()` function like we did in [Chapter 3](/course/chapter3):\n\n```\nimport evaluate\n\nmetric = evaluate.load(\"seqeval\")```\n\nThis metric does not behave like the standard accuracy: it will actually take the lists of labels as strings, not integers, so we will need to fully decode the predictions and labels before passing them to the metric. Let’s see how it works. First, we’ll get the labels for our first training example:\n\n```\nlabels = raw_datasets[\"train\"][0][\"ner_tags\"]\nlabels = [label_names[i] for i in labels]\nlabels```\n\n```\n['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O']```\n\nWe can then create fake predictions for those by just changing the value at index 2:\n\n```\npredictions = labels.copy()\npredictions[2] = \"O\"\nmetric.compute(predictions=[predictions], references=[labels])```\n\nNote that the metric takes a list of predictions (not just one) and a list of labels. Here’s the output:\n\n```\n{'MISC': {'precision': 1.0, 'recall': 0.5, 'f1': 0.67, 'number': 2},\n 'ORG': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},\n 'overall_precision': 1.0,\n 'overall_recall': 0.67,\n 'overall_f1': 0.8,\n 'overall_accuracy': 0.89}```\n\nThis is sending back a lot of information! We get the precision, recall, and F1 score for each separate entity, as well as overall. For our metric computation we will only keep the overall score, but feel free to tweak the `compute_metrics()` function to return all the metrics you would like reported.\n\nThis `compute_metrics()` function first takes the argmax of the logits to convert them to predictions (as usual, the logits and the probabilities are in the same order, so we don’t need to apply the softmax). 
Then we have to convert both labels and predictions from integers to strings. We remove all the values where the label is `-100`, then pass the results to the `metric.compute()` method:\n\n```\nimport numpy as np\n\n\ndef compute_metrics(eval_preds):\n logits, labels = eval_preds\n predictions = np.argmax(logits, axis=-1)\n\n \n true_labels = [[label_names[l] for l in label if l != -100] for label in labels]\n true_predictions = [\n [label_names[p] for (p, l) in zip(prediction, label) if l != -100]\n for prediction, label in zip(predictions, labels)\n ]\n all_metrics = metric.compute(predictions=true_predictions, references=true_labels)\n return {\n \"precision\": all_metrics[\"overall_precision\"],\n \"recall\": all_metrics[\"overall_recall\"],\n \"f1\": all_metrics[\"overall_f1\"],\n \"accuracy\": all_metrics[\"overall_accuracy\"],\n }```\n\nNow that this is done, we are almost ready to define our `Trainer`. We just need a `model` to fine-tune!\n\n### [](#defining-the-model)Defining the model\n\nSince we are working on a token classification problem, we will use the `AutoModelForTokenClassification` class. The main thing to remember when defining this model is to pass along some information on the number of labels we have. The easiest way to do this is to pass that number with the `num_labels` argument, but if we want a nice inference widget working like the one we saw at the beginning of this section, it’s better to set the correct label correspondences instead.\n\nThey should be set by two dictionaries, `id2label` and `label2id`, which contain the mappings from ID to label and vice versa:\n\n```\nid2label = {i: label for i, label in enumerate(label_names)}\nlabel2id = {v: k for k, v in id2label.items()}```\n\nNow we can just pass them to the `AutoModelForTokenClassification.from_pretrained()` method, and they will be set in the model’s configuration and then properly saved and uploaded to the Hub:\n\n```\nfrom transformers import AutoModelForTokenClassification\n\nmodel = AutoModelForTokenClassification.from_pretrained(\n model_checkpoint,\n id2label=id2label,\n label2id=label2id,\n)```\n\nLike when we defined our `AutoModelForSequenceClassification` in [Chapter 3](/course/chapter3), creating the model issues a warning that some weights were not used (the ones from the pretraining head) and some other weights are randomly initialized (the ones from the new token classification head), and that this model should be trained. We will do that in a minute, but first let’s double-check that our model has the right number of labels:\n\n⚠️ If you have a model with the wrong number of labels, you will get an obscure error when calling the `Trainer.train()` method later on (something like “CUDA error: device-side assert triggered”). This is the number one cause of bugs reported by users for such errors, so make sure you do this check to confirm that you have the expected number of labels.\n\n### [](#fine-tuning-the-model)Fine-tuning the model\n\nWe are now ready to train our model! We just need to do two last things before we define our `Trainer`: log in to Hugging Face and define our training arguments. 
If you’re working in a notebook, there’s a convenience function to help you with this:\n\n```\nfrom huggingface_hub import notebook_login\n\nnotebook_login()```\n\nThis will display a widget where you can enter your Hugging Face login credentials.\n\nIf you aren’t working in a notebook, just type the following line in your terminal:\n\nOnce this is done, we can define our `TrainingArguments`:\n\n```\nfrom transformers import TrainingArguments\n\nargs = TrainingArguments(\n \"bert-finetuned-ner\",\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n num_train_epochs=3,\n weight_decay=0.01,\n push_to_hub=True,\n)```\n\nYou’ve seen most of those before: we set some hyperparameters (like the learning rate, the number of epochs to train for, and the weight decay), and we specify `push_to_hub=True` to indicate that we want to save the model and evaluate it at the end of every epoch, and that we want to upload our results to the Model Hub. Note that you can specify the name of the repository you want to push to with the `hub_model_id` argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the [`huggingface-course` organization](https://huggingface.co/huggingface-course), we added `hub_model_id=\"huggingface-course/bert-finetuned-ner\"` to `TrainingArguments`. By default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be `\"sgugger/bert-finetuned-ner\"`.\n\n💡 If the output directory you are using already exists, it needs to be a local clone of the repository you want to push to. If it isn’t, you’ll get an error when defining your `Trainer` and will need to set a new name.\n\nFinally, we just pass everything to the `Trainer` and launch the training:\n\n```\nfrom transformers import Trainer\n\ntrainer = Trainer(\n model=model,\n args=args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"validation\"],\n data_collator=data_collator,\n compute_metrics=compute_metrics,\n tokenizer=tokenizer,\n)\ntrainer.train()```\n\nNote that while the training happens, each time the model is saved (here, every epoch) it is uploaded to the Hub in the background. This way, you will be able to to resume your training on another machine if necessary.\n\nOnce the training is complete, we use the `push_to_hub()` method to make sure we upload the most recent version of the model:\n\n```\ntrainer.push_to_hub(commit_message=\"Training complete\")```\n\nThis command returns the URL of the commit it just did, if you want to inspect it:\n\n```\n'https://huggingface.co/sgugger/bert-finetuned-ner/commit/26ab21e5b1568f9afeccdaed2d8715f571d786ed'```\n\nThe `Trainer` also drafts a model card with all the evaluation results and uploads it. At this stage, you can use the inference widget on the Model Hub to test your model and share it with your friends. You have successfully fine-tuned a model on a token classification task — congratulations!\n\nIf you want to dive a bit more deeply into the training loop, we will now show you how to do the same thing using 🤗 Accelerate.\n\n## [](#a-custom-training-loop)A custom training loop\n\nLet’s now take a look at the full training loop, so you can easily customize the parts you need. 
It will look a lot like what we did in [Chapter 3](/course/chapter3/4), with a few changes for the evaluation.\n\n### [](#preparing-everything-for-training)Preparing everything for training\n\nFirst we need to build the `DataLoader`s from our datasets. We’ll reuse our `data_collator` as a `collate_fn` and shuffle the training set, but not the validation set:\n\n```\nfrom torch.utils.data import DataLoader\n\ntrain_dataloader = DataLoader(\n tokenized_datasets[\"train\"],\n shuffle=True,\n collate_fn=data_collator,\n batch_size=8,\n)\neval_dataloader = DataLoader(\n tokenized_datasets[\"validation\"], collate_fn=data_collator, batch_size=8\n)```\n\nNext we reinstantiate our model, to make sure we’re not continuing the fine-tuning from before but starting from the BERT pretrained model again:\n\n```\nmodel = AutoModelForTokenClassification.from_pretrained(\n model_checkpoint,\n id2label=id2label,\n label2id=label2id,\n)```\n\nThen we will need an optimizer. We’ll use the classic `AdamW`, which is like `Adam`, but with a fix in the way weight decay is applied:\n\n```\nfrom torch.optim import AdamW\n\noptimizer = AdamW(model.parameters(), lr=2e-5)```\n\nOnce we have all those objects, we can send them to the `accelerator.prepare()` method:\n\n```\nfrom accelerate import Accelerator\n\naccelerator = Accelerator()\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n model, optimizer, train_dataloader, eval_dataloader\n)```\n\n🚨 If you’re training on a TPU, you’ll need to move all the code starting from the cell above into a dedicated training function. See [Chapter 3](/course/chapter3) for more details.\n\nNow that we have sent our `train_dataloader` to `accelerator.prepare()`, we can use its length to compute the number of training steps. Remember that we should always do this after preparing the dataloader, as that method will change its length. We use a classic linear schedule from the learning rate to 0:\n\n```\nfrom transformers import get_scheduler\n\nnum_train_epochs = 3\nnum_update_steps_per_epoch = len(train_dataloader)\nnum_training_steps = num_train_epochs * num_update_steps_per_epoch\n\nlr_scheduler = get_scheduler(\n \"linear\",\n optimizer=optimizer,\n num_warmup_steps=0,\n num_training_steps=num_training_steps,\n)```\n\nLastly, to push our model to the Hub, we will need to create a `Repository` object in a working folder. First log in to Hugging Face, if you’re not logged in already. We’ll determine the repository name from the model ID we want to give our model (feel free to replace the `repo_name` with your own choice; it just needs to contain your username, which is what the function `get_full_repo_name()` does):\n\n```\nfrom huggingface_hub import Repository, get_full_repo_name\n\nmodel_name = \"bert-finetuned-ner-accelerate\"\nrepo_name = get_full_repo_name(model_name)\nrepo_name```\n\n```\n'sgugger/bert-finetuned-ner-accelerate'```\n\nThen we can clone that repository in a local folder. If it already exists, this local folder should be an existing clone of the repository we are working with:\n\n```\noutput_dir = \"bert-finetuned-ner-accelerate\"\nrepo = Repository(output_dir, clone_from=repo_name)```\n\nWe can now upload anything we save in `output_dir` by calling the `repo.push_to_hub()` method. This will help us upload the intermediate models at the end of each epoch.\n\n### [](#training-loop)Training loop\n\nWe are now ready to write the full training loop. 
To simplify its evaluation part, we define this `postprocess()` function that takes predictions and labels and converts them to lists of strings, like our `metric` object expects:\n\n```\ndef postprocess(predictions, labels):\n predictions = predictions.detach().cpu().clone().numpy()\n labels = labels.detach().cpu().clone().numpy()\n\n \n true_labels = [[label_names[l] for l in label if l != -100] for label in labels]\n true_predictions = [\n [label_names[p] for (p, l) in zip(prediction, label) if l != -100]\n for prediction, label in zip(predictions, labels)\n ]\n return true_labels, true_predictions```\n\nThen we can write the training loop. After defining a progress bar to follow how training goes, the loop has three parts:\n\n- The training in itself, which is the classic iteration over the `train_dataloader`, forward pass through the model, then backward pass and optimizer step.\n- The evaluation, in which there is a novelty after getting the outputs of our model on a batch: since two processes may have padded the inputs and labels to different shapes, we need to use `accelerator.pad_across_processes()` to make the predictions and labels the same shape before calling the `gather()` method. If we don’t do this, the evaluation will either error out or hang forever. Then we send the results to `metric.add_batch()` and call `metric.compute()` once the evaluation loop is over.\n- Saving and uploading, where we first save the model and the tokenizer, then call `repo.push_to_hub()`. Notice that we use the argument `blocking=False` to tell the 🤗 Hub library to push in an asynchronous process. This way, training continues normally and this (long) instruction is executed in the background.\n\nHere’s the complete code for the training loop:\n\n```\nfrom tqdm.auto import tqdm\nimport torch\n\nprogress_bar = tqdm(range(num_training_steps))\n\nfor epoch in range(num_train_epochs):\n \n model.train()\n for batch in train_dataloader:\n outputs = model(**batch)\n loss = outputs.loss\n accelerator.backward(loss)\n\n optimizer.step()\n lr_scheduler.step()\n optimizer.zero_grad()\n progress_bar.update(1)\n\n \n model.eval()\n for batch in eval_dataloader:\n with torch.no_grad():\n outputs = model(**batch)\n\n predictions = outputs.logits.argmax(dim=-1)\n labels = batch[\"labels\"]\n\n \n predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100)\n labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100)\n\n predictions_gathered = accelerator.gather(predictions)\n labels_gathered = accelerator.gather(labels)\n\n true_predictions, true_labels = postprocess(predictions_gathered, labels_gathered)\n metric.add_batch(predictions=true_predictions, references=true_labels)\n\n results = metric.compute()\n print(\n f\"epoch {epoch}:\",\n {\n key: results[f\"overall_{key}\"]\n for key in [\"precision\", \"recall\", \"f1\", \"accuracy\"]\n },\n )\n\n \n accelerator.wait_for_everyone()\n unwrapped_model = accelerator.unwrap_model(model)\n unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)\n if accelerator.is_main_process:\n tokenizer.save_pretrained(output_dir)\n repo.push_to_hub(\n commit_message=f\"Training in progress epoch {epoch}\", blocking=False\n )```\n\nIn case this is the first time you’re seeing a model saved with 🤗 Accelerate, let’s take a moment to inspect the three lines of code that go with it:\n\n```\naccelerator.wait_for_everyone()\nunwrapped_model = accelerator.unwrap_model(model)\nunwrapped_model.save_pretrained(output_dir, 
save_function=accelerator.save)```\n\nThe first line is self-explanatory: it tells all the processes to wait until everyone is at that stage before continuing. This is to make sure we have the same model in every process before saving. Then we grab the `unwrapped_model`, which is the base model we defined. The `accelerator.prepare()` method changes the model to work in distributed training, so it won’t have the `save_pretrained()` method anymore; the `accelerator.unwrap_model()` method undoes that step. Lastly, we call `save_pretrained()` but tell that method to use `accelerator.save()` instead of `torch.save()`.\n\nOnce this is done, you should have a model that produces results pretty similar to the one trained with the `Trainer`. You can check the model we trained using this code at [_huggingface-course/bert-finetuned-ner-accelerate_](https://huggingface.co/huggingface-course/bert-finetuned-ner-accelerate). And if you want to test out any tweaks to the training loop, you can directly implement them by editing the code shown above!\n\n## [](#using-the-fine-tuned-model)Using the fine-tuned model\n\nWe’ve already shown you how you can use the model we fine-tuned on the Model Hub with the inference widget. To use it locally in a `pipeline`, you just have to specify the proper model identifier:\n\n```\nfrom transformers import pipeline\n\n\nmodel_checkpoint = \"huggingface-course/bert-finetuned-ner\"\ntoken_classifier = pipeline(\n \"token-classification\", model=model_checkpoint, aggregation_strategy=\"simple\"\n)\ntoken_classifier(\"My name is Sylvain and I work at Hugging Face in Brooklyn.\")```\n\n```\n[{'entity_group': 'PER', 'score': 0.9988506, 'word': 'Sylvain', 'start': 11, 'end': 18},\n {'entity_group': 'ORG', 'score': 0.9647625, 'word': 'Hugging Face', 'start': 33, 'end': 45},\n {'entity_group': 'LOC', 'score': 0.9986118, 'word': 'Brooklyn', 'start': 49, 'end': 57}]```\n\nGreat! Our model is working as well as the default one for this pipeline!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tToken classification - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

Token classification

\"Ask \"Open \"Open

The first application we’ll explore is token classification. This generic task encompasses any problem that can be formulated as “attributing a label to each token in a sentence,” such as:

  • Named entity recognition (NER): Find the entities (such as persons, locations, or organizations) in a sentence. This can be formulated as attributing a label to each token by having one class per entity and one class for “no entity.”
  • Part-of-speech tagging (POS): Mark each word in a sentence as corresponding to a particular part of speech (such as noun, verb, adjective, etc.).
  • Chunking: Find the tokens that belong to the same chunk. This task (which can be combined with POS or NER) can be formulated as attributing one label (usually B-) to any tokens that are at the beginning of a chunk, another label (usually I-) to tokens that are inside a chunk, and a third label (usually O) to tokens that don’t belong to any chunk.

Of course, there are many other types of token classification problems; those are just a few representative examples. In this section, we will fine-tune a model (BERT) on a NER task, which will then be able to compute predictions like this one:

\"One-hot \"One-hot

You can find the model we’ll train and upload to the Hub and double-check its predictions here.
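
If you’d like a quick preview of what those predictions look like before we build the model, you can already load that checkpoint in a pipeline (this is the same usage shown in full at the end of this section):

```
from transformers import pipeline

# Preview of the finished model; the full walkthrough of this snippet comes at
# the end of this section.
token_classifier = pipeline(
    "token-classification",
    model="huggingface-course/bert-finetuned-ner",
    aggregation_strategy="simple",
)
token_classifier("My name is Sylvain and I work at Hugging Face in Brooklyn.")
```

This returns grouped PER (“Sylvain”), ORG (“Hugging Face”), and LOC (“Brooklyn”) entities.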

Preparing the data

First things first, we need a dataset suitable for token classification. In this section we will use the CoNLL-2003 dataset, which contains news stories from Reuters.

💡 As long as your dataset consists of texts split into words with their corresponding labels, you will be able to adapt the data processing procedures described here to your own dataset. Refer back to Chapter 5 if you need a refresher on how to load your own custom data in a Dataset.
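
For instance, if your annotated corpus lives in local JSON files, a minimal sketch of loading it could look like the following (the file names are hypothetical; the only requirement is that each record provides the words and one label per word):

```
from datasets import load_dataset

# Hypothetical local files in which every record already has "tokens" (a list
# of words) and "ner_tags" (one integer label per word), like CoNLL-2003 does.
custom_datasets = load_dataset(
    "json",
    data_files={"train": "my_train.json", "validation": "my_valid.json"},
)
```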

The CoNLL-2003 dataset

To load the CoNLL-2003 dataset, we use the load_dataset() method from the 🤗 Datasets library:

from datasets import load_dataset\n\nraw_datasets = load_dataset(\"conll2003\")

This will download and cache the dataset, like we saw in Chapter 3 for the GLUE MRPC dataset. Inspecting this object shows us the columns present and the split between the training, validation, and test sets:

raw_datasets
DatasetDict({\n    train: Dataset({\n        features: ['chunk_tags', 'id', 'ner_tags', 'pos_tags', 'tokens'],\n        num_rows: 14041\n    })\n    validation: Dataset({\n        features: ['chunk_tags', 'id', 'ner_tags', 'pos_tags', 'tokens'],\n        num_rows: 3250\n    })\n    test: Dataset({\n        features: ['chunk_tags', 'id', 'ner_tags', 'pos_tags', 'tokens'],\n        num_rows: 3453\n    })\n})

In particular, we can see the dataset contains labels for the three tasks we mentioned earlier: NER, POS, and chunking. A big difference from other datasets is that the input texts are not presented as sentences or documents, but as lists of words (the last column is called tokens, but it contains words in the sense that these are pre-tokenized inputs that still need to go through the tokenizer for subword tokenization).

Let’s have a look at the first element of the training set:

raw_datasets[\"train\"][0][\"tokens\"]
['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']

Since we want to perform named entity recognition, we will look at the NER tags:

raw_datasets[\"train\"][0][\"ner_tags\"]
[3, 0, 7, 0, 0, 0, 7, 0, 0]

Those are the labels as integers ready for training, but they’re not necessarily useful when we want to inspect the data. Like for text classification, we can access the correspondence between those integers and the label names by looking at the features attribute of our dataset:

ner_feature = raw_datasets[\"train\"].features[\"ner_tags\"]\nner_feature
Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], names_file=None, id=None), length=-1, id=None)

So this column contains elements that are sequences of ClassLabels. The type of the elements of the sequence is in the feature attribute of this ner_feature, and we can access the list of names by looking at the names attribute of that feature:

label_names = ner_feature.feature.names\nlabel_names
['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']

We already saw these labels when digging into the token-classification pipeline in Chapter 6, but for a quick refresher:

  • O means the word doesn’t correspond to any entity.
  • B-PER/I-PER means the word corresponds to the beginning of/is inside a person entity.
  • B-ORG/I-ORG means the word corresponds to the beginning of/is inside an organization entity.
  • B-LOC/I-LOC means the word corresponds to the beginning of/is inside a location entity.
  • B-MISC/I-MISC means the word corresponds to the beginning of/is inside a miscellaneous entity.

Now decoding the labels we saw earlier gives us this:

words = raw_datasets[\"train\"][0][\"tokens\"]\nlabels = raw_datasets[\"train\"][0][\"ner_tags\"]\nline1 = \"\"\nline2 = \"\"\nfor word, label in zip(words, labels):\n    full_label = label_names[label]\n    max_length = max(len(word), len(full_label))\n    line1 += word + \" \" * (max_length - len(word) + 1)\n    line2 += full_label + \" \" * (max_length - len(full_label) + 1)\n\nprint(line1)\nprint(line2)
'EU    rejects German call to boycott British lamb .'\n'B-ORG O       B-MISC O    O  O       B-MISC  O    O'

And for an example mixing B- and I- labels, here’s what the same code gives us on the element of the training set at index 4:

'Germany \\'s representative to the European Union \\'s veterinary committee Werner Zwingmann said on Wednesday consumers should buy sheepmeat from countries other than Britain until the scientific advice was clearer .'\n'B-LOC   O  O              O  O   B-ORG    I-ORG O  O          O         B-PER  I-PER     O    O  O         O         O      O   O         O    O         O     O    B-LOC   O     O   O          O      O   O       O'

As we can see, entities spanning two words, like “European Union” and “Werner Zwingmann,” are attributed a B- label for the first word and an I- label for the second.

✏️ Your turn! Print the same two sentences with their POS or chunking labels.
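
If you want to compare your answer, here is one possible sketch (not the official solution) that reuses the same printing logic with the pos_tags column:

```
# One possible sketch for the exercise above: same alignment code, POS labels.
pos_names = raw_datasets["train"].features["pos_tags"].feature.names

words = raw_datasets["train"][0]["tokens"]
pos_labels = raw_datasets["train"][0]["pos_tags"]
line1, line2 = "", ""
for word, label in zip(words, pos_labels):
    full_label = pos_names[label]
    max_length = max(len(word), len(full_label))
    line1 += word + " " * (max_length - len(word) + 1)
    line2 += full_label + " " * (max_length - len(full_label) + 1)

print(line1)
print(line2)
```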

Processing the data

As usual, our texts need to be converted to token IDs before the model can make sense of them. As we saw in Chapter 6, a big difference in the case of token classification tasks is that we have pre-tokenized inputs. Fortunately, the tokenizer API can deal with that pretty easily; we just need to warn the tokenizer with a special flag.

To begin, let’s create our tokenizer object. As we said before, we will be using a BERT pretrained model, so we’ll start by downloading and caching the associated tokenizer:

from transformers import AutoTokenizer\n\nmodel_checkpoint = \"bert-base-cased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

You can replace the model_checkpoint with any other model you prefer from the Hub, or with a local folder in which you’ve saved a pretrained model and a tokenizer. The only constraint is that the tokenizer needs to be backed by the 🤗 Tokenizers library, so there’s a “fast” version available. You can see all the architectures that come with a fast version in this big table, and to check that the tokenizer object you’re using is indeed backed by 🤗 Tokenizers you can look at its is_fast attribute:

tokenizer.is_fast
True

To tokenize a pre-tokenized input, we can use our tokenizer as usual and just add is_split_into_words=True:

inputs = tokenizer(raw_datasets[\"train\"][0][\"tokens\"], is_split_into_words=True)\ninputs.tokens()
['[CLS]', 'EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'la', '##mb', '.', '[SEP]']

As we can see, the tokenizer added the special tokens used by the model ([CLS] at the beginning and [SEP] at the end) and left most of the words untouched. The word lamb, however, was tokenized into two subwords, la and ##mb. This introduces a mismatch between our inputs and the labels: the list of labels has only 9 elements, whereas our input now has 12 tokens. Accounting for the special tokens is easy (we know they are at the beginning and the end), but we also need to make sure we align all the labels with the proper words.

Fortunately, because we’re using a fast tokenizer we have access to the 🤗 Tokenizers superpowers, which means we can easily map each token to its corresponding word (as seen in Chapter 6):

inputs.word_ids()
[None, 0, 1, 2, 3, 4, 5, 6, 7, 7, 8, None]

With a tiny bit of work, we can then expand our label list to match the tokens. The first rule we’ll apply is that special tokens get a label of -100. This is because by default -100 is an index that is ignored in the loss function we will use (cross entropy). Then, each token gets the same label as the token that started the word it’s inside, since they are part of the same entity. For tokens inside a word but not at the beginning, we replace the B- with I- (since the token does not begin the entity):

def align_labels_with_tokens(labels, word_ids):\n    new_labels = []\n    current_word = None\n    for word_id in word_ids:\n        if word_id != current_word:\n            # Start of a new word!\n            current_word = word_id\n            label = -100 if word_id is None else labels[word_id]\n            new_labels.append(label)\n        elif word_id is None:\n            # Special token\n            new_labels.append(-100)\n        else:\n            # Same word as previous token\n            label = labels[word_id]\n            # If the label is B-XXX we change it to I-XXX\n            if label % 2 == 1:\n                label += 1\n            new_labels.append(label)\n\n    return new_labels

Let’s try it out on our first sentence:

labels = raw_datasets[\"train\"][0][\"ner_tags\"]\nword_ids = inputs.word_ids()\nprint(labels)\nprint(align_labels_with_tokens(labels, word_ids))
[3, 0, 7, 0, 0, 0, 7, 0, 0]\n[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, 0, -100]

As we can see, our function added the -100 for the two special tokens at the beginning and the end, and a new 0 for our word that was split into two tokens.

✏️ Your turn! Some researchers prefer to attribute only one label per word, and assign -100 to the other subtokens in a given word. This is to avoid long words that split into lots of subtokens contributing heavily to the loss. Change the previous function to align labels with input IDs by following this rule.
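
If you want to check your approach, here is one possible sketch (not the official solution) of that variant: only the first token of each word keeps its label, and every other subtoken gets -100:

```
def align_labels_first_token_only(labels, word_ids):
    new_labels = []
    current_word = None
    for word_id in word_ids:
        if word_id is None:
            # Special token
            new_labels.append(-100)
        elif word_id != current_word:
            # First token of a new word: keep the word's label
            current_word = word_id
            new_labels.append(labels[word_id])
        else:
            # Remaining subtokens of the same word are ignored in the loss
            new_labels.append(-100)
    return new_labels
```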

To preprocess our whole dataset, we need to tokenize all the inputs and apply align_labels_with_tokens() on all the labels. To take advantage of the speed of our fast tokenizer, it’s best to tokenize lots of texts at the same time, so we’ll write a function that processes a list of examples and use the Dataset.map() method with the option batched=True. The only thing that is different from our previous example is that the word_ids() function needs to get the index of the example we want the word IDs of when the inputs to the tokenizer are lists of texts (or, in our case, lists of lists of words), so we add that too:

def tokenize_and_align_labels(examples):\n    tokenized_inputs = tokenizer(\n        examples[\"tokens\"], truncation=True, is_split_into_words=True\n    )\n    all_labels = examples[\"ner_tags\"]\n    new_labels = []\n    for i, labels in enumerate(all_labels):\n        word_ids = tokenized_inputs.word_ids(i)\n        new_labels.append(align_labels_with_tokens(labels, word_ids))\n\n    tokenized_inputs[\"labels\"] = new_labels\n    return tokenized_inputs

Note that we haven’t padded our inputs yet; we’ll do that later, when creating the batches with a data collator.

We can now apply all that preprocessing in one go on the other splits of our dataset:

tokenized_datasets = raw_datasets.map(\n    tokenize_and_align_labels,\n    batched=True,\n    remove_columns=raw_datasets[\"train\"].column_names,\n)

We’ve done the hardest part! Now that the data has been preprocessed, the actual training will look a lot like what we did in Chapter 3.

Fine-tuning the model with the Trainer API

The actual code using the Trainer will be the same as before; the only changes are the way the data is collated into a batch and the metric computation function.

Data collation

We can’t just use a DataCollatorWithPadding like in Chapter 3 because that only pads the inputs (input IDs, attention mask, and token type IDs). Here our labels should be padded the exact same way as the inputs so that they stay the same size, using -100 as a value so that the corresponding predictions are ignored in the loss computation.

This is all done by a DataCollatorForTokenClassification. Like the DataCollatorWithPadding, it takes the tokenizer used to preprocess the inputs:

from transformers import DataCollatorForTokenClassification\n\ndata_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

To test this on a few samples, we can just call it on a list of examples from our tokenized training set:

batch = data_collator([tokenized_datasets[\"train\"][i] for i in range(2)])\nbatch[\"labels\"]
tensor([[-100,    3,    0,    7,    0,    0,    0,    7,    0,    0,    0, -100],\n        [-100,    1,    2, -100, -100, -100, -100, -100, -100, -100, -100, -100]])

Let’s compare this to the labels for the first and second elements in our dataset:

for i in range(2):\n    print(tokenized_datasets[\"train\"][i][\"labels\"])
[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, 0, -100]\n[-100, 1, 2, -100]

As we can see, the second set of labels has been padded to the length of the first one using -100s.

Metrics

To have the Trainer compute a metric every epoch, we will need to define a compute_metrics() function that takes the arrays of predictions and labels, and returns a dictionary with the metric names and values.

The traditional framework used to evaluate token classification prediction is seqeval. To use this metric, we first need to install the seqeval library:

!pip install seqeval

We can then load it via the evaluate.load() function like we did in Chapter 3:

import evaluate\n\nmetric = evaluate.load(\"seqeval\")

This metric does not behave like the standard accuracy: it will actually take the lists of labels as strings, not integers, so we will need to fully decode the predictions and labels before passing them to the metric. Let’s see how it works. First, we’ll get the labels for our first training example:

labels = raw_datasets[\"train\"][0][\"ner_tags\"]\nlabels = [label_names[i] for i in labels]\nlabels
['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O']

We can then create fake predictions for those by just changing the value at index 2:

predictions = labels.copy()\npredictions[2] = \"O\"\nmetric.compute(predictions=[predictions], references=[labels])

Note that the metric takes a list of predictions (not just one) and a list of labels. Here’s the output:

{'MISC': {'precision': 1.0, 'recall': 0.5, 'f1': 0.67, 'number': 2},\n 'ORG': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},\n 'overall_precision': 1.0,\n 'overall_recall': 0.67,\n 'overall_f1': 0.8,\n 'overall_accuracy': 0.89}

This is sending back a lot of information! We get the precision, recall, and F1 score for each separate entity, as well as overall. For our metric computation we will only keep the overall score, but feel free to tweak the compute_metrics() function to return all the metrics you would like reported.

This compute_metrics() function first takes the argmax of the logits to convert them to predictions (as usual, the logits and the probabilities are in the same order, so we don’t need to apply the softmax). Then we have to convert both labels and predictions from integers to strings. We remove all the values where the label is -100, then pass the results to the metric.compute() method:

import numpy as np


def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)

    # Remove ignored index (special tokens) and convert to labels
    true_labels = [[label_names[l] for l in label if l != -100] for label in labels]
    true_predictions = [
        [label_names[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    all_metrics = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": all_metrics["overall_precision"],
        "recall": all_metrics["overall_recall"],
        "f1": all_metrics["overall_f1"],
        "accuracy": all_metrics["overall_accuracy"],
    }
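
If you would also like the per-entity scores in your training logs, one possible tweak is to flatten them into the returned dictionary. This is only a sketch, reusing the names defined above and the nested output format we saw earlier (each entity type maps to its own dictionary of scores):

def compute_metrics_per_entity(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)

    true_labels = [[label_names[l] for l in label if l != -100] for label in labels]
    true_predictions = [
        [label_names[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    all_metrics = metric.compute(predictions=true_predictions, references=true_labels)

    # Keep the overall scores, stripping the "overall_" prefix
    results = {
        key.replace("overall_", ""): value
        for key, value in all_metrics.items()
        if key.startswith("overall_")
    }
    # Entity-level entries (e.g. "PER", "ORG") are nested dictionaries
    for key, value in all_metrics.items():
        if isinstance(value, dict):
            results[f"{key}_f1"] = value["f1"]
    return results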

Now that this is done, we are almost ready to define our Trainer. We just need a model to fine-tune!

Defining the model

Since we are working on a token classification problem, we will use the AutoModelForTokenClassification class. The main thing to remember when defining this model is to pass along some information on the number of labels we have. The easiest way to do this is to pass that number with the num_labels argument, but if we want a nice inference widget working like the one we saw at the beginning of this section, it’s better to set the correct label correspondences instead.

They should be set by two dictionaries, id2label and label2id, which contain the mappings from ID to label and vice versa:

id2label = {i: label for i, label in enumerate(label_names)}
label2id = {v: k for k, v in id2label.items()}

Now we can just pass them to the AutoModelForTokenClassification.from_pretrained() method, and they will be set in the model’s configuration and then properly saved and uploaded to the Hub:

from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint,
    id2label=id2label,
    label2id=label2id,
)

Like when we defined our AutoModelForSequenceClassification in Chapter 3, creating the model issues a warning that some weights were not used (the ones from the pretraining head) and some other weights are randomly initialized (the ones from the new token classification head), and that this model should be trained. We will do that in a minute, but first let’s double-check that our model has the right number of labels:

model.config.num_labels
9

⚠️ If you have a model with the wrong number of labels, you will get an obscure error when calling the Trainer.train() method later on (something like “CUDA error: device-side assert triggered”). This is the number one cause of bugs reported by users for such errors, so make sure you do this check to confirm that you have the expected number of labels.
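
A cheap way to guard against this is to assert the label count (and the label mapping) before training starts. This is just a small sketch using the objects we already have in scope:

assert model.config.num_labels == len(label_names), (
    f"Model has {model.config.num_labels} labels, "
    f"but the dataset defines {len(label_names)}"
)
# The label names in the config should match the dataset's label names
assert set(model.config.id2label.values()) == set(label_names)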

Fine-tuning the model

We are now ready to train our model! We just need to do two last things before we define our Trainer: log in to Hugging Face and define our training arguments. If you’re working in a notebook, there’s a convenience function to help you with this:

from huggingface_hub import notebook_login

notebook_login()

This will display a widget where you can enter your Hugging Face login credentials.

If you aren’t working in a notebook, just type the following line in your terminal:

huggingface-cli login
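
If you are in a plain Python script rather than a notebook or a terminal, recent versions of the huggingface_hub library also expose a login() helper you can call directly. The token below is just a placeholder for your own access token from huggingface.co/settings/tokens:

from huggingface_hub import login

login(token="hf_xxx")  # placeholder -- paste your own access token here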

Once this is done, we can define our TrainingArguments:

from transformers import TrainingArguments

args = TrainingArguments(
    "bert-finetuned-ner",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)

You’ve seen most of these before: we set some hyperparameters (the learning rate, the number of epochs to train for, and the weight decay), the evaluation_strategy and save_strategy arguments make the Trainer evaluate and save the model at the end of every epoch, and push_to_hub=True tells it to upload our results to the Model Hub. Note that you can specify the name of the repository you want to push to with the hub_model_id argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the huggingface-course organization, we added hub_model_id="huggingface-course/bert-finetuned-ner" to TrainingArguments. By default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be "sgugger/bert-finetuned-ner".

💡 If the output directory you are using already exists, it needs to be a local clone of the repository you want to push to. If it isn’t, you’ll get an error when defining your Trainer and will need to set a new name.
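
For instance, pushing to the huggingface-course organization (as mentioned above) would look something like this — swap in an organization or username you have write access to:

args = TrainingArguments(
    "bert-finetuned-ner",
    hub_model_id="huggingface-course/bert-finetuned-ner",  # or "<your-org>/<repo-name>"
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)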

Finally, we just pass everything to the Trainer and launch the training:

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
)
trainer.train()

Note that while the training happens, each time the model is saved (here, every epoch) it is uploaded to the Hub in the background. This way, you will be able to resume your training on another machine if necessary.
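
For example, if training gets interrupted, the Trainer can pick up again from the last checkpoint saved in the output directory. This is a sketch; resume_from_checkpoint also accepts an explicit path to a specific checkpoint folder:

# Resume from the most recent checkpoint found in the output directory
trainer.train(resume_from_checkpoint=True)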

Once the training is complete, we use the push_to_hub() method to make sure we upload the most recent version of the model:

trainer.push_to_hub(commit_message=\"Training complete\")

This command returns the URL of the commit it just did, if you want to inspect it:

'https://huggingface.co/sgugger/bert-finetuned-ner/commit/26ab21e5b1568f9afeccdaed2d8715f571d786ed'

The Trainer also drafts a model card with all the evaluation results and uploads it. At this stage, you can use the inference widget on the Model Hub to test your model and share it with your friends. You have successfully fine-tuned a model on a token classification task — congratulations!

If you want to dive a bit more deeply into the training loop, we will now show you how to do the same thing using 🤗 Accelerate.

A custom training loop

Let’s now take a look at the full training loop, so you can easily customize the parts you need. It will look a lot like what we did in Chapter 3, with a few changes for the evaluation.

Preparing everything for training

First we need to build the DataLoaders from our datasets. We’ll reuse our data_collator as a collate_fn and shuffle the training set, but not the validation set:

from torch.utils.data import DataLoader

train_dataloader = DataLoader(
    tokenized_datasets["train"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)
eval_dataloader = DataLoader(
    tokenized_datasets["validation"], collate_fn=data_collator, batch_size=8
)

Next we reinstantiate our model, to make sure we’re not continuing the fine-tuning from before but starting from the BERT pretrained model again:

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint,
    id2label=id2label,
    label2id=label2id,
)

Then we will need an optimizer. We’ll use the classic AdamW, which is like Adam, but with a fix in the way weight decay is applied:

from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=2e-5)

Once we have all those objects, we can send them to the accelerator.prepare() method:

from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

🚨 If you’re training on a TPU, you’ll need to move all the code starting from the cell above into a dedicated training function. See Chapter 3 for more details.
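
In that case the overall structure looks roughly like the sketch below; see Chapter 3 for the full details of notebook_launcher (the function name training_function is just an illustrative placeholder):

from accelerate import notebook_launcher


def training_function():
    # Model/optimizer/dataloader creation, accelerator.prepare(), and the
    # training loop shown later in this section all go inside this function
    ...


notebook_launcher(training_function)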

Now that we have sent our train_dataloader to accelerator.prepare(), we can use its length to compute the number of training steps. Remember that we should always do this after preparing the dataloader, as that method will change its length. We use a classic linear schedule from the learning rate to 0:

from transformers import get_scheduler

num_train_epochs = 3
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch

lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

Lastly, to push our model to the Hub, we will need to create a Repository object in a working folder. First log in to Hugging Face, if you’re not logged in already. We’ll determine the repository name from the model ID we want to give our model (feel free to replace the repo_name with your own choice; it just needs to contain your username, which is what the function get_full_repo_name() does):

from huggingface_hub import Repository, get_full_repo_name

model_name = "bert-finetuned-ner-accelerate"
repo_name = get_full_repo_name(model_name)
repo_name
'sgugger/bert-finetuned-ner-accelerate'

Then we can clone that repository in a local folder. If it already exists, this local folder should be an existing clone of the repository we are working with:

output_dir = \"bert-finetuned-ner-accelerate\"\nrepo = Repository(output_dir, clone_from=repo_name)

We can now upload anything we save in output_dir by calling the repo.push_to_hub() method. This will help us upload the intermediate models at the end of each epoch.
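
For example, you could already push the tokenizer as a first commit before training starts. This is just a small sketch; blocking=True makes the call wait until the upload is finished:

tokenizer.save_pretrained(output_dir)
repo.push_to_hub(commit_message="Add tokenizer", blocking=True)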

Training loop

We are now ready to write the full training loop. To simplify its evaluation part, we define this postprocess() function that takes predictions and labels and converts them to lists of strings, like our metric object expects:

def postprocess(predictions, labels):
    predictions = predictions.detach().cpu().clone().numpy()
    labels = labels.detach().cpu().clone().numpy()

    # Remove ignored index (special tokens) and convert to labels
    true_labels = [[label_names[l] for l in label if l != -100] for label in labels]
    true_predictions = [
        [label_names[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    return true_labels, true_predictions
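
To see what postprocess() does, here is a tiny check on made-up tensors. It assumes the label_names list defined earlier in this section, where index 0 is O, index 3 is B-ORG, and index 7 is B-MISC:

import torch

fake_predictions = torch.tensor([[3, 0, 7, 0]])
fake_labels = torch.tensor([[-100, 3, 0, -100]])

print(postprocess(fake_predictions, fake_labels))
# ([['B-ORG', 'O']], [['O', 'B-MISC']]) -> positions labeled -100 are dropped on both sides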

Then we can write the training loop. After defining a progress bar to follow how training goes, the loop has three parts:

  • The training in itself, which is the classic iteration over the train_dataloader, forward pass through the model, then backward pass and optimizer step.
  • The evaluation, in which there is a novelty after getting the outputs of our model on a batch: since two processes may have padded the inputs and labels to different shapes, we need to use accelerator.pad_across_processes() to make the predictions and labels the same shape before calling the gather() method. If we don’t do this, the evaluation will either error out or hang forever. Then we send the results to metric.add_batch() and call metric.compute() once the evaluation loop is over.
  • Saving and uploading, where we first save the model and the tokenizer, then call repo.push_to_hub(). Notice that we use the argument blocking=False to tell the 🤗 Hub library to push in an asynchronous process. This way, training continues normally and this (long) instruction is executed in the background.

Here’s the complete code for the training loop:

from tqdm.auto import tqdm
import torch

progress_bar = tqdm(range(num_training_steps))

for epoch in range(num_train_epochs):
    # Training
    model.train()
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    # Evaluation
    model.eval()
    for batch in eval_dataloader:
        with torch.no_grad():
            outputs = model(**batch)

        predictions = outputs.logits.argmax(dim=-1)
        labels = batch["labels"]

        # Necessary to pad predictions and labels for being gathered
        predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100)
        labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100)

        predictions_gathered = accelerator.gather(predictions)
        labels_gathered = accelerator.gather(labels)

        # postprocess() returns the labels first, then the predictions
        true_labels, true_predictions = postprocess(predictions_gathered, labels_gathered)
        metric.add_batch(predictions=true_predictions, references=true_labels)

    results = metric.compute()
    print(
        f"epoch {epoch}:",
        {
            key: results[f"overall_{key}"]
            for key in ["precision", "recall", "f1", "accuracy"]
        },
    )

    # Save and upload
    accelerator.wait_for_everyone()
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)
    if accelerator.is_main_process:
        tokenizer.save_pretrained(output_dir)
        repo.push_to_hub(
            commit_message=f"Training in progress epoch {epoch}", blocking=False
        )

In case this is the first time you’re seeing a model saved with 🤗 Accelerate, let’s take a moment to inspect the three lines of code that go with it:

accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)

The first line is self-explanatory: it tells all the processes to wait until everyone is at that stage before continuing. This is to make sure we have the same model in every process before saving. Then we grab the unwrapped_model, which is the base model we defined. The accelerator.prepare() method changes the model to work in distributed training, so it won’t have the save_pretrained() method anymore; the accelerator.unwrap_model() method undoes that step. Lastly, we call save_pretrained() but tell that method to use accelerator.save() instead of torch.save().

Once this is done, you should have a model that produces results pretty similar to the one trained with the Trainer. You can check the model we trained using this code at huggingface-course/bert-finetuned-ner-accelerate. And if you want to test out any tweaks to the training loop, you can directly implement them by editing the code shown above!

Using the fine-tuned model

We’ve already shown you how you can use the model we fine-tuned on the Model Hub with the inference widget. To use it locally in a pipeline, you just have to specify the proper model identifier:

from transformers import pipeline

# Replace this with your own checkpoint
model_checkpoint = "huggingface-course/bert-finetuned-ner"
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
token_classifier("My name is Sylvain and I work at Hugging Face in Brooklyn.")
[{'entity_group': 'PER', 'score': 0.9988506, 'word': 'Sylvain', 'start': 11, 'end': 18},
 {'entity_group': 'ORG', 'score': 0.9647625, 'word': 'Hugging Face', 'start': 33, 'end': 45},
 {'entity_group': 'LOC', 'score': 0.9986118, 'word': 'Brooklyn', 'start': 49, 'end': 57}]

Great! Our model is working as well as the default one for this pipeline!

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:28.281Z"} {"title":"Fine-tuning a masked language model - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#fine-tuning-a-masked-language-model)Fine-tuning a masked language model\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-7-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter7/section3_pt.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter7/section3_pt.ipynb)\n\nFor many NLP applications involving Transformer models, you can simply take a pretrained model from the Hugging Face Hub and fine-tune it directly on your data for the task at hand. Provided that the corpus used for pretraining is not too different from the corpus used for fine-tuning, transfer learning will usually produce good results.\n\nHowever, there are a few cases where you’ll want to first fine-tune the language models on your data, before training a task-specific head. For example, if your dataset contains legal contracts or scientific articles, a vanilla Transformer model like BERT will typically treat the domain-specific words in your corpus as rare tokens, and the resulting performance may be less than satisfactory. By fine-tuning the language model on in-domain data you can boost the performance of many downstream tasks, which means you usually only have to do this step once!\n\nThis process of fine-tuning a pretrained language model on in-domain data is usually called _domain adaptation_. It was popularized in 2018 by [ULMFiT](https://arxiv.org/abs/1801.06146), which was one of the first neural architectures (based on LSTMs) to make transfer learning really work for NLP. 
An example of domain adaptation with ULMFiT is shown in the image below; in this section we’ll do something similar, but with a Transformer instead of an LSTM!\n\n![ULMFiT.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/ulmfit.svg) ![ULMFiT.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/ulmfit-dark.svg)\n\nBy the end of this section you’ll have a [masked language model](https://huggingface.co/huggingface-course/distilbert-base-uncased-finetuned-imdb?text=This+is+a+great+%5BMASK%5D.) on the Hub that can autocomplete sentences as shown below:\n\nLet’s dive in!\n\n🙋 If the terms “masked language modeling” and “pretrained model” sound unfamiliar to you, go check out [Chapter 1](/course/chapter1), where we explain all these core concepts, complete with videos!\n\n## [](#picking-a-pretrained-model-for-masked-language-modeling)Picking a pretrained model for masked language modeling\n\nTo get started, let’s pick a suitable pretrained model for masked language modeling. As shown in the following screenshot, you can find a list of candidates by applying the “Fill-Mask” filter on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=fill-mask&sort=downloads):\n\n![Hub models.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/mlm-models.png)\n\nAlthough the BERT and RoBERTa family of models are the most downloaded, we’ll use a model called [DistilBERT](https://huggingface.co/distilbert-base-uncased) that can be trained much faster with little to no loss in downstream performance. This model was trained using a special technique called [_knowledge distillation_](https://en.wikipedia.org/wiki/Knowledge_distillation), where a large “teacher model” like BERT is used to guide the training of a “student model” that has far fewer parameters. An explanation of the details of knowledge distillation would take us too far afield in this section, but if you’re interested you can read all about it in [_Natural Language Processing with Transformers_](https://www.oreilly.com/library/view/natural-language-processing/9781098136789/) (colloquially known as the Transformers textbook).\n\nLet’s go ahead and download DistilBERT using the `AutoModelForMaskedLM` class:\n\n```\nfrom transformers import AutoModelForMaskedLM\n\nmodel_checkpoint = \"distilbert-base-uncased\"\nmodel = AutoModelForMaskedLM.from_pretrained(model_checkpoint)```\n\nWe can see how many parameters this model has by calling the `num_parameters()` method:\n\n```\ndistilbert_num_parameters = model.num_parameters() / 1_000_000\nprint(f\"'>>> DistilBERT number of parameters: {round(distilbert_num_parameters)}M'\")\nprint(f\"'>>> BERT number of parameters: 110M'\")```\n\n```\n'>>> DistilBERT number of parameters: 67M'\n'>>> BERT number of parameters: 110M'```\n\nWith around 67 million parameters, DistilBERT is approximately two times smaller than the BERT base model, which roughly translates into a two-fold speedup in training — nice! Let’s now see what kinds of tokens this model predicts are the most likely completions of a small sample of text:\n\n```\ntext = \"This is a great [MASK].\"```\n\nAs humans, we can imagine many possibilities for the `[MASK]` token, such as “day”, “ride”, or “painting”. For pretrained models, the predictions depend on the corpus the model was trained on, since it learns to pick up the statistical patterns present in the data. 
Like BERT, DistilBERT was pretrained on the [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [BookCorpus](https://huggingface.co/datasets/bookcorpus) datasets, so we expect the predictions for `[MASK]` to reflect these domains. To predict the mask we need DistilBERT’s tokenizer to produce the inputs for the model, so let’s download that from the Hub as well:\n\n```\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)```\n\nWith a tokenizer and a model, we can now pass our text example to the model, extract the logits, and print out the top 5 candidates:\n\n```\nimport torch\n\ninputs = tokenizer(text, return_tensors=\"pt\")\ntoken_logits = model(**inputs).logits\n\nmask_token_index = torch.where(inputs[\"input_ids\"] == tokenizer.mask_token_id)[1]\nmask_token_logits = token_logits[0, mask_token_index, :]\n\ntop_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()\n\nfor token in top_5_tokens:\n print(f\"'>>> {text.replace(tokenizer.mask_token, tokenizer.decode([token]))}'\")```\n\n```\n'>>> This is a great deal.'\n'>>> This is a great success.'\n'>>> This is a great adventure.'\n'>>> This is a great idea.'\n'>>> This is a great feat.'```\n\nWe can see from the outputs that the model’s predictions refer to everyday terms, which is perhaps not surprising given the foundation of English Wikipedia. Let’s see how we can change this domain to something a bit more niche — highly polarized movie reviews!\n\n## [](#the-dataset)The dataset\n\nTo showcase domain adaptation, we’ll use the famous [Large Movie Review Dataset](https://huggingface.co/datasets/imdb) (or IMDb for short), which is a corpus of movie reviews that is often used to benchmark sentiment analysis models. By fine-tuning DistilBERT on this corpus, we expect the language model will adapt its vocabulary from the factual data of Wikipedia that it was pretrained on to the more subjective elements of movie reviews. We can get the data from the Hugging Face Hub with the `load_dataset()` function from 🤗 Datasets:\n\n```\nfrom datasets import load_dataset\n\nimdb_dataset = load_dataset(\"imdb\")\nimdb_dataset```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['text', 'label'],\n num_rows: 25000\n })\n test: Dataset({\n features: ['text', 'label'],\n num_rows: 25000\n })\n unsupervised: Dataset({\n features: ['text', 'label'],\n num_rows: 50000\n })\n})```\n\nWe can see that the `train` and `test` splits each consist of 25,000 reviews, while there is an unlabeled split called `unsupervised` that contains 50,000 reviews. Let’s take a look at a few samples to get an idea of what kind of text we’re dealing with. As we’ve done in previous chapters of the course, we’ll chain the `Dataset.shuffle()` and `Dataset.select()` functions to create a random sample:\n\n```\nsample = imdb_dataset[\"train\"].shuffle(seed=42).select(range(3))\n\nfor row in sample:\n print(f\"\\n'>>> Review: {row['text']}'\")\n print(f\"'>>> Label: {row['label']}'\")```\n\n```\n'>>> Review: This is your typical Priyadarshan movie--a bunch of loony characters out on some silly mission. His signature climax has the entire cast of the film coming together and fighting each other in some crazy moshpit over hidden money. Whether it is a winning lottery ticket in Malamaal Weekly, black money in Hera Pheri, \"kodokoo\" in Phir Hera Pheri, etc., etc., the director is becoming ridiculously predictable. 
Don\\'t get me wrong; as clichéd and preposterous his movies may be, I usually end up enjoying the comedy. However, in most his previous movies there has actually been some good humor, (Hungama and Hera Pheri being noteworthy ones). Now, the hilarity of his films is fading as he is using the same formula over and over again.

Songs are good. Tanushree Datta looks awesome. Rajpal Yadav is irritating, and Tusshar is not a whole lot better. Kunal Khemu is OK, and Sharman Joshi is the best.'\n'>>> Label: 0'\n\n'>>> Review: Okay, the story makes no sense, the characters lack any dimensionally, the best dialogue is ad-libs about the low quality of movie, the cinematography is dismal, and only editing saves a bit of the muddle, but Sam\" Peckinpah directed the film. Somehow, his direction is not enough. For those who appreciate Peckinpah and his great work, this movie is a disappointment. Even a great cast cannot redeem the time the viewer wastes with this minimal effort.

The proper response to the movie is the contempt that the director San Peckinpah, James Caan, Robert Duvall, Burt Young, Bo Hopkins, Arthur Hill, and even Gig Young bring to their work. Watch the great Peckinpah films. Skip this mess.'\n'>>> Label: 0'\n\n'>>> Review: I saw this movie at the theaters when I was about 6 or 7 years old. I loved it then, and have recently come to own a VHS version.

My 4 and 6 year old children love this movie and have been asking again and again to watch it.

I have enjoyed watching it again too. Though I have to admit it is not as good on a little TV.

I do not have older children so I do not know what they would think of it.

The songs are very cute. My daughter keeps singing them over and over.

Hope this helps.'\n'>>> Label: 1'```\n\nYep, these are certainly movie reviews, and if you’re old enough you may even understand the comment in the last review about owning a VHS version 😜! Although we won’t need the labels for language modeling, we can already see that a `0` denotes a negative review, while a `1` corresponds to a positive one.\n\n✏️ **Try it out!** Create a random sample of the `unsupervised` split and verify that the labels are neither `0` nor `1`. While you’re at it, you could also check that the labels in the `train` and `test` splits are indeed `0` or `1` — this is a useful sanity check that every NLP practitioner should perform at the start of a new project!\n\nNow that we’ve had a quick look at the data, let’s dive into preparing it for masked language modeling. As we’ll see, there are some additional steps that one needs to take compared to the sequence classification tasks we saw in [Chapter 3](/course/chapter3). Let’s go!\n\n## [](#preprocessing-the-data)Preprocessing the data\n\nFor both auto-regressive and masked language modeling, a common preprocessing step is to concatenate all the examples and then split the whole corpus into chunks of equal size. This is quite different from our usual approach, where we simply tokenize individual examples. Why concatenate everything together? The reason is that individual examples might get truncated if they’re too long, and that would result in losing information that might be useful for the language modeling task!\n\nSo to get started, we’ll first tokenize our corpus as usual, but _without_ setting the `truncation=True` option in our tokenizer. We’ll also grab the word IDs if they are available ((which they will be if we’re using a fast tokenizer, as described in [Chapter 6](/course/chapter6/3)), as we will need them later on to do whole word masking. We’ll wrap this in a simple function, and while we’re at it we’ll remove the `text` and `label` columns since we don’t need them any longer:\n\n```\ndef tokenize_function(examples):\n result = tokenizer(examples[\"text\"])\n if tokenizer.is_fast:\n result[\"word_ids\"] = [result.word_ids(i) for i in range(len(result[\"input_ids\"]))]\n return result\n\n\n\ntokenized_datasets = imdb_dataset.map(\n tokenize_function, batched=True, remove_columns=[\"text\", \"label\"]\n)\ntokenized_datasets```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['attention_mask', 'input_ids', 'word_ids'],\n num_rows: 25000\n })\n test: Dataset({\n features: ['attention_mask', 'input_ids', 'word_ids'],\n num_rows: 25000\n })\n unsupervised: Dataset({\n features: ['attention_mask', 'input_ids', 'word_ids'],\n num_rows: 50000\n })\n})```\n\nSince DistilBERT is a BERT-like model, we can see that the encoded texts consist of the `input_ids` and `attention_mask` that we’ve seen in other chapters, as well as the `word_ids` we added.\n\nNow that we’ve tokenized our movie reviews, the next step is to group them all together and split the result into chunks. But how big should these chunks be? This will ultimately be determined by the amount of GPU memory that you have available, but a good starting point is to see what the model’s maximum context size is. 
This can be inferred by inspecting the `model_max_length` attribute of the tokenizer:\n\n```\ntokenizer.model_max_length```\n\nThis value is derived from the _tokenizer\\_config.json_ file associated with a checkpoint; in this case we can see that the context size is 512 tokens, just like with BERT.\n\n✏️ **Try it out!** Some Transformer models, like [BigBird](https://huggingface.co/google/bigbird-roberta-base) and [Longformer](hf.co/allenai/longformer-base-4096), have a much longer context length than BERT and other early Transformer models. Instantiate the tokenizer for one of these checkpoints and verify that the `model_max_length` agrees with what’s quoted on its model card.\n\nSo, in order to run our experiments on GPUs like those found on Google Colab, we’ll pick something a bit smaller that can fit in memory:\n\nNote that using a small chunk size can be detrimental in real-world scenarios, so you should use a size that corresponds to the use case you will apply your model to.\n\nNow comes the fun part. To show how the concatenation works, let’s take a few reviews from our tokenized training set and print out the number of tokens per review:\n\n```\ntokenized_samples = tokenized_datasets[\"train\"][:3]\n\nfor idx, sample in enumerate(tokenized_samples[\"input_ids\"]):\n print(f\"'>>> Review {idx} length: {len(sample)}'\")```\n\n```\n'>>> Review 0 length: 200'\n'>>> Review 1 length: 559'\n'>>> Review 2 length: 192'```\n\nWe can then concatenate all these examples with a simple dictionary comprehension, as follows:\n\n```\nconcatenated_examples = {\n k: sum(tokenized_samples[k], []) for k in tokenized_samples.keys()\n}\ntotal_length = len(concatenated_examples[\"input_ids\"])\nprint(f\"'>>> Concatenated reviews length: {total_length}'\")```\n\n```\n'>>> Concatenated reviews length: 951'```\n\nGreat, the total length checks out — so now let’s split the concatenated reviews into chunks of the size given by `block_size`. To do so, we iterate over the features in `concatenated_examples` and use a list comprehension to create slices of each feature. The result is a dictionary of chunks for each feature:\n\n```\nchunks = {\n k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]\n for k, t in concatenated_examples.items()\n}\n\nfor chunk in chunks[\"input_ids\"]:\n print(f\"'>>> Chunk length: {len(chunk)}'\")```\n\n```\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 55'```\n\nAs you can see in this example, the last chunk will generally be smaller than the maximum chunk size. 
There are two main strategies for dealing with this:\n\n- Drop the last chunk if it’s smaller than `chunk_size`.\n- Pad the last chunk until its length equals `chunk_size`.\n\nWe’ll take the first approach here, so let’s wrap all of the above logic in a single function that we can apply to our tokenized datasets:\n\n```\ndef group_texts(examples):\n \n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\n \n total_length = len(concatenated_examples[list(examples.keys())[0]])\n \n total_length = (total_length // chunk_size) * chunk_size\n \n result = {\n k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]\n for k, t in concatenated_examples.items()\n }\n \n result[\"labels\"] = result[\"input_ids\"].copy()\n return result```\n\nNote that in the last step of `group_texts()` we create a new `labels` column which is a copy of the `input_ids` one. As we’ll see shortly, that’s because in masked language modeling the objective is to predict randomly masked tokens in the input batch, and by creating a `labels` column we provide the ground truth for our language model to learn from.\n\nLet’s now apply `group_texts()` to our tokenized datasets using our trusty `Dataset.map()` function:\n\n```\nlm_datasets = tokenized_datasets.map(group_texts, batched=True)\nlm_datasets```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n num_rows: 61289\n })\n test: Dataset({\n features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n num_rows: 59905\n })\n unsupervised: Dataset({\n features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n num_rows: 122963\n })\n})```\n\nYou can see that grouping and then chunking the texts has produced many more examples than our original 25,000 for the `train` and `test` splits. That’s because we now have examples involving _contiguous tokens_ that span across multiple examples from the original corpus. You can see this explicitly by looking for the special `[SEP]` and `[CLS]` tokens in one of the chunks:\n\n```\ntokenizer.decode(lm_datasets[\"train\"][1][\"input_ids\"])```\n\n```\n\".... at.......... high. a classic line : inspector : i'm here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn't! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. most people think of the homeless\"```\n\nIn this example you can see two overlapping movie reviews, one about a high school movie and the other about homelessness. Let’s also check out what the labels look like for masked language modeling:\n\n```\ntokenizer.decode(lm_datasets[\"train\"][1][\"labels\"])```\n\n```\n\".... at.......... high. a classic line : inspector : i'm here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn't! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. 
most people think of the homeless\"```\n\nAs expected from our `group_texts()` function above, this looks identical to the decoded `input_ids` — but then how can our model possibly learn anything? We’re missing a key step: inserting `[MASK]` tokens at random positions in the inputs! Let’s see how we can do this on the fly during fine-tuning using a special data collator.\n\n## [](#fine-tuning-distilbert-with-the-trainer-api)Fine-tuning DistilBERT with the `Trainer` API\n\nFine-tuning a masked language model is almost identical to fine-tuning a sequence classification model, like we did in [Chapter 3](/course/chapter3). The only difference is that we need a special data collator that can randomly mask some of the tokens in each batch of texts. Fortunately, 🤗 Transformers comes prepared with a dedicated `DataCollatorForLanguageModeling` for just this task. We just have to pass it the tokenizer and an `mlm_probability` argument that specifies what fraction of the tokens to mask. We’ll pick 15%, which is the amount used for BERT and a common choice in the literature:\n\n```\nfrom transformers import DataCollatorForLanguageModeling\n\ndata_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)```\n\nTo see how the random masking works, let’s feed a few examples to the data collator. Since it expects a list of `dict`s, where each `dict` represents a single chunk of contiguous text, we first iterate over the dataset before feeding the batch to the collator. We remove the `\"word_ids\"` key for this data collator as it does not expect it:\n\n```\nsamples = [lm_datasets[\"train\"][i] for i in range(2)]\nfor sample in samples:\n _ = sample.pop(\"word_ids\")\n\nfor chunk in data_collator(samples)[\"input_ids\"]:\n print(f\"\\n'>>> {tokenizer.decode(chunk)}'\")```\n\n```\n'>>> [CLS] bromwell [MASK] is a cartoon comedy. it ran at the same [MASK] as some other [MASK] about school life, [MASK] as \" teachers \". [MASK] [MASK] [MASK] in the teaching [MASK] lead [MASK] to believe that bromwell high\\'[MASK] satire is much closer to reality than is \" teachers \". the scramble [MASK] [MASK] financially, the [MASK]ful students whogn [MASK] right through [MASK] pathetic teachers\\'pomp, the pettiness of the whole situation, distinction remind me of the schools i knew and their students. when i saw [MASK] episode in [MASK] a student repeatedly tried to burn down the school, [MASK] immediately recalled. [MASK]...'\n\n'>>> .... at.. [MASK]... [MASK]... high. a classic line plucked inspector : i\\'[MASK] here to [MASK] one of your [MASK]. student : welcome to bromwell [MASK]. i expect that many adults of my age think that [MASK]mwell [MASK] is [MASK] fetched. what a pity that it isn\\'t! [SEP] [CLS] [MASK]ness ( or [MASK]lessness as george 宇in stated )公 been an issue for years but never [MASK] plan to help those on the street that were once considered human [MASK] did everything from going to school, [MASK], [MASK] vote for the matter. most people think [MASK] the homeless'```\n\nNice, it worked! We can see that the `[MASK]` token has been randomly inserted at various locations in our text. These will be the tokens which our model will have to predict during training — and the beauty of the data collator is that it will randomize the `[MASK]` insertion with every batch!\n\n✏️ **Try it out!** Run the code snippet above several times to see the random masking happen in front of your very eyes! 
Also replace the `tokenizer.decode()` method with `tokenizer.convert_ids_to_tokens()` to see that sometimes a single token from a given word is masked, and not the others.\n\nOne side effect of random masking is that our evaluation metrics will not be deterministic when using the `Trainer`, since we use the same data collator for the training and test sets. We’ll see later, when we look at fine-tuning with 🤗 Accelerate, how we can use the flexibility of a custom evaluation loop to freeze the randomness.\n\nWhen training models for masked language modeling, one technique that can be used is to mask whole words together, not just individual tokens. This approach is called _whole word masking_. If we want to use whole word masking, we will need to build a data collator ourselves. A data collator is just a function that takes a list of samples and converts them into a batch, so let’s do this now! We’ll use the word IDs computed earlier to make a map between word indices and the corresponding tokens, then randomly decide which words to mask and apply that mask on the inputs. Note that the labels are all `-100` except for the ones corresponding to mask words.\n\n```\nimport collections\nimport numpy as np\n\nfrom transformers import default_data_collator\n\nwwm_probability = 0.2\n\n\ndef whole_word_masking_data_collator(features):\n for feature in features:\n word_ids = feature.pop(\"word_ids\")\n\n \n mapping = collections.defaultdict(list)\n current_word_index = -1\n current_word = None\n for idx, word_id in enumerate(word_ids):\n if word_id is not None:\n if word_id != current_word:\n current_word = word_id\n current_word_index += 1\n mapping[current_word_index].append(idx)\n\n \n mask = np.random.binomial(1, wwm_probability, (len(mapping),))\n input_ids = feature[\"input_ids\"]\n labels = feature[\"labels\"]\n new_labels = [-100] * len(labels)\n for word_id in np.where(mask)[0]:\n word_id = word_id.item()\n for idx in mapping[word_id]:\n new_labels[idx] = labels[idx]\n input_ids[idx] = tokenizer.mask_token_id\n feature[\"labels\"] = new_labels\n\n return default_data_collator(features)```\n\nNext, we can try it on the same samples as before:\n\n```\nsamples = [lm_datasets[\"train\"][i] for i in range(2)]\nbatch = whole_word_masking_data_collator(samples)\n\nfor chunk in batch[\"input_ids\"]:\n print(f\"\\n'>>> {tokenizer.decode(chunk)}'\")```\n\n```\n'>>> [CLS] bromwell high is a cartoon comedy [MASK] it ran at the same time as some other programs about school life, such as \" teachers \". my 35 years in the teaching profession lead me to believe that bromwell high\\'s satire is much closer to reality than is \" teachers \". the scramble to survive financially, the insightful students who can see right through their pathetic teachers\\'pomp, the pettiness of the whole situation, all remind me of the schools i knew and their students. when i saw the episode in which a student repeatedly tried to burn down the school, i immediately recalled.....'\n\n'>>> .... [MASK] [MASK] [MASK] [MASK]....... high. a classic line : inspector : i\\'m here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn\\'t! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. 
most people think of the homeless'```\n\n✏️ **Try it out!** Run the code snippet above several times to see the random masking happen in front of your very eyes! Also replace the `tokenizer.decode()` method with `tokenizer.convert_ids_to_tokens()` to see that the tokens from a given word are always masked together.\n\nNow that we have two data collators, the rest of the fine-tuning steps are standard. Training can take a while on Google Colab if you’re not lucky enough to score a mythical P100 GPU 😭, so we’ll first downsample the size of the training set to a few thousand examples. Don’t worry, we’ll still get a pretty decent language model! A quick way to downsample a dataset in 🤗 Datasets is via the `Dataset.train_test_split()` function that we saw in [Chapter 5](/course/chapter5):\n\n```\ntrain_size = 10_000\ntest_size = int(0.1 * train_size)\n\ndownsampled_dataset = lm_datasets[\"train\"].train_test_split(\n train_size=train_size, test_size=test_size, seed=42\n)\ndownsampled_dataset```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n num_rows: 10000\n })\n test: Dataset({\n features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n num_rows: 1000\n })\n})```\n\nThis has automatically created new `train` and `test` splits, with the training set size set to 10,000 examples and the validation set to 10% of that — feel free to increase this if you have a beefy GPU! The next thing we need to do is log in to the Hugging Face Hub. If you’re running this code in a notebook, you can do so with the following utility function:\n\n```\nfrom huggingface_hub import notebook_login\n\nnotebook_login()```\n\nwhich will display a widget where you can enter your credentials. Alternatively, you can run:\n\nin your favorite terminal and log in there.\n\nOnce we’re logged in, we can specify the arguments for the `Trainer`:\n\n```\nfrom transformers import TrainingArguments\n\nbatch_size = 64\n\nlogging_steps = len(downsampled_dataset[\"train\"]) // batch_size\nmodel_name = model_checkpoint.split(\"/\")[-1]\n\ntraining_args = TrainingArguments(\n output_dir=f\"{model_name}-finetuned-imdb\",\n overwrite_output_dir=True,\n evaluation_strategy=\"epoch\",\n learning_rate=2e-5,\n weight_decay=0.01,\n per_device_train_batch_size=batch_size,\n per_device_eval_batch_size=batch_size,\n push_to_hub=True,\n fp16=True,\n logging_steps=logging_steps,\n)```\n\nHere we tweaked a few of the default options, including `logging_steps` to ensure we track the training loss with each epoch. We’ve also used `fp16=True` to enable mixed-precision training, which gives us another boost in speed. By default, the `Trainer` will remove any columns that are not part of the model’s `forward()` method. This means that if you’re using the whole word masking collator, you’ll also need to set `remove_unused_columns=False` to ensure we don’t lose the `word_ids` column during training.\n\nNote that you can specify the name of the repository you want to push to with the `hub_model_id` argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the [`huggingface-course` organization](https://huggingface.co/huggingface-course), we added `hub_model_id=\"huggingface-course/distilbert-finetuned-imdb\"` to `TrainingArguments`. 
By default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be `\"lewtun/distilbert-finetuned-imdb\"`.\n\nWe now have all the ingredients to instantiate the `Trainer`. Here we just use the standard `data_collator`, but you can try the whole word masking collator and compare the results as an exercise:\n\n```\nfrom transformers import Trainer\n\ntrainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=downsampled_dataset[\"train\"],\n eval_dataset=downsampled_dataset[\"test\"],\n data_collator=data_collator,\n tokenizer=tokenizer,\n)```\n\nWe’re now ready to run `trainer.train()` — but before doing so let’s briefly look at _perplexity_, which is a common metric to evaluate the performance of language models.\n\n### [](#perplexity-for-language-models)Perplexity for language models\n\nUnlike other tasks like text classification or question answering where we’re given a labeled corpus to train on, with language modeling we don’t have any explicit labels. So how do we determine what makes a good language model? Like with the autocorrect feature in your phone, a good language model is one that assigns high probabilities to sentences that are grammatically correct, and low probabilities to nonsense sentences. To give you a better idea of what this looks like, you can find whole sets of “autocorrect fails” online, where the model in a person’s phone has produced some rather funny (and often inappropriate) completions!\n\nAssuming our test set consists mostly of sentences that are grammatically correct, then one way to measure the quality of our language model is to calculate the probabilities it assigns to the next word in all the sentences of the test set. High probabilities indicates that the model is not “surprised” or “perplexed” by the unseen examples, and suggests it has learned the basic patterns of grammar in the language. There are various mathematical definitions of perplexity, but the one we’ll use defines it as the exponential of the cross-entropy loss. Thus, we can calculate the perplexity of our pretrained model by using the `Trainer.evaluate()` function to compute the cross-entropy loss on the test set and then taking the exponential of the result:\n\n```\nimport math\n\neval_results = trainer.evaluate()\nprint(f\">>> Perplexity: {math.exp(eval_results['eval_loss']):.2f}\")```\n\nA lower perplexity score means a better language model, and we can see here that our starting model has a somewhat large value. Let’s see if we can lower it by fine-tuning! To do that, we first run the training loop:\n\nand then compute the resulting perplexity on the test set as before:\n\n```\neval_results = trainer.evaluate()\nprint(f\">>> Perplexity: {math.exp(eval_results['eval_loss']):.2f}\")```\n\nNice — this is quite a reduction in perplexity, which tells us the model has learned something about the domain of movie reviews!\n\nOnce training is finished, we can push the model card with the training information to the Hub (the checkpoints are saved during training itself):\n\n✏️ **Your turn!** Run the training above after changing the data collator to the whole word masking collator. Do you get better results?\n\nIn our use case we didn’t need to do anything special with the training loop, but in some cases you might need to implement some custom logic. 
For these applications, you can use 🤗 Accelerate — let’s take a look!\n\n## [](#fine-tuning-distilbert-with-accelerate)Fine-tuning DistilBERT with 🤗 Accelerate\n\nAs we saw with the `Trainer`, fine-tuning a masked language model is very similar to the text classification example from [Chapter 3](/course/chapter3). In fact, the only subtlety is the use of a special data collator, and we’ve already covered that earlier in this section!\n\nHowever, we saw that `DataCollatorForLanguageModeling` also applies random masking with each evaluation, so we’ll see some fluctuations in our perplexity scores with each training run. One way to eliminate this source of randomness is to apply the masking _once_ on the whole test set, and then use the default data collator in 🤗 Transformers to collect the batches during evaluation. To see how this works, let’s implement a simple function that applies the masking on a batch, similar to our first encounter with `DataCollatorForLanguageModeling`:\n\n```\ndef insert_random_mask(batch):\n features = [dict(zip(batch, t)) for t in zip(*batch.values())]\n masked_inputs = data_collator(features)\n \n return {\"masked_\" + k: v.numpy() for k, v in masked_inputs.items()}```\n\nNext, we’ll apply this function to our test set and drop the unmasked columns so we can replace them with the masked ones. You can use whole word masking by replacing the `data_collator` above with the appropriate one, in which case you should remove the first line here:\n\n```\ndownsampled_dataset = downsampled_dataset.remove_columns([\"word_ids\"])\neval_dataset = downsampled_dataset[\"test\"].map(\n insert_random_mask,\n batched=True,\n remove_columns=downsampled_dataset[\"test\"].column_names,\n)\neval_dataset = eval_dataset.rename_columns(\n {\n \"masked_input_ids\": \"input_ids\",\n \"masked_attention_mask\": \"attention_mask\",\n \"masked_labels\": \"labels\",\n }\n)```\n\nWe can then set up the dataloaders as usual, but we’ll use the `default_data_collator` from 🤗 Transformers for the evaluation set:\n\n```\nfrom torch.utils.data import DataLoader\nfrom transformers import default_data_collator\n\nbatch_size = 64\ntrain_dataloader = DataLoader(\n downsampled_dataset[\"train\"],\n shuffle=True,\n batch_size=batch_size,\n collate_fn=data_collator,\n)\neval_dataloader = DataLoader(\n eval_dataset, batch_size=batch_size, collate_fn=default_data_collator\n)```\n\nForm here, we follow the standard steps with 🤗 Accelerate. 
The first order of business is to load a fresh version of the pretrained model:\n\n```\nmodel = AutoModelForMaskedLM.from_pretrained(model_checkpoint)```\n\nThen we need to specify the optimizer; we’ll use the standard `AdamW`:\n\n```\nfrom torch.optim import AdamW\n\noptimizer = AdamW(model.parameters(), lr=5e-5)```\n\nWith these objects, we can now prepare everything for training with the `Accelerator` object:\n\n```\nfrom accelerate import Accelerator\n\naccelerator = Accelerator()\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n model, optimizer, train_dataloader, eval_dataloader\n)```\n\nNow that our model, optimizer, and dataloaders are configured, we can specify the learning rate scheduler as follows:\n\n```\nfrom transformers import get_scheduler\n\nnum_train_epochs = 3\nnum_update_steps_per_epoch = len(train_dataloader)\nnum_training_steps = num_train_epochs * num_update_steps_per_epoch\n\nlr_scheduler = get_scheduler(\n \"linear\",\n optimizer=optimizer,\n num_warmup_steps=0,\n num_training_steps=num_training_steps,\n)```\n\nThere is just one last thing to do before training: create a model repository on the Hugging Face Hub! We can use the 🤗 Hub library to first generate the full name of our repo:\n\n```\nfrom huggingface_hub import get_full_repo_name\n\nmodel_name = \"distilbert-base-uncased-finetuned-imdb-accelerate\"\nrepo_name = get_full_repo_name(model_name)\nrepo_name```\n\n```\n'lewtun/distilbert-base-uncased-finetuned-imdb-accelerate'```\n\nthen create and clone the repository using the `Repository` class from 🤗 Hub:\n\n```\nfrom huggingface_hub import Repository\n\noutput_dir = model_name\nrepo = Repository(output_dir, clone_from=repo_name)```\n\nWith that done, it’s just a simple matter of writing out the full training and evaluation loop:\n\n```\nfrom tqdm.auto import tqdm\nimport torch\nimport math\n\nprogress_bar = tqdm(range(num_training_steps))\n\nfor epoch in range(num_train_epochs):\n \n model.train()\n for batch in train_dataloader:\n outputs = model(**batch)\n loss = outputs.loss\n accelerator.backward(loss)\n\n optimizer.step()\n lr_scheduler.step()\n optimizer.zero_grad()\n progress_bar.update(1)\n\n \n model.eval()\n losses = []\n for step, batch in enumerate(eval_dataloader):\n with torch.no_grad():\n outputs = model(**batch)\n\n loss = outputs.loss\n losses.append(accelerator.gather(loss.repeat(batch_size)))\n\n losses = torch.cat(losses)\n losses = losses[: len(eval_dataset)]\n try:\n perplexity = math.exp(torch.mean(losses))\n except OverflowError:\n perplexity = float(\"inf\")\n\n print(f\">>> Epoch {epoch}: Perplexity: {perplexity}\")\n\n \n accelerator.wait_for_everyone()\n unwrapped_model = accelerator.unwrap_model(model)\n unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)\n if accelerator.is_main_process:\n tokenizer.save_pretrained(output_dir)\n repo.push_to_hub(\n commit_message=f\"Training in progress epoch {epoch}\", blocking=False\n )```\n\n```\n>>> Epoch 0: Perplexity: 11.397545307900472\n>>> Epoch 1: Perplexity: 10.904909330983092\n>>> Epoch 2: Perplexity: 10.729503505340409```\n\nCool, we’ve been able to evaluate perplexity with each epoch and ensure that multiple training runs are reproducible!\n\n## [](#using-our-fine-tuned-model)Using our fine-tuned model\n\nYou can interact with your fine-tuned model either by using its widget on the Hub or locally with the `pipeline` from 🤗 Transformers. 
Let’s use the latter to download our model using the `fill-mask` pipeline:\n\n```\nfrom transformers import pipeline\n\nmask_filler = pipeline(\n \"fill-mask\", model=\"huggingface-course/distilbert-base-uncased-finetuned-imdb\"\n)```\n\nWe can then feed the pipeline our sample text of “This is a great \\[MASK\\]” and see what the top 5 predictions are:\n\n```\npreds = mask_filler(text)\n\nfor pred in preds:\n print(f\">>> {pred['sequence']}\")```\n\n```\n'>>> this is a great movie.'\n'>>> this is a great film.'\n'>>> this is a great story.'\n'>>> this is a great movies.'\n'>>> this is a great character.'```\n\nNeat — our model has clearly adapted its weights to predict words that are more strongly associated with movies!\n\nThis wraps up our first experiment with training a language model. In [section 6](/course/en/chapter7/section6) you’ll learn how to train an auto-regressive model like GPT-2 from scratch; head over there if you’d like to see how you can pretrain your very own Transformer model!\n\n✏️ **Try it out!** To quantify the benefits of domain adaptation, fine-tune a classifier on the IMDb labels for both the pretrained and fine-tuned DistilBERT checkpoints. If you need a refresher on text classification, check out [Chapter 3](/course/chapter3).","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tFine-tuning a masked language model - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
Fine-tuning a masked language model

\"Ask \"Open \"Open

For many NLP applications involving Transformer models, you can simply take a pretrained model from the Hugging Face Hub and fine-tune it directly on your data for the task at hand. Provided that the corpus used for pretraining is not too different from the corpus used for fine-tuning, transfer learning will usually produce good results.

However, there are a few cases where you’ll want to first fine-tune the language models on your data, before training a task-specific head. For example, if your dataset contains legal contracts or scientific articles, a vanilla Transformer model like BERT will typically treat the domain-specific words in your corpus as rare tokens, and the resulting performance may be less than satisfactory. By fine-tuning the language model on in-domain data you can boost the performance of many downstream tasks, which means you usually only have to do this step once!

This process of fine-tuning a pretrained language model on in-domain data is usually called domain adaptation. It was popularized in 2018 by ULMFiT, which was one of the first neural architectures (based on LSTMs) to make transfer learning really work for NLP. An example of domain adaptation with ULMFiT is shown in the image below; in this section we’ll do something similar, but with a Transformer instead of an LSTM!

\"ULMFiT.\" \"ULMFiT.\"

By the end of this section you’ll have a masked language model on the Hub that can autocomplete sentences, which you can try out directly with the inference widget on its model page.

Let’s dive in!

🙋 If the terms “masked language modeling” and “pretrained model” sound unfamiliar to you, go check out Chapter 1, where we explain all these core concepts, complete with videos!

Picking a pretrained model for masked language modeling

To get started, let’s pick a suitable pretrained model for masked language modeling. As shown in the following screenshot, you can find a list of candidates by applying the “Fill-Mask” filter on the Hugging Face Hub:

\"Hub

Although the BERT and RoBERTa families of models are the most downloaded, we’ll use a model called DistilBERT that can be trained much faster with little to no loss in downstream performance. This model was trained using a special technique called knowledge distillation, where a large “teacher model” like BERT is used to guide the training of a “student model” that has far fewer parameters. An explanation of the details of knowledge distillation would take us too far afield in this section, but if you’re interested you can read all about it in Natural Language Processing with Transformers (colloquially known as the Transformers textbook).

Let’s go ahead and download DistilBERT using the AutoModelForMaskedLM class:

from transformers import AutoModelForMaskedLM\n\nmodel_checkpoint = \"distilbert-base-uncased\"\nmodel = AutoModelForMaskedLM.from_pretrained(model_checkpoint)

We can see how many parameters this model has by calling the num_parameters() method:

distilbert_num_parameters = model.num_parameters() / 1_000_000\nprint(f\"'>>> DistilBERT number of parameters: {round(distilbert_num_parameters)}M'\")\nprint(f\"'>>> BERT number of parameters: 110M'\")
'>>> DistilBERT number of parameters: 67M'\n'>>> BERT number of parameters: 110M'

With around 67 million parameters, DistilBERT is approximately two times smaller than the BERT base model, which roughly translates into a two-fold speedup in training — nice! Let’s now see what kinds of tokens this model predicts are the most likely completions of a small sample of text:

text = \"This is a great [MASK].\"

As humans, we can imagine many possibilities for the [MASK] token, such as “day”, “ride”, or “painting”. For pretrained models, the predictions depend on the corpus the model was trained on, since it learns to pick up the statistical patterns present in the data. Like BERT, DistilBERT was pretrained on the English Wikipedia and BookCorpus datasets, so we expect the predictions for [MASK] to reflect these domains. To predict the mask we need DistilBERT’s tokenizer to produce the inputs for the model, so let’s download that from the Hub as well:

from transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

With a tokenizer and a model, we can now pass our text example to the model, extract the logits, and print out the top 5 candidates:

import torch\n\ninputs = tokenizer(text, return_tensors=\"pt\")\ntoken_logits = model(**inputs).logits\n# Find the location of [MASK] and extract its logits\nmask_token_index = torch.where(inputs[\"input_ids\"] == tokenizer.mask_token_id)[1]\nmask_token_logits = token_logits[0, mask_token_index, :]\n# Pick the [MASK] candidates with the highest logits\ntop_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()\n\nfor token in top_5_tokens:\n    print(f\"'>>> {text.replace(tokenizer.mask_token, tokenizer.decode([token]))}'\")
'>>> This is a great deal.'\n'>>> This is a great success.'\n'>>> This is a great adventure.'\n'>>> This is a great idea.'\n'>>> This is a great feat.'

We can see from the outputs that the model’s predictions refer to everyday terms, which is perhaps not surprising given that the model was pretrained largely on English Wikipedia. Let’s see how we can change this domain to something a bit more niche — highly polarized movie reviews!

The dataset

To showcase domain adaptation, we’ll use the famous Large Movie Review Dataset (or IMDb for short), which is a corpus of movie reviews that is often used to benchmark sentiment analysis models. By fine-tuning DistilBERT on this corpus, we expect the language model will adapt its vocabulary from the factual data of Wikipedia that it was pretrained on to the more subjective elements of movie reviews. We can get the data from the Hugging Face Hub with the load_dataset() function from 🤗 Datasets:

from datasets import load_dataset\n\nimdb_dataset = load_dataset(\"imdb\")\nimdb_dataset
DatasetDict({\n    train: Dataset({\n        features: ['text', 'label'],\n        num_rows: 25000\n    })\n    test: Dataset({\n        features: ['text', 'label'],\n        num_rows: 25000\n    })\n    unsupervised: Dataset({\n        features: ['text', 'label'],\n        num_rows: 50000\n    })\n})

We can see that the train and test splits each consist of 25,000 reviews, while there is an unlabeled split called unsupervised that contains 50,000 reviews. Let’s take a look at a few samples to get an idea of what kind of text we’re dealing with. As we’ve done in previous chapters of the course, we’ll chain the Dataset.shuffle() and Dataset.select() functions to create a random sample:

sample = imdb_dataset[\"train\"].shuffle(seed=42).select(range(3))\n\nfor row in sample:\n    print(f\"\\n'>>> Review: {row['text']}'\")\n    print(f\"'>>> Label: {row['label']}'\")
\n'>>> Review: This is your typical Priyadarshan movie--a bunch of loony characters out on some silly mission. His signature climax has the entire cast of the film coming together and fighting each other in some crazy moshpit over hidden money. Whether it is a winning lottery ticket in Malamaal Weekly, black money in Hera Pheri, \"kodokoo\" in Phir Hera Pheri, etc., etc., the director is becoming ridiculously predictable. Don\\'t get me wrong; as clichéd and preposterous his movies may be, I usually end up enjoying the comedy. However, in most his previous movies there has actually been some good humor, (Hungama and Hera Pheri being noteworthy ones). Now, the hilarity of his films is fading as he is using the same formula over and over again.<br /><br />Songs are good. Tanushree Datta looks awesome. Rajpal Yadav is irritating, and Tusshar is not a whole lot better. Kunal Khemu is OK, and Sharman Joshi is the best.'\n'>>> Label: 0'\n\n'>>> Review: Okay, the story makes no sense, the characters lack any dimensionally, the best dialogue is ad-libs about the low quality of movie, the cinematography is dismal, and only editing saves a bit of the muddle, but Sam\" Peckinpah directed the film. Somehow, his direction is not enough. For those who appreciate Peckinpah and his great work, this movie is a disappointment. Even a great cast cannot redeem the time the viewer wastes with this minimal effort.<br /><br />The proper response to the movie is the contempt that the director San Peckinpah, James Caan, Robert Duvall, Burt Young, Bo Hopkins, Arthur Hill, and even Gig Young bring to their work. Watch the great Peckinpah films. Skip this mess.'\n'>>> Label: 0'\n\n'>>> Review: I saw this movie at the theaters when I was about 6 or 7 years old. I loved it then, and have recently come to own a VHS version. <br /><br />My 4 and 6 year old children love this movie and have been asking again and again to watch it. <br /><br />I have enjoyed watching it again too. Though I have to admit it is not as good on a little TV.<br /><br />I do not have older children so I do not know what they would think of it. <br /><br />The songs are very cute. My daughter keeps singing them over and over.<br /><br />Hope this helps.'\n'>>> Label: 1'

Yep, these are certainly movie reviews, and if you’re old enough you may even understand the comment in the last review about owning a VHS version 😜! Although we won’t need the labels for language modeling, we can already see that a 0 denotes a negative review, while a 1 corresponds to a positive one.

✏️ Try it out! Create a random sample of the unsupervised split and verify that the labels are neither 0 nor 1. While you’re at it, you could also check that the labels in the train and test splits are indeed 0 or 1 — this is a useful sanity check that every NLP practitioner should perform at the start of a new project!
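If you want a concrete starting point for this check, a minimal sketch could look like the one below (it assumes the imdb_dataset object loaded above; the exact placeholder value used in the unsupervised split is whatever the dataset ships with):

```
# Sketch of the sanity check suggested above (assumes `imdb_dataset` is already loaded)
unsup_sample = imdb_dataset["unsupervised"].shuffle(seed=42).select(range(1000))
print(set(unsup_sample["label"]))  # should contain neither 0 nor 1

for split in ("train", "test"):
    labels = set(imdb_dataset[split].shuffle(seed=42).select(range(1000))["label"])
    print(split, labels)  # should be a subset of {0, 1}
```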

Now that we’ve had a quick look at the data, let’s dive into preparing it for masked language modeling. As we’ll see, there are some additional steps that one needs to take compared to the sequence classification tasks we saw in Chapter 3. Let’s go!

Preprocessing the data

For both auto-regressive and masked language modeling, a common preprocessing step is to concatenate all the examples and then split the whole corpus into chunks of equal size. This is quite different from our usual approach, where we simply tokenize individual examples. Why concatenate everything together? The reason is that individual examples might get truncated if they’re too long, and that would result in losing information that might be useful for the language modeling task!

So to get started, we’ll first tokenize our corpus as usual, but without setting the truncation=True option in our tokenizer. We’ll also grab the word IDs if they are available (which they will be if we’re using a fast tokenizer, as described in Chapter 6), as we will need them later on to do whole word masking. We’ll wrap this in a simple function, and while we’re at it we’ll remove the text and label columns since we don’t need them any longer:

def tokenize_function(examples):\n    result = tokenizer(examples[\"text\"])\n    if tokenizer.is_fast:\n        result[\"word_ids\"] = [result.word_ids(i) for i in range(len(result[\"input_ids\"]))]\n    return result\n\n\n# Use batched=True to activate fast multithreading!\ntokenized_datasets = imdb_dataset.map(\n    tokenize_function, batched=True, remove_columns=[\"text\", \"label\"]\n)\ntokenized_datasets
DatasetDict({\n    train: Dataset({\n        features: ['attention_mask', 'input_ids', 'word_ids'],\n        num_rows: 25000\n    })\n    test: Dataset({\n        features: ['attention_mask', 'input_ids', 'word_ids'],\n        num_rows: 25000\n    })\n    unsupervised: Dataset({\n        features: ['attention_mask', 'input_ids', 'word_ids'],\n        num_rows: 50000\n    })\n})

Since DistilBERT is a BERT-like model, we can see that the encoded texts consist of the input_ids and attention_mask that we’ve seen in other chapters, as well as the word_ids we added.

Now that we’ve tokenized our movie reviews, the next step is to group them all together and split the result into chunks. But how big should these chunks be? This will ultimately be determined by the amount of GPU memory that you have available, but a good starting point is to see what the model’s maximum context size is. This can be inferred by inspecting the model_max_length attribute of the tokenizer:

tokenizer.model_max_length
512

This value is derived from the tokenizer_config.json file associated with a checkpoint; in this case we can see that the context size is 512 tokens, just like with BERT.

✏️ Try it out! Some Transformer models, like BigBird and Longformer, have a much longer context length than BERT and other early Transformer models. Instantiate the tokenizer for one of these checkpoints and verify that the model_max_length agrees with what’s quoted on its model card.
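One possible way to do this is sketched below; the Longformer checkpoint name is just one example of a long-context model you could pick:

```
# Check the context size of a long-context model (checkpoint name chosen as an example)
from transformers import AutoTokenizer

long_tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
print(long_tokenizer.model_max_length)  # compare this with the number quoted on the model card
```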

So, in order to run our experiments on GPUs like those found on Google Colab, we’ll pick something a bit smaller that can fit in memory:

chunk_size = 128

Note that using a small chunk size can be detrimental in real-world scenarios, so you should use a size that corresponds to the use case you will apply your model to.

Now comes the fun part. To show how the concatenation works, let’s take a few reviews from our tokenized training set and print out the number of tokens per review:

# Slicing produces a list of lists for each feature\ntokenized_samples = tokenized_datasets[\"train\"][:3]\n\nfor idx, sample in enumerate(tokenized_samples[\"input_ids\"]):\n    print(f\"'>>> Review {idx} length: {len(sample)}'\")
'>>> Review 0 length: 200'\n'>>> Review 1 length: 559'\n'>>> Review 2 length: 192'

We can then concatenate all these examples with a simple dictionary comprehension, as follows:

concatenated_examples = {\n    k: sum(tokenized_samples[k], []) for k in tokenized_samples.keys()\n}\ntotal_length = len(concatenated_examples[\"input_ids\"])\nprint(f\"'>>> Concatenated reviews length: {total_length}'\")
'>>> Concatenated reviews length: 951'

Great, the total length checks out — so now let’s split the concatenated reviews into chunks of the size given by chunk_size. To do so, we iterate over the features in concatenated_examples and use a list comprehension to create slices of each feature. The result is a dictionary of chunks for each feature:

chunks = {\n    k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]\n    for k, t in concatenated_examples.items()\n}\n\nfor chunk in chunks[\"input_ids\"]:\n    print(f\"'>>> Chunk length: {len(chunk)}'\")
'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 128'\n'>>> Chunk length: 55'

As you can see in this example, the last chunk will generally be smaller than the maximum chunk size. There are two main strategies for dealing with this:

  • Drop the last chunk if it’s smaller than chunk_size.
  • Pad the last chunk until its length equals chunk_size.

We’ll take the first approach here, so let’s wrap all of the above logic in a single function that we can apply to our tokenized datasets:

def group_texts(examples):\n    # Concatenate all texts\n    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\n    # Compute length of concatenated texts\n    total_length = len(concatenated_examples[list(examples.keys())[0]])\n    # We drop the last chunk if it's smaller than chunk_size\n    total_length = (total_length // chunk_size) * chunk_size\n    # Split by chunks of max_len\n    result = {\n        k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]\n        for k, t in concatenated_examples.items()\n    }\n    # Create a new labels column\n    result[\"labels\"] = result[\"input_ids\"].copy()\n    return result

Note that in the last step of group_texts() we create a new labels column which is a copy of the input_ids one. As we’ll see shortly, that’s because in masked language modeling the objective is to predict randomly masked tokens in the input batch, and by creating a labels column we provide the ground truth for our language model to learn from.
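If you preferred the second strategy (padding the last chunk rather than dropping it), a variant of this function could look like the following sketch. It is not the approach used in the rest of this section, and it assumes the tokenizer and chunk_size defined above:

```
def group_texts_with_padding(examples):
    # Concatenate all texts, as before
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples["input_ids"])
    # Split into chunks without dropping the remainder
    result = {
        k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
        for k, t in concatenated_examples.items()
    }
    # Pad the final chunk of each feature up to chunk_size
    pad_length = chunk_size - len(result["input_ids"][-1])
    if pad_length > 0:
        result["input_ids"][-1] = result["input_ids"][-1] + [tokenizer.pad_token_id] * pad_length
        result["attention_mask"][-1] = result["attention_mask"][-1] + [0] * pad_length
        result["word_ids"][-1] = result["word_ids"][-1] + [None] * pad_length
    result["labels"] = result["input_ids"].copy()
    return result
```

Positions padded this way carry an attention mask of 0 and a word_ids entry of None, so the masking we apply later in this section should leave them alone.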

Let’s now apply group_texts() to our tokenized datasets using our trusty Dataset.map() function:

lm_datasets = tokenized_datasets.map(group_texts, batched=True)\nlm_datasets
DatasetDict({\n    train: Dataset({\n        features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n        num_rows: 61289\n    })\n    test: Dataset({\n        features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n        num_rows: 59905\n    })\n    unsupervised: Dataset({\n        features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n        num_rows: 122963\n    })\n})

You can see that grouping and then chunking the texts has produced many more examples than our original 25,000 for the train and test splits. That’s because we now have examples involving contiguous tokens that span across multiple examples from the original corpus. You can see this explicitly by looking for the special [SEP] and [CLS] tokens in one of the chunks:

tokenizer.decode(lm_datasets[\"train\"][1][\"input_ids\"])
\".... at.......... high. a classic line : inspector : i'm here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn't! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. most people think of the homeless\"

In this example you can see two overlapping movie reviews, one about a high school movie and the other about homelessness. Let’s also check out what the labels look like for masked language modeling:

tokenizer.decode(lm_datasets[\"train\"][1][\"labels\"])
\".... at.......... high. a classic line : inspector : i'm here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn't! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. most people think of the homeless\"

As expected from our group_texts() function above, this looks identical to the decoded input_ids — but then how can our model possibly learn anything? We’re missing a key step: inserting [MASK] tokens at random positions in the inputs! Let’s see how we can do this on the fly during fine-tuning using a special data collator.

Fine-tuning DistilBERT with the Trainer API

Fine-tuning a masked language model is almost identical to fine-tuning a sequence classification model, like we did in Chapter 3. The only difference is that we need a special data collator that can randomly mask some of the tokens in each batch of texts. Fortunately, 🤗 Transformers comes prepared with a dedicated DataCollatorForLanguageModeling for just this task. We just have to pass it the tokenizer and an mlm_probability argument that specifies what fraction of the tokens to mask. We’ll pick 15%, which is the amount used for BERT and a common choice in the literature:

from transformers import DataCollatorForLanguageModeling\n\ndata_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

To see how the random masking works, let’s feed a few examples to the data collator. Since it expects a list of dicts, where each dict represents a single chunk of contiguous text, we first iterate over the dataset before feeding the batch to the collator. We remove the \"word_ids\" key for this data collator as it does not expect it:

samples = [lm_datasets[\"train\"][i] for i in range(2)]\nfor sample in samples:\n    _ = sample.pop(\"word_ids\")\n\nfor chunk in data_collator(samples)[\"input_ids\"]:\n    print(f\"\\n'>>> {tokenizer.decode(chunk)}'\")
'>>> [CLS] bromwell [MASK] is a cartoon comedy. it ran at the same [MASK] as some other [MASK] about school life, [MASK] as \" teachers \". [MASK] [MASK] [MASK] in the teaching [MASK] lead [MASK] to believe that bromwell high\\'[MASK] satire is much closer to reality than is \" teachers \". the scramble [MASK] [MASK] financially, the [MASK]ful students whogn [MASK] right through [MASK] pathetic teachers\\'pomp, the pettiness of the whole situation, distinction remind me of the schools i knew and their students. when i saw [MASK] episode in [MASK] a student repeatedly tried to burn down the school, [MASK] immediately recalled. [MASK]...'\n\n'>>> .... at.. [MASK]... [MASK]... high. a classic line plucked inspector : i\\'[MASK] here to [MASK] one of your [MASK]. student : welcome to bromwell [MASK]. i expect that many adults of my age think that [MASK]mwell [MASK] is [MASK] fetched. what a pity that it isn\\'t! [SEP] [CLS] [MASK]ness ( or [MASK]lessness as george 宇in stated )公 been an issue for years but never [MASK] plan to help those on the street that were once considered human [MASK] did everything from going to school, [MASK], [MASK] vote for the matter. most people think [MASK] the homeless'

Nice, it worked! We can see that the [MASK] token has been randomly inserted at various locations in our text. These will be the tokens which our model will have to predict during training — and the beauty of the data collator is that it will randomize the [MASK] insertion with every batch!

✏️ Try it out! Run the code snippet above several times to see the random masking happen in front of your very eyes! Also replace the tokenizer.decode() method with tokenizer.convert_ids_to_tokens() to see that sometimes a single token from a given word is masked, and not the others.
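For instance, the token-level check could be as simple as this, reusing samples and data_collator from the previous cell:

```
# Inspect the masking token by token instead of as decoded text
batch = data_collator(samples)
for chunk in batch["input_ids"]:
    print(tokenizer.convert_ids_to_tokens(chunk))
```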

One side effect of random masking is that our evaluation metrics will not be deterministic when using the Trainer, since we use the same data collator for the training and test sets. We’ll see later, when we look at fine-tuning with 🤗 Accelerate, how we can use the flexibility of a custom evaluation loop to freeze the randomness.

When training models for masked language modeling, one technique that can be used is to mask whole words together, not just individual tokens. This approach is called whole word masking. If we want to use whole word masking, we will need to build a data collator ourselves. A data collator is just a function that takes a list of samples and converts them into a batch, so let’s do this now! We’ll use the word IDs computed earlier to make a map between word indices and the corresponding tokens, then randomly decide which words to mask and apply that mask on the inputs. Note that the labels are all -100 except for the ones corresponding to masked words.

import collections\nimport numpy as np\n\nfrom transformers import default_data_collator\n\nwwm_probability = 0.2\n\n\ndef whole_word_masking_data_collator(features):\n    for feature in features:\n        word_ids = feature.pop(\"word_ids\")\n\n        # Create a map between words and corresponding token indices\n        mapping = collections.defaultdict(list)\n        current_word_index = -1\n        current_word = None\n        for idx, word_id in enumerate(word_ids):\n            if word_id is not None:\n                if word_id != current_word:\n                    current_word = word_id\n                    current_word_index += 1\n                mapping[current_word_index].append(idx)\n\n        # Randomly mask words\n        mask = np.random.binomial(1, wwm_probability, (len(mapping),))\n        input_ids = feature[\"input_ids\"]\n        labels = feature[\"labels\"]\n        new_labels = [-100] * len(labels)\n        for word_id in np.where(mask)[0]:\n            word_id = word_id.item()\n            for idx in mapping[word_id]:\n                new_labels[idx] = labels[idx]\n                input_ids[idx] = tokenizer.mask_token_id\n        feature[\"labels\"] = new_labels\n\n    return default_data_collator(features)

Next, we can try it on the same samples as before:

samples = [lm_datasets[\"train\"][i] for i in range(2)]\nbatch = whole_word_masking_data_collator(samples)\n\nfor chunk in batch[\"input_ids\"]:\n    print(f\"\\n'>>> {tokenizer.decode(chunk)}'\")
'>>> [CLS] bromwell high is a cartoon comedy [MASK] it ran at the same time as some other programs about school life, such as \" teachers \". my 35 years in the teaching profession lead me to believe that bromwell high\\'s satire is much closer to reality than is \" teachers \". the scramble to survive financially, the insightful students who can see right through their pathetic teachers\\'pomp, the pettiness of the whole situation, all remind me of the schools i knew and their students. when i saw the episode in which a student repeatedly tried to burn down the school, i immediately recalled.....'\n\n'>>> .... [MASK] [MASK] [MASK] [MASK]....... high. a classic line : inspector : i\\'m here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn\\'t! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. most people think of the homeless'

✏️ Try it out! Run the code snippet above several times to see the random masking happen in front of your very eyes! Also replace the tokenizer.decode() method with tokenizer.convert_ids_to_tokens() to see that the tokens from a given word are always masked together.
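The analogous token-level check for the whole word masking collator might look like this (we fetch fresh samples because the collator modifies its inputs in place):

```
samples = [lm_datasets["train"][i] for i in range(2)]
batch = whole_word_masking_data_collator(samples)
for chunk in batch["input_ids"]:
    print(tokenizer.convert_ids_to_tokens(chunk))  # all tokens of a chosen word should be [MASK]
```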

Now that we have two data collators, the rest of the fine-tuning steps are standard. Training can take a while on Google Colab if you’re not lucky enough to score a mythical P100 GPU 😭, so we’ll first downsample the size of the training set to a few thousand examples. Don’t worry, we’ll still get a pretty decent language model! A quick way to downsample a dataset in 🤗 Datasets is via the Dataset.train_test_split() function that we saw in Chapter 5:

train_size = 10_000\ntest_size = int(0.1 * train_size)\n\ndownsampled_dataset = lm_datasets[\"train\"].train_test_split(\n    train_size=train_size, test_size=test_size, seed=42\n)\ndownsampled_dataset
DatasetDict({\n    train: Dataset({\n        features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n        num_rows: 10000\n    })\n    test: Dataset({\n        features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n        num_rows: 1000\n    })\n})

This has automatically created new train and test splits, with the training set size set to 10,000 examples and the validation set to 10% of that — feel free to increase this if you have a beefy GPU! The next thing we need to do is log in to the Hugging Face Hub. If you’re running this code in a notebook, you can do so with the following utility function:

from huggingface_hub import notebook_login\n\nnotebook_login()

which will display a widget where you can enter your credentials. Alternatively, you can run:

huggingface-cli login

in your favorite terminal and log in there.

Once we’re logged in, we can specify the arguments for the Trainer:

from transformers import TrainingArguments\n\nbatch_size = 64\n# Show the training loss with every epoch\nlogging_steps = len(downsampled_dataset[\"train\"]) // batch_size\nmodel_name = model_checkpoint.split(\"/\")[-1]\n\ntraining_args = TrainingArguments(\n    output_dir=f\"{model_name}-finetuned-imdb\",\n    overwrite_output_dir=True,\n    evaluation_strategy=\"epoch\",\n    learning_rate=2e-5,\n    weight_decay=0.01,\n    per_device_train_batch_size=batch_size,\n    per_device_eval_batch_size=batch_size,\n    push_to_hub=True,\n    fp16=True,\n    logging_steps=logging_steps,\n)

Here we tweaked a few of the default options, including logging_steps to ensure we track the training loss with each epoch. We’ve also used fp16=True to enable mixed-precision training, which gives us another boost in speed. By default, the Trainer will remove any columns that are not part of the model’s forward() method. This means that if you’re using the whole word masking collator, you’ll also need to set remove_unused_columns=False to ensure we don’t lose the word_ids column during training.
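As a hypothetical illustration, the arguments for a whole word masking run could look like the snippet below; everything matches the arguments above except the output directory name (an arbitrary choice) and the final line:

```
training_args_wwm = TrainingArguments(
    output_dir=f"{model_name}-finetuned-imdb-wwm",  # hypothetical name for the variant run
    overwrite_output_dir=True,
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    push_to_hub=True,
    fp16=True,
    logging_steps=logging_steps,
    remove_unused_columns=False,  # keep the word_ids column for the custom collator
)
```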

Note that you can specify the name of the repository you want to push to with the hub_model_id argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the huggingface-course organization, we added hub_model_id=\"huggingface-course/distilbert-finetuned-imdb\" to TrainingArguments. By default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be \"lewtun/distilbert-finetuned-imdb\".

We now have all the ingredients to instantiate the Trainer. Here we just use the standard data_collator, but you can try the whole word masking collator and compare the results as an exercise:

from transformers import Trainer\n\ntrainer = Trainer(\n    model=model,\n    args=training_args,\n    train_dataset=downsampled_dataset[\"train\"],\n    eval_dataset=downsampled_dataset[\"test\"],\n    data_collator=data_collator,\n    tokenizer=tokenizer,\n)

We’re now ready to run trainer.train() — but before doing so let’s briefly look at perplexity, which is a common metric to evaluate the performance of language models.

Perplexity for language models

Unlike other tasks like text classification or question answering where we’re given a labeled corpus to train on, with language modeling we don’t have any explicit labels. So how do we determine what makes a good language model? Like with the autocorrect feature in your phone, a good language model is one that assigns high probabilities to sentences that are grammatically correct, and low probabilities to nonsense sentences. To give you a better idea of what this looks like, you can find whole sets of “autocorrect fails” online, where the model in a person’s phone has produced some rather funny (and often inappropriate) completions!

Assuming our test set consists mostly of sentences that are grammatically correct, one way to measure the quality of our language model is to calculate the probabilities it assigns to the next word in all the sentences of the test set. High probabilities indicate that the model is not “surprised” or “perplexed” by the unseen examples, and suggest that it has learned the basic patterns of grammar in the language. There are various mathematical definitions of perplexity, but the one we’ll use defines it as the exponential of the cross-entropy loss. Thus, we can calculate the perplexity of our pretrained model by using the Trainer.evaluate() function to compute the cross-entropy loss on the test set and then taking the exponential of the result:

import math\n\neval_results = trainer.evaluate()\nprint(f\">>> Perplexity: {math.exp(eval_results['eval_loss']):.2f}\")
>>> Perplexity: 21.75
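For reference, if L is the average cross-entropy loss that Trainer.evaluate() reports, the perplexity printed above is simply:

$$
\mathrm{PPL} = \exp(L) = \exp\Big(-\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\big(x_i \mid \tilde{x}\big)\Big)
$$

where, for a masked language model, the sum runs over the N masked tokens and \tilde{x} denotes the corrupted input the model actually sees.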

A lower perplexity score means a better language model, and we can see here that our starting model has a somewhat large value. Let’s see if we can lower it by fine-tuning! To do that, we first run the training loop:

trainer.train()

and then compute the resulting perplexity on the test set as before:

eval_results = trainer.evaluate()\nprint(f\">>> Perplexity: {math.exp(eval_results['eval_loss']):.2f}\")
>>> Perplexity: 11.32

Nice — this is quite a reduction in perplexity, which tells us the model has learned something about the domain of movie reviews!

Once training is finished, we can push the model card with the training information to the Hub (the checkpoints are saved during training itself):

trainer.push_to_hub()

✏️ Your turn! Run the training above after changing the data collator to the whole word masking collator. Do you get better results?
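A rough sketch of that run, reusing the hypothetical training_args_wwm from earlier (which sets remove_unused_columns=False so the word_ids column reaches the collator), might look like this:

```
trainer_wwm = Trainer(
    model=AutoModelForMaskedLM.from_pretrained(model_checkpoint),  # start again from a fresh checkpoint
    args=training_args_wwm,
    train_dataset=downsampled_dataset["train"],
    eval_dataset=downsampled_dataset["test"],
    data_collator=whole_word_masking_data_collator,
    tokenizer=tokenizer,
)
trainer_wwm.train()
print(f">>> Perplexity: {math.exp(trainer_wwm.evaluate()['eval_loss']):.2f}")
```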

In our use case we didn’t need to do anything special with the training loop, but in some cases you might need to implement some custom logic. For these applications, you can use 🤗 Accelerate — let’s take a look!

Fine-tuning DistilBERT with 🤗 Accelerate

As we saw with the Trainer, fine-tuning a masked language model is very similar to the text classification example from Chapter 3. In fact, the only subtlety is the use of a special data collator, and we’ve already covered that earlier in this section!

However, we saw that DataCollatorForLanguageModeling also applies random masking with each evaluation, so we’ll see some fluctuations in our perplexity scores with each training run. One way to eliminate this source of randomness is to apply the masking once on the whole test set, and then use the default data collator in 🤗 Transformers to collect the batches during evaluation. To see how this works, let’s implement a simple function that applies the masking on a batch, similar to our first encounter with DataCollatorForLanguageModeling:

def insert_random_mask(batch):\n    features = [dict(zip(batch, t)) for t in zip(*batch.values())]\n    masked_inputs = data_collator(features)\n    # Create a new \"masked\" column for each column in the dataset\n    return {\"masked_\" + k: v.numpy() for k, v in masked_inputs.items()}

Next, we’ll apply this function to our test set and drop the unmasked columns so we can replace them with the masked ones. You can use whole word masking by replacing the data_collator above with the appropriate one, in which case you should remove the first line here:

downsampled_dataset = downsampled_dataset.remove_columns([\"word_ids\"])\neval_dataset = downsampled_dataset[\"test\"].map(\n    insert_random_mask,\n    batched=True,\n    remove_columns=downsampled_dataset[\"test\"].column_names,\n)\neval_dataset = eval_dataset.rename_columns(\n    {\n        \"masked_input_ids\": \"input_ids\",\n        \"masked_attention_mask\": \"attention_mask\",\n        \"masked_labels\": \"labels\",\n    }\n)
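For example, a hypothetical whole word masking version of that function could be the following; in that case, keep the word_ids column (i.e. skip the remove_columns line above), since the collator needs it:

```
def insert_whole_word_mask(batch):
    features = [dict(zip(batch, t)) for t in zip(*batch.values())]
    masked_inputs = whole_word_masking_data_collator(features)
    # Create a new "masked" column for each column in the dataset
    return {"masked_" + k: v.numpy() for k, v in masked_inputs.items()}
```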

We can then set up the dataloaders as usual, but we’ll use the default_data_collator from 🤗 Transformers for the evaluation set:

from torch.utils.data import DataLoader\nfrom transformers import default_data_collator\n\nbatch_size = 64\ntrain_dataloader = DataLoader(\n    downsampled_dataset[\"train\"],\n    shuffle=True,\n    batch_size=batch_size,\n    collate_fn=data_collator,\n)\neval_dataloader = DataLoader(\n    eval_dataset, batch_size=batch_size, collate_fn=default_data_collator\n)

From here, we follow the standard steps with 🤗 Accelerate. The first order of business is to load a fresh version of the pretrained model:

model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)

Then we need to specify the optimizer; we’ll use the standard AdamW:

from torch.optim import AdamW\n\noptimizer = AdamW(model.parameters(), lr=5e-5)

With these objects, we can now prepare everything for training with the Accelerator object:

from accelerate import Accelerator\n\naccelerator = Accelerator()\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n    model, optimizer, train_dataloader, eval_dataloader\n)

Now that our model, optimizer, and dataloaders are configured, we can specify the learning rate scheduler as follows:

from transformers import get_scheduler\n\nnum_train_epochs = 3\nnum_update_steps_per_epoch = len(train_dataloader)\nnum_training_steps = num_train_epochs * num_update_steps_per_epoch\n\nlr_scheduler = get_scheduler(\n    \"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=0,\n    num_training_steps=num_training_steps,\n)

There is just one last thing to do before training: create a model repository on the Hugging Face Hub! We can use the 🤗 Hub library to first generate the full name of our repo:

from huggingface_hub import get_full_repo_name\n\nmodel_name = \"distilbert-base-uncased-finetuned-imdb-accelerate\"\nrepo_name = get_full_repo_name(model_name)\nrepo_name
'lewtun/distilbert-base-uncased-finetuned-imdb-accelerate'

then create and clone the repository using the Repository class from 🤗 Hub:

from huggingface_hub import Repository\n\noutput_dir = model_name\nrepo = Repository(output_dir, clone_from=repo_name)

With that done, it’s just a simple matter of writing out the full training and evaluation loop:

from tqdm.auto import tqdm\nimport torch\nimport math\n\nprogress_bar = tqdm(range(num_training_steps))\n\nfor epoch in range(num_train_epochs):\n    # Training\n    model.train()\n    for batch in train_dataloader:\n        outputs = model(**batch)\n        loss = outputs.loss\n        accelerator.backward(loss)\n\n        optimizer.step()\n        lr_scheduler.step()\n        optimizer.zero_grad()\n        progress_bar.update(1)\n\n    # Evaluation\n    model.eval()\n    losses = []\n    for step, batch in enumerate(eval_dataloader):\n        with torch.no_grad():\n            outputs = model(**batch)\n\n        loss = outputs.loss\n        losses.append(accelerator.gather(loss.repeat(batch_size)))\n\n    losses = torch.cat(losses)\n    losses = losses[: len(eval_dataset)]\n    try:\n        perplexity = math.exp(torch.mean(losses))\n    except OverflowError:\n        perplexity = float(\"inf\")\n\n    print(f\">>> Epoch {epoch}: Perplexity: {perplexity}\")\n\n    # Save and upload\n    accelerator.wait_for_everyone()\n    unwrapped_model = accelerator.unwrap_model(model)\n    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)\n    if accelerator.is_main_process:\n        tokenizer.save_pretrained(output_dir)\n        repo.push_to_hub(\n            commit_message=f\"Training in progress epoch {epoch}\", blocking=False\n        )
>>> Epoch 0: Perplexity: 11.397545307900472\n>>> Epoch 1: Perplexity: 10.904909330983092\n>>> Epoch 2: Perplexity: 10.729503505340409

Cool, we’ve been able to evaluate perplexity with each epoch, and because the evaluation masking was applied only once, the scores are directly comparable across training runs!

Using our fine-tuned model

You can interact with your fine-tuned model either by using its widget on the Hub or locally with the pipeline from 🤗 Transformers. Let’s use the latter to download our model using the fill-mask pipeline:

from transformers import pipeline\n\nmask_filler = pipeline(\n    \"fill-mask\", model=\"huggingface-course/distilbert-base-uncased-finetuned-imdb\"\n)

We can then feed the pipeline our sample text of “This is a great [MASK]” and see what the top 5 predictions are:

preds = mask_filler(text)\n\nfor pred in preds:\n    print(f\">>> {pred['sequence']}\")
'>>> this is a great movie.'\n'>>> this is a great film.'\n'>>> this is a great story.'\n'>>> this is a great movies.'\n'>>> this is a great character.'

Neat — our model has clearly adapted its weights to predict words that are more strongly associated with movies!

This wraps up our first experiment with training a language model. In section 6 you’ll learn how to train an auto-regressive model like GPT-2 from scratch; head over there if you’d like to see how you can pretrain your very own Transformer model!

✏️ Try it out! To quantify the benefits of domain adaptation, fine-tune a classifier on the IMDb labels for both the pretrained and fine-tuned DistilBERT checkpoints. If you need a refresher on text classification, check out Chapter 3.
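If you’d like a rough starting point for this comparison, here is one possible sketch. The subsample sizes and hyperparameters are arbitrary choices, and the second checkpoint should be whichever repo you pushed above (the huggingface-course one is used as a stand-in):

```
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

accuracy = evaluate.load("accuracy")
imdb = load_dataset("imdb")


def train_classifier(checkpoint):
    tok = AutoTokenizer.from_pretrained(checkpoint)
    encoded = {
        split: imdb[split].map(lambda ex: tok(ex["text"], truncation=True), batched=True)
        for split in ("train", "test")
    }
    # A fresh classification head is added on top of whichever checkpoint we pass in
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    def compute_metrics(eval_pred):
        logits, labels = eval_pred
        return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

    args = TrainingArguments(
        output_dir=f"clf-{checkpoint.split('/')[-1]}",
        evaluation_strategy="epoch",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
        eval_dataset=encoded["test"].shuffle(seed=42).select(range(2000)),
        tokenizer=tok,
        compute_metrics=compute_metrics,
    )
    trainer.train()
    return trainer.evaluate()


print(train_classifier("distilbert-base-uncased"))
print(train_classifier("huggingface-course/distilbert-base-uncased-finetuned-imdb"))
```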

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:28.507Z"} {"title":"Translation - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter7/4?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#translation)Translation\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-7-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter7/section4_pt.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter7/section4_pt.ipynb)\n\nLet’s now dive into translation. This is another [sequence-to-sequence task](/course/chapter1/7), which means it’s a problem that can be formulated as going from one sequence to another. In that sense the problem is pretty close to [summarization](/course/chapter7/6), and you could adapt what we will see here to other sequence-to-sequence problems such as:\n\n- **Style transfer**: Creating a model that _translates_ texts written in a certain style to another (e.g., formal to casual or Shakespearean English to modern English)\n- **Generative question answering**: Creating a model that generates answers to questions, given a context\n\nIf you have a big enough corpus of texts in two (or more) languages, you can train a new translation model from scratch like we will in the section on [causal language modeling](/course/chapter7/6). 
It will be faster, however, to fine-tune an existing translation model, be it a multilingual one like mT5 or mBART that you want to fine-tune to a specific language pair, or even a model specialized for translation from one language to another that you want to fine-tune to your specific corpus.\n\nIn this section, we will fine-tune a Marian model pretrained to translate from English to French (since a lot of Hugging Face employees speak both those languages) on the [KDE4 dataset](https://huggingface.co/datasets/kde4), which is a dataset of localized files for the [KDE apps](https://apps.kde.org/). The model we will use has been pretrained on a large corpus of French and English texts taken from the [Opus dataset](https://opus.nlpl.eu/), which actually contains the KDE4 dataset. But even if the pretrained model we use has seen that data during its pretraining, we will see that we can get a better version of it after fine-tuning.\n\nOnce we’re finished, we will have a model able to make predictions like this one:\n\n [![One-hot encoded labels for question answering.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/modeleval-marian-finetuned-kde4-en-to-fr.png) ![One-hot encoded labels for question answering.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/modeleval-marian-finetuned-kde4-en-to-fr-dark.png)](/huggingface-course/marian-finetuned-kde4-en-to-fr) \n\nAs in the previous sections, you can find the actual model that we’ll train and upload to the Hub using the code below and double-check its predictions [here](https://huggingface.co/huggingface-course/marian-finetuned-kde4-en-to-fr?text=This+plugin+allows+you+to+automatically+translate+web+pages+between+several+languages.).\n\n## [](#preparing-the-data)Preparing the data\n\nTo fine-tune or train a translation model from scratch, we will need a dataset suitable for the task. As mentioned previously, we’ll use the [KDE4 dataset](https://huggingface.co/datasets/kde4) in this section, but you can adapt the code to use your own data quite easily, as long as you have pairs of sentences in the two languages you want to translate from and into. Refer back to [Chapter 5](/course/chapter5) if you need a reminder of how to load your custom data in a `Dataset`.\n\n### [](#the-kde4-dataset)The KDE4 dataset\n\nAs usual, we download our dataset using the `load_dataset()` function:\n\n```\nfrom datasets import load_dataset\n\nraw_datasets = load_dataset(\"kde4\", lang1=\"en\", lang2=\"fr\")```\n\nIf you want to work with a different pair of languages, you can specify them by their codes. A total of 92 languages are available for this dataset; you can see them all by expanding the language tags on its [dataset card](https://huggingface.co/datasets/kde4).\n\n![Language available for the KDE4 dataset.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/language_tags.png)\n\nLet’s have a look at the dataset:\n\n```\nDatasetDict({\n train: Dataset({\n features: ['id', 'translation'],\n num_rows: 210173\n })\n})```\n\nWe have 210,173 pairs of sentences, but in one single split, so we will need to create our own validation set. As we saw in [Chapter 5](/course/chapter5), a `Dataset` has a `train_test_split()` method that can help us. 
We’ll provide a seed for reproducibility:\n\n```\nsplit_datasets = raw_datasets[\"train\"].train_test_split(train_size=0.9, seed=20)\nsplit_datasets```\n\n```\nDatasetDict({\n train: Dataset({\n features: ['id', 'translation'],\n num_rows: 189155\n })\n test: Dataset({\n features: ['id', 'translation'],\n num_rows: 21018\n })\n})```\n\nWe can rename the `\"test\"` key to `\"validation\"` like this:\n\n```\nsplit_datasets[\"validation\"] = split_datasets.pop(\"test\")```\n\nNow let’s take a look at one element of the dataset:\n\n```\nsplit_datasets[\"train\"][1][\"translation\"]```\n\n```\n{'en': 'Default to expanded threads',\n 'fr': 'Par défaut, développer les fils de discussion'}```\n\nWe get a dictionary with two sentences in the pair of languages we requested. One particularity of this dataset full of technical computer science terms is that they are all fully translated in French. However, French engineers are often lazy and leave most computer science-specific words in English when they talk. Here, for instance, the word “threads” might well appear in a French sentence, especially in a technical conversation; but in this dataset it has been translated into the more correct “fils de discussion.” The pretrained model we use, which has been pretrained on a larger corpus of French and English sentences, takes the easier option of leaving the word as is:\n\n```\nfrom transformers import pipeline\n\nmodel_checkpoint = \"Helsinki-NLP/opus-mt-en-fr\"\ntranslator = pipeline(\"translation\", model=model_checkpoint)\ntranslator(\"Default to expanded threads\")```\n\n```\n[{'translation_text': 'Par défaut pour les threads élargis'}]```\n\nAnother example of this behavior can be seen with the word “plugin,” which isn’t officially a French word but which most native speakers will understand and not bother to translate. In the KDE4 dataset this word has been translated in French into the more official “module d’extension”:\n\n```\nsplit_datasets[\"train\"][172][\"translation\"]```\n\n```\n{'en': 'Unable to import %1 using the OFX importer plugin. This file is not the correct format.',\n 'fr': \"Impossible d'importer %1 en utilisant le module d'extension d'importation OFX. Ce fichier n'a pas un format correct.\"}```\n\nOur pretrained model, however, sticks with the compact and familiar English word:\n\n```\ntranslator(\n \"Unable to import %1 using the OFX importer plugin. This file is not the correct format.\"\n)```\n\n```\n[{'translation_text': \"Impossible d'importer %1 en utilisant le plugin d'importateur OFX. Ce fichier n'est pas le bon format.\"}]```\n\nIt will be interesting to see if our fine-tuned model picks up on those particularities of the dataset (spoiler alert: it will).\n\n✏️ **Your turn!** Another English word that is often used in French is “email.” Find the first sample in the training dataset that uses this word. How is it translated? How does the pretrained model translate the same English sentence?\n\n### [](#processing-the-data)Processing the data\n\nYou should know the drill by now: the texts all need to be converted into sets of token IDs so the model can make sense of them. For this task, we’ll need to tokenize both the inputs and the targets. Our first task is to create our `tokenizer` object. As noted earlier, we’ll be using a Marian English to French pretrained model. If you are trying this code with another pair of languages, make sure to adapt the model checkpoint. 
The [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) organization provides more than a thousand models in multiple languages.\n\n```\nfrom transformers import AutoTokenizer\n\nmodel_checkpoint = \"Helsinki-NLP/opus-mt-en-fr\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, return_tensors=\"pt\")```\n\nYou can also replace the `model_checkpoint` with any other model you prefer from the [Hub](https://huggingface.co/models), or a local folder where you’ve saved a pretrained model and a tokenizer.\n\n💡 If you are using a multilingual tokenizer such as mBART, mBART-50, or M2M100, you will need to set the language codes of your inputs and targets in the tokenizer by setting `tokenizer.src_lang` and `tokenizer.tgt_lang` to the right values.\n\nThe preparation of our data is pretty straightforward. There’s just one thing to remember; you need to ensure that the tokenizer processes the targets in the output language (here, French). You can do this by passing the targets to the `text_targets` argument of the tokenizer’s `__call__` method.\n\nTo see how this works, let’s process one sample of each language in the training set:\n\n```\nen_sentence = split_datasets[\"train\"][1][\"translation\"][\"en\"]\nfr_sentence = split_datasets[\"train\"][1][\"translation\"][\"fr\"]\n\ninputs = tokenizer(en_sentence, text_target=fr_sentence)\ninputs```\n\n```\n{'input_ids': [47591, 12, 9842, 19634, 9, 0], 'attention_mask': [1, 1, 1, 1, 1, 1], 'labels': [577, 5891, 2, 3184, 16, 2542, 5, 1710, 0]}```\n\nAs we can see, the output contains the input IDs associated with the English sentence, while the IDs associated with the French one are stored in the `labels` field. If you forget to indicate that you are tokenizing labels, they will be tokenized by the input tokenizer, which in the case of a Marian model is not going to go well at all:\n\n```\nwrong_targets = tokenizer(fr_sentence)\nprint(tokenizer.convert_ids_to_tokens(wrong_targets[\"input_ids\"]))\nprint(tokenizer.convert_ids_to_tokens(inputs[\"labels\"]))```\n\n```\n['▁Par', '▁dé', 'f', 'aut', ',', '▁dé', 've', 'lop', 'per', '▁les', '▁fil', 's', '▁de', '▁discussion', '']\n['▁Par', '▁défaut', ',', '▁développer', '▁les', '▁fils', '▁de', '▁discussion', '']```\n\nAs we can see, using the English tokenizer to preprocess a French sentence results in a lot more tokens, since the tokenizer doesn’t know any French words (except those that also appear in the English language, like “discussion”).\n\nSince `inputs` is a dictionary with our usual keys (input IDs, attention mask, etc.), the last step is to define the preprocessing function we will apply on the datasets:\n\n```\nmax_length = 128\n\n\ndef preprocess_function(examples):\n inputs = [ex[\"en\"] for ex in examples[\"translation\"]]\n targets = [ex[\"fr\"] for ex in examples[\"translation\"]]\n model_inputs = tokenizer(\n inputs, text_target=targets, max_length=max_length, truncation=True\n )\n return model_inputs```\n\nNote that we set the same maximum length for our inputs and outputs. Since the texts we’re dealing with seem pretty short, we use 128.\n\n💡 If you are using a T5 model (more specifically, one of the `t5-xxx` checkpoints), the model will expect the text inputs to have a prefix indicating the task at hand, such as `translate: English to French:`.\n\n⚠️ We don’t pay attention to the attention mask of the targets, as the model won’t expect it. Instead, the labels corresponding to a padding token should be set to `-100` so they are ignored in the loss computation. 
Translation


Let’s now dive into translation. This is another sequence-to-sequence task, which means it’s a problem that can be formulated as going from one sequence to another. In that sense the problem is pretty close to summarization, and you could adapt what we will see here to other sequence-to-sequence problems such as:

  • Style transfer: Creating a model that translates texts written in a certain style to another (e.g., formal to casual or Shakespearean English to modern English)
  • Generative question answering: Creating a model that generates answers to questions, given a context

If you have a big enough corpus of texts in two (or more) languages, you can train a new translation model from scratch like we will in the section on causal language modeling. It will be faster, however, to fine-tune an existing translation model, be it a multilingual one like mT5 or mBART that you want to fine-tune to a specific language pair, or even a model specialized for translation from one language to another that you want to fine-tune to your specific corpus.

In this section, we will fine-tune a Marian model pretrained to translate from English to French (since a lot of Hugging Face employees speak both those languages) on the KDE4 dataset, which is a dataset of localized files for the KDE apps. The model we will use has been pretrained on a large corpus of French and English texts taken from the Opus dataset, which actually contains the KDE4 dataset. But even if the pretrained model we use has seen that data during its pretraining, we will see that we can get a better version of it after fine-tuning.

Once we’re finished, we will have a model able to make predictions like this one:

(Screenshot: the fine-tuned model’s inference widget translating an English sentence into French.)

As in the previous sections, you can find the actual model that we’ll train and upload to the Hub using the code below and double-check its predictions here.

Preparing the data

To fine-tune or train a translation model from scratch, we will need a dataset suitable for the task. As mentioned previously, we’ll use the KDE4 dataset in this section, but you can adapt the code to use your own data quite easily, as long as you have pairs of sentences in the two languages you want to translate from and into. Refer back to Chapter 5 if you need a reminder of how to load your custom data in a Dataset.
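If your parallel data lives in your own files rather than on the Hub, the loading step might look like this minimal sketch (the file name and its JSON Lines structure here are just an assumption for illustration):

```
from datasets import load_dataset

# Hypothetical file where each line looks like:
# {"translation": {"en": "Some sentence", "fr": "Une phrase"}}
custom_datasets = load_dataset(
    "json", data_files={"train": "my_translation_pairs.jsonl"}
)
```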

The KDE4 dataset

As usual, we download our dataset using the load_dataset() function:

from datasets import load_dataset\n\nraw_datasets = load_dataset(\"kde4\", lang1=\"en\", lang2=\"fr\")

If you want to work with a different pair of languages, you can specify them by their codes. A total of 92 languages are available for this dataset; you can see them all by expanding the language tags on its dataset card.
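For example, switching to an English to Spanish setup (assuming that pair is among the available language tags) would only change the language codes passed to load_dataset():

```
from datasets import load_dataset

# Hypothetical variant of the loading call above for English to Spanish
raw_datasets = load_dataset("kde4", lang1="en", lang2="es")
```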

Let’s have a look at the dataset:

raw_datasets
DatasetDict({\n    train: Dataset({\n        features: ['id', 'translation'],\n        num_rows: 210173\n    })\n})

We have 210,173 pairs of sentences, but in one single split, so we will need to create our own validation set. As we saw in Chapter 5, a Dataset has a train_test_split() method that can help us. We’ll provide a seed for reproducibility:

split_datasets = raw_datasets[\"train\"].train_test_split(train_size=0.9, seed=20)\nsplit_datasets
DatasetDict({\n    train: Dataset({\n        features: ['id', 'translation'],\n        num_rows: 189155\n    })\n    test: Dataset({\n        features: ['id', 'translation'],\n        num_rows: 21018\n    })\n})

We can rename the \"test\" key to \"validation\" like this:

split_datasets[\"validation\"] = split_datasets.pop(\"test\")

Now let’s take a look at one element of the dataset:

split_datasets[\"train\"][1][\"translation\"]
{'en': 'Default to expanded threads',\n 'fr': 'Par défaut, développer les fils de discussion'}

We get a dictionary with two sentences in the pair of languages we requested. One particularity of this dataset full of technical computer science terms is that they are all fully translated into French. However, French engineers often leave most computer science-specific words in English when they talk. Here, for instance, the word “threads” might well appear in a French sentence, especially in a technical conversation; but in this dataset it has been translated into the more correct “fils de discussion.” The pretrained model we use, which has been pretrained on a larger corpus of French and English sentences, takes the easier option of leaving the word as is:

from transformers import pipeline\n\nmodel_checkpoint = \"Helsinki-NLP/opus-mt-en-fr\"\ntranslator = pipeline(\"translation\", model=model_checkpoint)\ntranslator(\"Default to expanded threads\")
[{'translation_text': 'Par défaut pour les threads élargis'}]

Another example of this behavior can be seen with the word “plugin,” which isn’t officially a French word but which most native speakers will understand and not bother to translate. In the KDE4 dataset this word has been translated into the more official French “module d’extension”:

split_datasets[\"train\"][172][\"translation\"]
{'en': 'Unable to import %1 using the OFX importer plugin. This file is not the correct format.',\n 'fr': \"Impossible d'importer %1 en utilisant le module d'extension d'importation OFX. Ce fichier n'a pas un format correct.\"}

Our pretrained model, however, sticks with the compact and familiar English word:

translator(\n    \"Unable to import %1 using the OFX importer plugin. This file is not the correct format.\"\n)
[{'translation_text': \"Impossible d'importer %1 en utilisant le plugin d'importateur OFX. Ce fichier n'est pas le bon format.\"}]

It will be interesting to see if our fine-tuned model picks up on those particularities of the dataset (spoiler alert: it will).

✏️ Your turn! Another English word that is often used in French is “email.” Find the first sample in the training dataset that uses this word. How is it translated? How does the pretrained model translate the same English sentence?

Processing the data

You should know the drill by now: the texts all need to be converted into sets of token IDs so the model can make sense of them. For this task, we’ll need to tokenize both the inputs and the targets. Our first task is to create our tokenizer object. As noted earlier, we’ll be using a Marian English to French pretrained model. If you are trying this code with another pair of languages, make sure to adapt the model checkpoint. The Helsinki-NLP organization provides more than a thousand models in multiple languages.

from transformers import AutoTokenizer\n\nmodel_checkpoint = \"Helsinki-NLP/opus-mt-en-fr\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, return_tensors=\"pt\")

You can also replace the model_checkpoint with any other model you prefer from the Hub, or a local folder where you’ve saved a pretrained model and a tokenizer.
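For instance, if you were adapting this section to German to English, the swap might look like the following sketch (the checkpoint name is another Marian model from the same organization; the rest of the code would stay unchanged):

```
from transformers import AutoTokenizer

# Hypothetical swap for a German to English setup
model_checkpoint = "Helsinki-NLP/opus-mt-de-en"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```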

💡 If you are using a multilingual tokenizer such as mBART, mBART-50, or M2M100, you will need to set the language codes of your inputs and targets in the tokenizer by setting tokenizer.src_lang and tokenizer.tgt_lang to the right values.
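As a rough sketch of what that looks like for mBART-50 (not the Marian model used in this section; the checkpoint and language codes below are the standard mBART-50 ones for an English to French setup):

```
from transformers import AutoTokenizer

mbart_tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
mbart_tokenizer.src_lang = "en_XX"
mbart_tokenizer.tgt_lang = "fr_XX"
```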

The preparation of our data is pretty straightforward. There’s just one thing to remember: you need to ensure that the tokenizer processes the targets in the output language (here, French). You can do this by passing the targets to the text_target argument of the tokenizer’s __call__ method.

To see how this works, let’s process one sample of each language in the training set:

en_sentence = split_datasets[\"train\"][1][\"translation\"][\"en\"]\nfr_sentence = split_datasets[\"train\"][1][\"translation\"][\"fr\"]\n\ninputs = tokenizer(en_sentence, text_target=fr_sentence)\ninputs
{'input_ids': [47591, 12, 9842, 19634, 9, 0], 'attention_mask': [1, 1, 1, 1, 1, 1], 'labels': [577, 5891, 2, 3184, 16, 2542, 5, 1710, 0]}

As we can see, the output contains the input IDs associated with the English sentence, while the IDs associated with the French one are stored in the labels field. If you forget to indicate that you are tokenizing labels, they will be tokenized by the input tokenizer, which in the case of a Marian model is not going to go well at all:

wrong_targets = tokenizer(fr_sentence)\nprint(tokenizer.convert_ids_to_tokens(wrong_targets[\"input_ids\"]))\nprint(tokenizer.convert_ids_to_tokens(inputs[\"labels\"]))
['▁Par', '▁dé', 'f', 'aut', ',', '▁dé', 've', 'lop', 'per', '▁les', '▁fil', 's', '▁de', '▁discussion', '</s>']\n['▁Par', '▁défaut', ',', '▁développer', '▁les', '▁fils', '▁de', '▁discussion', '</s>']

As we can see, using the English tokenizer to preprocess a French sentence results in a lot more tokens, since the tokenizer doesn’t know any French words (except those that also appear in the English language, like “discussion”).

Since inputs is a dictionary with our usual keys (input IDs, attention mask, etc.), the last step is to define the preprocessing function we will apply on the datasets:

```
max_length = 128


def preprocess_function(examples):
    inputs = [ex["en"] for ex in examples["translation"]]
    targets = [ex["fr"] for ex in examples["translation"]]
    model_inputs = tokenizer(
        inputs, text_target=targets, max_length=max_length, truncation=True
    )
    return model_inputs
```

Note that we set the same maximum length for our inputs and outputs. Since the texts we’re dealing with seem pretty short, we use 128.

💡 If you are using a T5 model (more specifically, one of the t5-xxx checkpoints), the model will expect the text inputs to have a prefix indicating the task at hand, such as translate: English to French:.
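For a T5 checkpoint, the preprocessing function shown above might be adapted along these lines (a sketch only, since we stick with Marian in this section):

```
# Hypothetical variant of preprocess_function() for a t5-* checkpoint
prefix = "translate English to French: "


def preprocess_function_t5(examples):
    inputs = [prefix + ex["en"] for ex in examples["translation"]]
    targets = [ex["fr"] for ex in examples["translation"]]
    return tokenizer(
        inputs, text_target=targets, max_length=max_length, truncation=True
    )
```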

⚠️ We don’t pay attention to the attention mask of the targets, as the model won’t expect it. Instead, the labels corresponding to a padding token should be set to -100 so they are ignored in the loss computation. This will be done by our data collator later on since we are applying dynamic padding, but if you use padding here, you should adapt the preprocessing function to set all labels that correspond to the padding token to -100.
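If you did want to pad to a fixed length at this stage, a sketch of the adapted function could look like this (we keep relying on dynamic padding in the rest of this section):

```
# Hypothetical variant that pads here and masks the padded label positions
def preprocess_function_with_padding(examples):
    inputs = [ex["en"] for ex in examples["translation"]]
    targets = [ex["fr"] for ex in examples["translation"]]
    model_inputs = tokenizer(
        inputs,
        text_target=targets,
        max_length=max_length,
        padding="max_length",
        truncation=True,
    )
    # Set every label that corresponds to a padding token to -100
    model_inputs["labels"] = [
        [(token if token != tokenizer.pad_token_id else -100) for token in labels]
        for labels in model_inputs["labels"]
    ]
    return model_inputs
```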

We can now apply that preprocessing in one go on all the splits of our dataset:

tokenized_datasets = split_datasets.map(\n    preprocess_function,\n    batched=True,\n    remove_columns=split_datasets[\"train\"].column_names,\n)

Now that the data has been preprocessed, we are ready to fine-tune our pretrained model!

Fine-tuning the model with the Trainer API

The actual code using the Trainer will be the same as before, with just one little change: we use a Seq2SeqTrainer here, which is a subclass of Trainer that will allow us to properly deal with the evaluation, using the generate() method to predict outputs from the inputs. We’ll dive into that in more detail when we talk about the metric computation.

First things first, we need an actual model to fine-tune. We’ll use the usual AutoModel API:

from transformers import AutoModelForSeq2SeqLM\n\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

Note that this time we are using a model that was trained on a translation task and can actually be used already, so there is no warning about missing weights or newly initialized ones.

Data collation

We’ll need a data collator to deal with the padding for dynamic batching. We can’t just use a DataCollatorWithPadding like in Chapter 3 in this case, because that only pads the inputs (input IDs, attention mask, and token type IDs). Our labels should also be padded to the maximum length encountered in the labels. And, as mentioned previously, the padding value used to pad the labels should be -100 and not the padding token of the tokenizer, to make sure those padded values are ignored in the loss computation.

This is all done by a DataCollatorForSeq2Seq. Like the DataCollatorWithPadding, it takes the tokenizer used to preprocess the inputs, but it also takes the model. This is because this data collator will also be responsible for preparing the decoder input IDs, which are shifted versions of the labels with a special token at the beginning. Since this shift is done slightly differently for different architectures, the DataCollatorForSeq2Seq needs to know the model object:

from transformers import DataCollatorForSeq2Seq\n\ndata_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

To test this on a few samples, we just call it on a list of examples from our tokenized training set:

batch = data_collator([tokenized_datasets[\"train\"][i] for i in range(1, 3)])\nbatch.keys()
dict_keys(['attention_mask', 'input_ids', 'labels', 'decoder_input_ids'])

We can check that our labels have been padded to the maximum length of the batch, using -100 as the padding value:

batch[\"labels\"]
tensor([[  577,  5891,     2,  3184,    16,  2542,     5,  1710,     0,  -100,\n          -100,  -100,  -100,  -100,  -100,  -100],\n        [ 1211,     3,    49,  9409,  1211,     3, 29140,   817,  3124,   817,\n           550,  7032,  5821,  7907, 12649,     0]])

And we can also have a look at the decoder input IDs, to see that they are shifted versions of the labels:

batch[\"decoder_input_ids\"]
tensor([[59513,   577,  5891,     2,  3184,    16,  2542,     5,  1710,     0,\n         59513, 59513, 59513, 59513, 59513, 59513],\n        [59513,  1211,     3,    49,  9409,  1211,     3, 29140,   817,  3124,\n           817,   550,  7032,  5821,  7907, 12649]])

Here are the labels for the first and second elements in our dataset:

for i in range(1, 3):\n    print(tokenized_datasets[\"train\"][i][\"labels\"])
[577, 5891, 2, 3184, 16, 2542, 5, 1710, 0]\n[1211, 3, 49, 9409, 1211, 3, 29140, 817, 3124, 817, 550, 7032, 5821, 7907, 12649, 0]

We will pass this data_collator along to the Seq2SeqTrainer. Next, let’s have a look at the metric.

Metrics

The feature that Seq2SeqTrainer adds to its superclass Trainer is the ability to use the generate() method during evaluation or prediction. During training, the model uses the decoder_input_ids with an attention mask that prevents it from attending to tokens after the one it’s trying to predict, which keeps training fast. During inference we won’t be able to use those since we won’t have labels, so it’s a good idea to evaluate our model with the same setup.

As we saw in Chapter 1, the decoder performs inference by predicting tokens one by one — something that’s implemented behind the scenes in 🤗 Transformers by the generate() method. The Seq2SeqTrainer will let us use that method for evaluation if we set predict_with_generate=True.

The traditional metric used for translation is the BLEU score, introduced in a 2002 article by Kishore Papineni et al. The BLEU score evaluates how close the translations are to their labels. It does not measure the intelligibility or grammatical correctness of the model’s generated outputs, but uses statistical rules to ensure that all the words in the generated outputs also appear in the targets. In addition, there are rules that penalize repetitions of the same words if they are not also repeated in the targets (to avoid the model outputting sentences like \"the the the the the\") and output sentences that are shorter than those in the targets (to avoid the model outputting sentences like \"the\").

One weakness with BLEU is that it expects the text to already be tokenized, which makes it difficult to compare scores between models that use different tokenizers. So instead, the most commonly used metric for benchmarking translation models today is SacreBLEU, which addresses this weakness (and others) by standardizing the tokenization step. To use this metric, we first need to install the SacreBLEU library:

!pip install sacrebleu

We can then load it via evaluate.load() like we did in Chapter 3:

import evaluate\n\nmetric = evaluate.load(\"sacrebleu\")

This metric will take texts as inputs and targets. It is designed to accept several acceptable targets, as there are often multiple acceptable translations of the same sentence — the dataset we’re using only provides one, but it’s not uncommon in NLP to find datasets that give several sentences as labels. So, the predictions should be a list of sentences, but the references should be a list of lists of sentences.

Let’s try an example:

```
predictions = [
    "This plugin lets you translate web pages between several languages automatically."
]
references = [
    [
        "This plugin allows you to automatically translate web pages between several languages."
    ]
]
metric.compute(predictions=predictions, references=references)
```

```
{'score': 46.750469682990165,
 'counts': [11, 6, 4, 3],
 'totals': [12, 11, 10, 9],
 'precisions': [91.67, 54.54, 40.0, 33.33],
 'bp': 0.9200444146293233,
 'sys_len': 12,
 'ref_len': 13}
```

This gets a BLEU score of 46.75, which is rather good — for reference, the original Transformer model in the “Attention Is All You Need” paper achieved a BLEU score of 41.8 on a similar translation task between English and French! (For more information about the individual metrics, like counts and bp, see the SacreBLEU repository.) On the other hand, if we try with the two bad types of predictions (lots of repetitions or too short) that often come out of translation models, we will get rather bad BLEU scores:

```
predictions = ["This This This This"]
references = [
    [
        "This plugin allows you to automatically translate web pages between several languages."
    ]
]
metric.compute(predictions=predictions, references=references)
```

```
{'score': 1.683602693167689,
 'counts': [1, 0, 0, 0],
 'totals': [4, 3, 2, 1],
 'precisions': [25.0, 16.67, 12.5, 12.5],
 'bp': 0.10539922456186433,
 'sys_len': 4,
 'ref_len': 13}
```

```
predictions = ["This plugin"]
references = [
    [
        "This plugin allows you to automatically translate web pages between several languages."
    ]
]
metric.compute(predictions=predictions, references=references)
```

```
{'score': 0.0,
 'counts': [2, 1, 0, 0],
 'totals': [2, 1, 0, 0],
 'precisions': [100.0, 100.0, 0.0, 0.0],
 'bp': 0.004086771438464067,
 'sys_len': 2,
 'ref_len': 13}
```

The score can go from 0 to 100, and higher is better.
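If you are curious how the fields above combine into the final number, here is a rough sanity check using the values from the first example (SacreBLEU does this internally with full precision):

```
import math

precisions = [91.67, 54.54, 40.0, 33.33]  # n-gram precisions from the first example
bp = math.exp(1 - 13 / 12)  # brevity penalty: exp(1 - ref_len / sys_len) when sys_len < ref_len
score = bp * math.exp(sum(math.log(p / 100) for p in precisions) / 4) * 100
print(round(score, 1))  # close to the 46.75 reported above
```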

To get from the model outputs to texts the metric can use, we will use the tokenizer.batch_decode() method. We just have to clean up all the -100s in the labels (the tokenizer will automatically do the same for the padding token):

```
import numpy as np


def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # In case the model returns more than the prediction logits
    if isinstance(preds, tuple):
        preds = preds[0]

    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

    # Replace -100s in the labels as we can't decode them
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    # Some simple post-processing
    decoded_preds = [pred.strip() for pred in decoded_preds]
    decoded_labels = [[label.strip()] for label in decoded_labels]

    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    return {"bleu": result["score"]}
```

Now that this is done, we are ready to fine-tune our model!

Fine-tuning the model

The first step is to log in to Hugging Face, so you’re able to upload your results to the Model Hub. There’s a convenience function to help you with this in a notebook:

from huggingface_hub import notebook_login\n\nnotebook_login()

This will display a widget where you can enter your Hugging Face login credentials.

If you aren’t working in a notebook, just type the following line in your terminal:

huggingface-cli login

Once this is done, we can define our Seq2SeqTrainingArguments. Like for the Trainer, we use a subclass of TrainingArguments that contains a few more fields:

```
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    "marian-finetuned-kde4-en-to-fr",
    evaluation_strategy="no",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=3,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
)
```

Apart from the usual hyperparameters (like learning rate, number of epochs, batch size, and some weight decay), here are a few changes compared to what we saw in the previous sections:

  • We don’t set any regular evaluation, as evaluation takes a while; we will just evaluate our model once before training and after.
  • We set fp16=True, which speeds up training on modern GPUs.
  • We set predict_with_generate=True, as discussed above.
  • We use push_to_hub=True to upload the model to the Hub at the end of each epoch.

Note that you can specify the full name of the repository you want to push to with the hub_model_id argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the huggingface-course organization, we added hub_model_id=\"huggingface-course/marian-finetuned-kde4-en-to-fr\" to Seq2SeqTrainingArguments. By default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be \"sgugger/marian-finetuned-kde4-en-to-fr\" (which is the model we linked to at the beginning of this section).
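For example, pushing to an organization rather than your own namespace might look like this sketch (repository name hypothetical; the remaining arguments stay the same as above):

```
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    "marian-finetuned-kde4-en-to-fr",
    hub_model_id="huggingface-course/marian-finetuned-kde4-en-to-fr",
    push_to_hub=True,
    # ... plus the same training arguments as above ...
)
```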

💡 If the output directory you are using already exists, it needs to be a local clone of the repository you want to push to. If it isn’t, you’ll get an error when defining your Seq2SeqTrainer and will need to set a new name.

Finally, we just pass everything to the Seq2SeqTrainer:

```
from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
```

Before training, we’ll first look at the score our model gets, to double-check that we’re not making things worse with our fine-tuning. This command will take a bit of time, so you can grab a coffee while it executes:

trainer.evaluate(max_length=max_length)
{'eval_loss': 1.6964408159255981,\n 'eval_bleu': 39.26865061007616,\n 'eval_runtime': 965.8884,\n 'eval_samples_per_second': 21.76,\n 'eval_steps_per_second': 0.341}

A BLEU score of 39 is not too bad, which reflects the fact that our model is already good at translating English sentences to French ones.

Next is the training, which will also take a bit of time:

trainer.train()

Note that while the training happens, each time the model is saved (here, every epoch) it is uploaded to the Hub in the background. This way, you will be able to resume your training on another machine if necessary.

Once training is done, we evaluate our model again — hopefully we will see some improvement in the BLEU score!

trainer.evaluate(max_length=max_length)
{'eval_loss': 0.8558505773544312,\n 'eval_bleu': 52.94161337775576,\n 'eval_runtime': 714.2576,\n 'eval_samples_per_second': 29.426,\n 'eval_steps_per_second': 0.461,\n 'epoch': 3.0}

That’s a nearly 14-point improvement, which is great.

Finally, we use the push_to_hub() method to make sure we upload the latest version of the model. The Trainer also drafts a model card with all the evaluation results and uploads it. This model card contains metadata that helps the Model Hub pick the widget for the inference demo. Usually, there is no need to say anything as it can infer the right widget from the model class, but in this case, the same model class can be used for all kinds of sequence-to-sequence problems, so we specify it’s a translation model:

trainer.push_to_hub(tags=\"translation\", commit_message=\"Training complete\")

This command returns the URL of the commit it just did, if you want to inspect it:

'https://huggingface.co/sgugger/marian-finetuned-kde4-en-to-fr/commit/3601d621e3baae2bc63d3311452535f8f58f6ef3'

At this stage, you can use the inference widget on the Model Hub to test your model and share it with your friends. You have successfully fine-tuned a model on a translation task — congratulations!

If you want to dive a bit more deeply into the training loop, we will now show you how to do the same thing using 🤗 Accelerate.

A custom training loop

Let’s now take a look at the full training loop, so you can easily customize the parts you need. It will look a lot like what we did in section 2 and Chapter 3.

Preparing everything for training

You’ve seen all of this a few times now, so we’ll go through the code quite quickly. First we’ll build the DataLoaders from our datasets, after setting the datasets to the \"torch\" format so we get PyTorch tensors:

```
from torch.utils.data import DataLoader

tokenized_datasets.set_format("torch")
train_dataloader = DataLoader(
    tokenized_datasets["train"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)
eval_dataloader = DataLoader(
    tokenized_datasets["validation"], collate_fn=data_collator, batch_size=8
)
```
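As an optional sanity check (not part of the original code), you can peek at one collated batch before going further:

```
# Grab one batch and look at the tensor shapes produced by the data collator
batch = next(iter(train_dataloader))
print({k: v.shape for k, v in batch.items()})
```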

Next we reinstantiate our model, to make sure we’re not continuing the fine-tuning from before but starting from the pretrained model again:

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

Then we will need an optimizer:

from transformers import AdamW\n\noptimizer = AdamW(model.parameters(), lr=2e-5)

Once we have all those objects, we can send them to the accelerator.prepare() method. Remember that if you want to train on TPUs in a Colab notebook, you will need to move all of this code into a training function, and the notebook itself shouldn’t execute any cell that instantiates an Accelerator.

```
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)
```
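As a reminder of the TPU caveat above, the Colab pattern would look roughly like this sketch (the function body would contain everything from the Accelerator instantiation onwards):

```
from accelerate import notebook_launcher


def training_function():
    # Instantiate the Accelerator and run the whole training loop in here
    ...


# notebook_launcher(training_function)
```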

Now that we have sent our train_dataloader to accelerator.prepare(), we can use its length to compute the number of training steps. Remember we should always do this after preparing the dataloader, as that method will change the length of the DataLoader. We use a classic linear schedule from the learning rate to 0:

```
from transformers import get_scheduler

num_train_epochs = 3
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch

lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)
```

Lastly, to push our model to the Hub, we will need to create a Repository object in a working folder. First log in to the Hugging Face Hub, if you’re not logged in already. We’ll determine the repository name from the model ID we want to give our model (feel free to replace the repo_name with your own choice; it just needs to contain your username, which is what the function get_full_repo_name() does):

from huggingface_hub import Repository, get_full_repo_name\n\nmodel_name = \"marian-finetuned-kde4-en-to-fr-accelerate\"\nrepo_name = get_full_repo_name(model_name)\nrepo_name
'sgugger/marian-finetuned-kde4-en-to-fr-accelerate'

Then we can clone that repository in a local folder. If it already exists, this local folder should be a clone of the repository we are working with:

output_dir = \"marian-finetuned-kde4-en-to-fr-accelerate\"\nrepo = Repository(output_dir, clone_from=repo_name)

We can now upload anything we save in output_dir by calling the repo.push_to_hub() method. This will help us upload the intermediate models at the end of each epoch.

Training loop

We are now ready to write the full training loop. To simplify its evaluation part, we define this postprocess() function that takes predictions and labels and converts them to the lists of strings our metric object will expect:

```
def postprocess(predictions, labels):
    predictions = predictions.cpu().numpy()
    labels = labels.cpu().numpy()

    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)

    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    # Some simple post-processing
    decoded_preds = [pred.strip() for pred in decoded_preds]
    decoded_labels = [[label.strip()] for label in decoded_labels]
    return decoded_preds, decoded_labels
```

The training loop looks a lot like the ones in section 2 and Chapter 3, with a few differences in the evaluation part — so let’s focus on that!

The first thing to note is that we use the generate() method to compute predictions, but this is a method on our base model, not the wrapped model 🤗 Accelerate created in the prepare() method. That’s why we unwrap the model first, then call this method.

The second thing is that, like with token classification, two processes may have padded the inputs and labels to different shapes, so we use accelerator.pad_across_processes() to make the predictions and labels the same shape before calling the gather() method. If we don’t do this, the evaluation will either error out or hang forever.

```
from tqdm.auto import tqdm
import torch

progress_bar = tqdm(range(num_training_steps))

for epoch in range(num_train_epochs):
    # Training
    model.train()
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    # Evaluation
    model.eval()
    for batch in tqdm(eval_dataloader):
        with torch.no_grad():
            generated_tokens = accelerator.unwrap_model(model).generate(
                batch["input_ids"],
                attention_mask=batch["attention_mask"],
                max_length=128,
            )
        labels = batch["labels"]

        # Necessary to pad predictions and labels for being gathered
        generated_tokens = accelerator.pad_across_processes(
            generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
        )
        labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100)

        predictions_gathered = accelerator.gather(generated_tokens)
        labels_gathered = accelerator.gather(labels)

        decoded_preds, decoded_labels = postprocess(predictions_gathered, labels_gathered)
        metric.add_batch(predictions=decoded_preds, references=decoded_labels)

    results = metric.compute()
    print(f"epoch {epoch}, BLEU score: {results['score']:.2f}")

    # Save and upload
    accelerator.wait_for_everyone()
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)
    if accelerator.is_main_process:
        tokenizer.save_pretrained(output_dir)
        repo.push_to_hub(
            commit_message=f"Training in progress epoch {epoch}", blocking=False
        )
```

```
epoch 0, BLEU score: 53.47
epoch 1, BLEU score: 54.24
epoch 2, BLEU score: 54.44
```

Once this is done, you should have a model that has results pretty similar to the one trained with the Seq2SeqTrainer. You can check the one we trained using this code at huggingface-course/marian-finetuned-kde4-en-to-fr-accelerate. And if you want to test out any tweaks to the training loop, you can directly implement them by editing the code shown above!
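For example, one tweak you could experiment with is gradient accumulation; a sketch (assuming a recent version of 🤗 Accelerate that supports accelerator.accumulate()) might look like this:

```
from accelerate import Accelerator

# Hypothetical tweak: accumulate gradients over 4 steps to simulate a larger batch
accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

for batch in train_dataloader:
    with accelerator.accumulate(model):
        outputs = model(**batch)
        accelerator.backward(outputs.loss)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```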

Using the fine-tuned model

We’ve already shown you how you can use the model we fine-tuned on the Model Hub with the inference widget. To use it locally in a pipeline, we just have to specify the proper model identifier:

```
from transformers import pipeline

# Replace this with your own checkpoint
model_checkpoint = "huggingface-course/marian-finetuned-kde4-en-to-fr"
translator = pipeline("translation", model=model_checkpoint)
translator("Default to expanded threads")
```
[{'translation_text': 'Par défaut, développer les fils de discussion'}]

As expected, our pretrained model adapted its knowledge to the corpus we fine-tuned it on, and instead of leaving the English word “threads” alone, it now translates it into the official French version. It’s the same for “plugin”:

translator(\n    \"Unable to import %1 using the OFX importer plugin. This file is not the correct format.\"\n)
[{'translation_text': \"Impossible d'importer %1 en utilisant le module externe d'importation OFX. Ce fichier n'est pas le bon format.\"}]

Another great example of domain adaptation!

✏️ Your turn! What does the model return on the sample with the word “email” you identified earlier?

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:28.863Z"} {"title":"Training a causal language model from scratch - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter7/6?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#training-a-causal-language-model-from-scratch)Training a causal language model from scratch\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-7-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter7/section6_pt.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter7/section6_pt.ipynb)\n\nUp until now, we’ve mostly been using pretrained models and fine-tuning them for new use cases by reusing the weights from pretraining. As we saw in [Chapter 1](/course/chapter1), this is commonly referred to as _transfer learning_, and it’s a very successful strategy for applying Transformer models to most real-world use cases where labeled data is sparse. In this chapter, we’ll take a different approach and train a completely new model from scratch. This is a good approach to take if you have a lot of data and it is very different from the pretraining data used for the available models. However, it also requires considerably more compute resources to pretrain a language model than just to fine-tune an existing one. Examples where it can make sense to train a new model include for datasets consisting of musical notes, molecular sequences such as DNA, or programming languages. The latter have recently gained traction thanks to tools such as TabNine and GitHub’s Copilot, powered by OpenAI’s Codex model, that can generate long sequences of code. 
This task of text generation is best addressed with auto-regressive or causal language models such as GPT-2.

In this section we will build a scaled-down version of a code generation model: we’ll focus on one-line completions instead of full functions or classes, using a subset of Python code. When working with data in Python you are in frequent contact with the Python data science stack, consisting of the matplotlib, seaborn, pandas, and scikit-learn libraries. When using those frameworks it’s common to need to look up specific commands, so it would be nice if we could use a model to complete these calls for us.

In Chapter 6 we created an efficient tokenizer to process Python source code, but what we still need is a large-scale dataset to pretrain a model on. Here, we’ll apply our tokenizer to a corpus of Python code derived from GitHub repositories. We will then use the Trainer API and 🤗 Accelerate to train the model. Let’s get to it!

This section showcases the model that was trained and uploaded to the Hub using the code shown here; you can try it out at https://huggingface.co/huggingface-course/codeparrot-ds. Note that since there is some randomization happening in the text generation, you will probably get a slightly different result.

Gathering the data

Python code is abundantly available from code repositories such as GitHub, which we can use to create a dataset by scraping for every Python repository. This was the approach taken in the Transformers textbook to pretrain a large GPT-2 model. Using a GitHub dump of about 180 GB containing roughly 20 million Python files called codeparrot, the authors built a dataset that they then shared on the Hugging Face Hub.

However, training on the full corpus is time- and compute-consuming, and we only need the subset of the dataset concerned with the Python data science stack. So, let’s start by filtering the codeparrot dataset for all files that include any of the libraries in this stack. Because of the dataset’s size, we want to avoid downloading it; instead, we’ll use the streaming feature to filter it on the fly. To help us filter the code samples using the libraries we mentioned earlier, we’ll use the following function:

def any_keyword_in_string(string, keywords):\n    for keyword in keywords:\n        if keyword in string:\n            return True\n    return False

Let’s test it on two examples:

filters = [\"pandas\", \"sklearn\", \"matplotlib\", \"seaborn\"]\nexample_1 = \"import numpy as np\"\nexample_2 = \"import pandas as pd\"\n\nprint(\n    any_keyword_in_string(example_1, filters), any_keyword_in_string(example_2, filters)\n)
False True

We can use this to create a function that will stream the dataset and filter the elements we want:

from collections import defaultdict\nfrom tqdm import tqdm\nfrom datasets import Dataset\n\n\ndef filter_streaming_dataset(dataset, filters):\n    filtered_dict = defaultdict(list)\n    total = 0\n    for sample in tqdm(iter(dataset)):\n        total += 1\n        if any_keyword_in_string(sample[\"content\"], filters):\n            for k, v in sample.items():\n                filtered_dict[k].append(v)\n    print(f\"{len(filtered_dict['content'])/total:.2%} of data after filtering.\")\n    return Dataset.from_dict(filtered_dict)

Then we can simply apply this function to the streaming dataset:

# This cell will take a very long time to execute, so you should skip it and go to\n# the next one!\nfrom datasets import load_dataset\n\nsplit = \"train\"  # \"valid\"\nfilters = [\"pandas\", \"sklearn\", \"matplotlib\", \"seaborn\"]\n\ndata = load_dataset(f\"transformersbook/codeparrot-{split}\", split=split, streaming=True)\nfiltered_data = filter_streaming_dataset(data, filters)
3.26% of data after filtering.

This leaves us with about 3% of the original dataset, which is still quite sizable — the resulting dataset is 6 GB and consists of 600,000 Python scripts!

Filtering the full dataset can take 2-3h depending on your machine and bandwidth. If you don’t want to go through this lengthy process yourself, we provide the filtered dataset on the Hub for you to download:

from datasets import load_dataset, DatasetDict\n\nds_train = load_dataset(\"huggingface-course/codeparrot-ds-train\", split=\"train\")\nds_valid = load_dataset(\"huggingface-course/codeparrot-ds-valid\", split=\"validation\")\n\nraw_datasets = DatasetDict(\n    {\n        \"train\": ds_train,  # .shuffle().select(range(50000)),\n        \"valid\": ds_valid,  # .shuffle().select(range(500))\n    }\n)\n\nraw_datasets
DatasetDict({\n    train: Dataset({\n        features: ['repo_name', 'path', 'copies', 'size', 'content', 'license'],\n        num_rows: 606720\n    })\n    valid: Dataset({\n        features: ['repo_name', 'path', 'copies', 'size', 'content', 'license'],\n        num_rows: 3322\n    })\n})

Pretraining the language model will take a while. We suggest that you first run the training loop on a sample of the data by uncommenting the two partial lines above, and make sure that the training successfully completes and the models are stored. Nothing is more frustrating than a training run failing at the last step because you forgot to create a folder or because there’s a typo at the end of the training loop!
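
For reference, here is a rough sketch of what that downsampled setup could look like; the seed value is our own addition, just to make the sample reproducible:

```
from datasets import DatasetDict

# Work with a small random sample while debugging the training pipeline
raw_datasets = DatasetDict(
    {
        "train": ds_train.shuffle(seed=0).select(range(50000)),
        "valid": ds_valid.shuffle(seed=0).select(range(500)),
    }
)
```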

Let’s look at an example from the dataset. We’ll just show the first 200 characters of each field:

for key in raw_datasets[\"train\"][0]:\n    print(f\"{key.upper()}: {raw_datasets['train'][0][key][:200]}\")
'REPO_NAME: kmike/scikit-learn'\n'PATH: sklearn/utils/__init__.py'\n'COPIES: 3'\n'SIZE: 10094'\n'''CONTENT: \"\"\"\nThe :mod:`sklearn.utils` module includes various utilites.\n\"\"\"\n\nfrom collections import Sequence\n\nimport numpy as np\nfrom scipy.sparse import issparse\nimport warnings\n\nfrom .murmurhash import murm\nLICENSE: bsd-3-clause'''

We can see that the content field contains the code that we want our model to train on. Now that we have a dataset, we need to prepare the texts so they’re in a format suitable for pretraining.

Preparing the dataset

The first step will be to tokenize the data, so we can use it for training. Since our goal is to mainly autocomplete short function calls, we can keep the context size relatively small. This has the benefit that we can train the model much faster and it requires significantly less memory. If it is important for your application to have more context (for example, if you want the model to write unit tests based on a file with the function definition), make sure you increase that number, but also keep in mind that this comes with a greater GPU memory footprint. For now, let’s fix the context size at 128 tokens, as opposed to the 1,024 or 2,048 used in GPT-2 or GPT-3, respectively.

Most documents contain many more than 128 tokens, so simply truncating the inputs to the maximum length would eliminate a large fraction of our dataset. Instead, we’ll use the return_overflowing_tokens option to tokenize the whole input and split it into several chunks, as we did in Chapter 6. We’ll also use the return_length option to return the length of each created chunk automatically. Often the last chunk will be smaller than the context size, and we’ll get rid of these pieces to avoid padding issues; we don’t really need them as we have plenty of data anyway.

[Figure: chunking a large text into several pieces]

Let’s see exactly how this works by looking at the first two examples:

from transformers import AutoTokenizer\n\ncontext_length = 128\ntokenizer = AutoTokenizer.from_pretrained(\"huggingface-course/code-search-net-tokenizer\")\n\noutputs = tokenizer(\n    raw_datasets[\"train\"][:2][\"content\"],\n    truncation=True,\n    max_length=context_length,\n    return_overflowing_tokens=True,\n    return_length=True,\n)\n\nprint(f\"Input IDs length: {len(outputs['input_ids'])}\")\nprint(f\"Input chunk lengths: {(outputs['length'])}\")\nprint(f\"Chunk mapping: {outputs['overflow_to_sample_mapping']}\")
Input IDs length: 34\nInput chunk lengths: [128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 117, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 41]\nChunk mapping: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

We can see that we get 34 segments in total from those two examples. Looking at the chunk lengths, we can see that the chunks at the ends of both documents have less than 128 tokens (117 and 41, respectively). These represent just a small fraction of the total chunks that we have, so we can safely throw them away. With the overflow_to_sample_mapping field, we can also reconstruct which chunks belonged to which input samples.
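
To get a feel for what the model will actually see, we could decode one of these chunks back into text (a quick inspection step, not part of the processing pipeline):

```
# Decode the first chunk back into text and show the beginning of it
print(tokenizer.decode(outputs["input_ids"][0])[:200])
```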

With this operation we’re using a handy feature of the Dataset.map() function in 🤗 Datasets, which is that it does not require one-to-one maps; as we saw in section 3, we can create batches with more or fewer elements than the input batch. This is useful when doing operations like data augmentation or data filtering that change the number of elements. In our case, when tokenizing each element into chunks of the specified context size, we create many samples from each document. We just need to make sure to delete the existing columns, since they have a conflicting size. If we wanted to keep them, we could repeat them appropriately and return them within the Dataset.map() call:

def tokenize(element):\n    outputs = tokenizer(\n        element[\"content\"],\n        truncation=True,\n        max_length=context_length,\n        return_overflowing_tokens=True,\n        return_length=True,\n    )\n    input_batch = []\n    for length, input_ids in zip(outputs[\"length\"], outputs[\"input_ids\"]):\n        if length == context_length:\n            input_batch.append(input_ids)\n    return {\"input_ids\": input_batch}\n\n\ntokenized_datasets = raw_datasets.map(\n    tokenize, batched=True, remove_columns=raw_datasets[\"train\"].column_names\n)\ntokenized_datasets
DatasetDict({\n    train: Dataset({\n        features: ['input_ids'],\n        num_rows: 16702061\n    })\n    valid: Dataset({\n        features: ['input_ids'],\n        num_rows: 93164\n    })\n})

We now have 16.7 million examples with 128 tokens each, which corresponds to about 2.1 billion tokens in total. For reference, OpenAI’s GPT-3 and Codex models are trained on 300 and 100 billion tokens, respectively, where the Codex models are initialized from the GPT-3 checkpoints. Our goal in this section is not to compete with these models, which can generate long, coherent texts, but to create a scaled-down version providing a quick autocomplete function for data scientists.
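
The back-of-the-envelope arithmetic behind that token count:

```
num_chunks = 16_702_061
context_length = 128
print(f"{num_chunks * context_length / 1e9:.2f}B tokens")  # roughly 2.14B tokens
```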

Now that we have the dataset ready, let’s set up the model!

✏️ Try it out! Getting rid of all the chunks that are smaller than the context size wasn’t a big issue here because we’re using small context windows. As you increase the context size (or if you have a corpus of short documents), the fraction of chunks that are thrown away will also grow. A more efficient way to prepare the data is to join all the tokenized samples in a batch with an eos_token_id token in between, and then perform the chunking on the concatenated sequences. As an exercise, modify the tokenize() function to make use of that approach. Note that you’ll want to set truncation=False and remove the other arguments from the tokenizer to get the full sequence of token IDs.
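
Here is one rough sketch of that approach, in case you want to compare notes after trying it yourself (tokenize_concat and the variable names are our own, not the course's reference solution):

```
def tokenize_concat(element):
    # Tokenize without truncation to get the full sequence of token IDs
    outputs = tokenizer(element["content"], truncation=False)
    # Join all samples in the batch, separated by an end-of-sequence token
    all_ids = []
    for ids in outputs["input_ids"]:
        all_ids.extend(ids + [tokenizer.eos_token_id])
    # Slice the concatenated sequence into full-size chunks, dropping the remainder
    input_batch = [
        all_ids[i : i + context_length]
        for i in range(0, len(all_ids) - context_length + 1, context_length)
    ]
    return {"input_ids": input_batch}


tokenized_datasets_concat = raw_datasets.map(
    tokenize_concat, batched=True, remove_columns=raw_datasets["train"].column_names
)
```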

Initializing a new model

Our first step is to freshly initialize a GPT-2 model. We’ll use the same configuration for our model as for the small GPT-2 model, so we load the pretrained configuration, make sure that the tokenizer size matches the model vocabulary size and pass the bos and eos (beginning and end of sequence) token IDs:

from transformers import AutoTokenizer, GPT2LMHeadModel, AutoConfig\n\nconfig = AutoConfig.from_pretrained(\n    \"gpt2\",\n    vocab_size=len(tokenizer),\n    n_ctx=context_length,\n    bos_token_id=tokenizer.bos_token_id,\n    eos_token_id=tokenizer.eos_token_id,\n)

With that configuration, we can load a new model. Note that this is the first time we don’t use the from_pretrained() function, since we’re actually initializing a model ourselves:

model = GPT2LMHeadModel(config)\nmodel_size = sum(t.numel() for t in model.parameters())\nprint(f\"GPT-2 size: {model_size/1000**2:.1f}M parameters\")
GPT-2 size: 124.2M parameters

Our model has 124M parameters that we’ll have to tune. Before we can start training, we need to set up a data collator that will take care of creating the batches. We can use the DataCollatorForLanguageModeling collator, which is designed specifically for language modeling (as the name subtly suggests). Besides stacking and padding batches, it also takes care of creating the language model labels — in causal language modeling the inputs serve as labels too (just shifted by one element), and this data collator creates them on the fly during training so we don’t need to duplicate the input_ids.

Note that DataCollatorForLanguageModeling supports both masked language modeling (MLM) and causal language modeling (CLM). By default it prepares data for MLM, but we can switch to CLM by setting the argument mlm=False:

from transformers import DataCollatorForLanguageModeling\n\ntokenizer.pad_token = tokenizer.eos_token\ndata_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

Let’s have a look at an example:

out = data_collator([tokenized_datasets[\"train\"][i] for i in range(5)])\nfor key in out:\n    print(f\"{key} shape: {out[key].shape}\")
input_ids shape: torch.Size([5, 128])\nattention_mask shape: torch.Size([5, 128])\nlabels shape: torch.Size([5, 128])

We can see that the examples have been stacked and all the tensors have the same shape.

⚠️ Shifting the inputs and labels to align them happens inside the model, so the data collator just copies the inputs to create the labels.

Now we have everything in place to actually train our model — that wasn’t so much work after all! Before we start training we should log in to Hugging Face. If you’re working in a notebook, you can do so with the following utility function:

from huggingface_hub import notebook_login\n\nnotebook_login()

This will display a widget where you can enter your Hugging Face login credentials.

If you aren’t working in a notebook, just type the following line in your terminal:

huggingface-cli login

All that’s left to do is configure the training arguments and fire up the Trainer. We’ll use a cosine learning rate schedule with some warmup and an effective batch size of 256 (per_device_train_batch_size * gradient_accumulation_steps). Gradient accumulation is used when a single batch does not fit into memory, and incrementally builds up the gradient through several forward/backward passes. We’ll see this in action when we create the training loop with 🤗 Accelerate.
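
To spell out that arithmetic (assuming a single GPU, so no extra factor for the number of processes):

```
per_device_train_batch_size = 32
gradient_accumulation_steps = 8
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 256 sequences, i.e. 256 * 128 = 32,768 tokens per update
```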

from transformers import Trainer, TrainingArguments\n\nargs = TrainingArguments(\n    output_dir=\"codeparrot-ds\",\n    per_device_train_batch_size=32,\n    per_device_eval_batch_size=32,\n    evaluation_strategy=\"steps\",\n    eval_steps=5_000,\n    logging_steps=5_000,\n    gradient_accumulation_steps=8,\n    num_train_epochs=1,\n    weight_decay=0.1,\n    warmup_steps=1_000,\n    lr_scheduler_type=\"cosine\",\n    learning_rate=5e-4,\n    save_steps=5_000,\n    fp16=True,\n    push_to_hub=True,\n)\n\ntrainer = Trainer(\n    model=model,\n    tokenizer=tokenizer,\n    args=args,\n    data_collator=data_collator,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"valid\"],\n)

Now we can just start the Trainer and wait for training to finish. Depending on whether you run it on the full or a subset of the training set this will take 20 or 2 hours, respectively, so grab a few coffees and a good book to read!

trainer.train()

After training completes, we can push the model and tokenizer to the Hub:

trainer.push_to_hub()

✏️ Try it out! It only took us about 30 lines of code in addition to the TrainingArguments to get from raw texts to training GPT-2. Try it out with your own dataset and see if you can get good results!

💡 If you have access to a machine with multiple GPUs, try to run the code there. The Trainer automatically manages multiple GPUs, and this can speed up training tremendously.

Code generation with a pipeline

Now is the moment of truth: let’s see how well the trained model actually works! We can see in the logs that the loss went down steadily, but to put the model to the test let’s take a look at how well it works on some prompts. To do that we’ll wrap the model in a text generation pipeline, and we’ll put it on the GPU for fast generations if there is one available:

import torch\nfrom transformers import pipeline\n\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\npipe = pipeline(\n    \"text-generation\", model=\"huggingface-course/codeparrot-ds\", device=device\n)

Let’s start with the simple task of creating a scatter plot:

txt = \"\"\"\\\n# create some data\nx = np.random.randn(100)\ny = np.random.randn(100)\n\n# create scatter plot with x, y\n\"\"\"\nprint(pipe(txt, num_return_sequences=1)[0][\"generated_text\"])
# create some data\nx = np.random.randn(100)\ny = np.random.randn(100)\n\n# create scatter plot with x, y\nplt.scatter(x, y)\n\n# create scatter

The result looks correct. Does it also work for a pandas operation? Let’s see if we can create a DataFrame from two arrays:

txt = \"\"\"\\\n# create some data\nx = np.random.randn(100)\ny = np.random.randn(100)\n\n# create dataframe from x and y\n\"\"\"\nprint(pipe(txt, num_return_sequences=1)[0][\"generated_text\"])
# create some data\nx = np.random.randn(100)\ny = np.random.randn(100)\n\n# create dataframe from x and y\ndf = pd.DataFrame({'x': x, 'y': y})\ndf.insert(0,'x', x)\nfor

Nice, that’s the correct answer — although it then inserts the column x again. Since the number of generated tokens is limited, the following for loop is cut off. Let’s see if we can do something a bit more complex and have the model help us use the groupby operation:

txt = \"\"\"\\\n# dataframe with profession, income and name\ndf = pd.DataFrame({'profession': x, 'income':y, 'name': z})\n\n# calculate the mean income per profession\n\"\"\"\nprint(pipe(txt, num_return_sequences=1)[0][\"generated_text\"])
# dataframe with profession, income and name\ndf = pd.DataFrame({'profession': x, 'income':y, 'name': z})\n\n# calculate the mean income per profession\nprofession = df.groupby(['profession']).mean()\n\n# compute the

Not bad; that’s the right way to do it. Finally, let’s see if we can also use it for scikit-learn and set up a Random Forest model:

txt = \"\"\"\n# import random forest regressor from scikit-learn\nfrom sklearn.ensemble import RandomForestRegressor\n\n# fit random forest model with 300 estimators on X, y:\n\"\"\"\nprint(pipe(txt, num_return_sequences=1)[0][\"generated_text\"])
# import random forest regressor from scikit-learn\nfrom sklearn.ensemble import RandomForestRegressor\n\n# fit random forest model with 300 estimators on X, y:\nrf = RandomForestRegressor(n_estimators=300, random_state=random_state, max_depth=3)\nrf.fit(X, y)\nrf

Looking at these few examples, it seems that the model has learned some of the syntax of the Python data science stack (of course, we would need to evaluate it more thoroughly before deploying the model in the real world). Sometimes it requires more customization of the model training to achieve the necessary performance for a given use case, however. For example, what if we would like to dynamically update the batch size or have a conditional training loop that skips bad examples on the fly? One option would be to subclass the Trainer and add the necessary changes, but sometimes it’s simpler to write the training loop from scratch. That’s where 🤗 Accelerate comes in.

Training with 🤗 Accelerate

We’ve seen how to train a model with the Trainer, which can allow for some customization. However, sometimes we want full control over the training loop, or we want to make some exotic changes. In this case 🤗 Accelerate is a great choice, and in this section we’ll go through the steps to use it to train our model. To make things more interesting, we’ll also add a twist to the training loop.

Since we are mainly interested in sensible autocompletion for the data science libraries, it makes sense to give more weight to training samples that make heavier use of these libraries. We can easily identify these examples through the use of keywords such as plt, pd, sk, fit, and predict, which are the most frequent import names for matplotlib.pyplot, pandas, and sklearn, as well as the fit/predict pattern of the latter. If these are each represented as a single token, we can easily check if they occur in the input sequence. Tokens can have a whitespace prefix, so we’ll also check for those versions in the tokenizer vocabulary. To verify that it works, we’ll add one test token which should be split into multiple tokens:

keytoken_ids = []\nfor keyword in [\n    \"plt\",\n    \"pd\",\n    \"sk\",\n    \"fit\",\n    \"predict\",\n    \" plt\",\n    \" pd\",\n    \" sk\",\n    \" fit\",\n    \" predict\",\n    \"testtest\",\n]:\n    ids = tokenizer([keyword]).input_ids[0]\n    if len(ids) == 1:\n        keytoken_ids.append(ids[0])\n    else:\n        print(f\"Keyword is not a single token: {keyword}\")
'Keyword is not a single token: testtest'

Great, that seems to work nicely! We can now write a custom loss function that takes the input sequence, the logits, and the key tokens we just selected as inputs. First we need to align the logits and inputs: the input sequence shifted by one to the right forms the labels, since the next token is the label for the current token. We can achieve this by starting the labels from the second token of the input sequence, since the model does not make a prediction for the first token anyway. Then we cut off the last logit, as we don’t have a label for the token that follows the full input sequence. With that we can compute the loss per sample and count the occurrences of all keywords in each sample. Finally, we calculate the weighted average over all samples using the occurrences as weights. Since we don’t want to throw away all the samples that have no keywords, we add 1 to the weights:

from torch.nn import CrossEntropyLoss\nimport torch\n\n\ndef keytoken_weighted_loss(inputs, logits, keytoken_ids, alpha=1.0):\n    # Shift so that tokens < n predict n\n    shift_labels = inputs[..., 1:].contiguous()\n    shift_logits = logits[..., :-1, :].contiguous()\n    # Calculate per-token loss (reduction=\"none\" keeps one value per token)\n    loss_fct = CrossEntropyLoss(reduction=\"none\")\n    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))\n    # Resize and average loss per sample\n    loss_per_sample = loss.view(shift_logits.size(0), shift_logits.size(1)).mean(axis=1)\n    # Calculate and scale weighting\n    weights = torch.stack([(inputs == kt).float() for kt in keytoken_ids]).sum(\n        axis=[0, 2]\n    )\n    weights = alpha * (1.0 + weights)\n    # Calculate weighted average\n    weighted_loss = (loss_per_sample * weights).mean()\n    return weighted_loss
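
As an aside, if you would rather keep using the Trainer from the previous section, one way to plug in a loss like this is to override compute_loss in a subclass. Here is a rough sketch (KeytokenTrainer is our own name, not part of 🤗 Transformers):

```
from transformers import Trainer


class KeytokenTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Use the weighted loss defined above instead of the model's built-in loss
        outputs = model(inputs["input_ids"])
        loss = keytoken_weighted_loss(inputs["input_ids"], outputs.logits, keytoken_ids)
        return (loss, outputs) if return_outputs else loss
```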

Before we can start training with this awesome new loss function, we need to prepare a few things:

  • We need dataloaders to load the data in batches.
  • We need to set up weight decay parameters.
  • From time to time we want to evaluate, so it makes sense to wrap the evaluation code in a function.

Let’s start with the dataloaders. We only need to set the dataset’s format to \"torch\", and then we can pass it to a PyTorch DataLoader with the appropriate batch size:

from torch.utils.data.dataloader import DataLoader\n\ntokenized_datasets.set_format(\"torch\")\ntrain_dataloader = DataLoader(tokenized_datasets[\"train\"], batch_size=32, shuffle=True)\neval_dataloader = DataLoader(tokenized_datasets[\"valid\"], batch_size=32)

Next, we group the parameters so that the optimizer knows which ones will get an additional weight decay. Usually, all bias and LayerNorm weight parameters are exempt from this; here’s how we can do this:

weight_decay = 0.1\n\n\ndef get_grouped_params(model, no_decay=[\"bias\", \"LayerNorm.weight\"]):\n    params_with_wd, params_without_wd = [], []\n    for n, p in model.named_parameters():\n        if any(nd in n for nd in no_decay):\n            params_without_wd.append(p)\n        else:\n            params_with_wd.append(p)\n    return [\n        {\"params\": params_with_wd, \"weight_decay\": weight_decay},\n        {\"params\": params_without_wd, \"weight_decay\": 0.0},\n    ]
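
A quick sanity check we might run to see how the parameters are split between the two groups:

```
grouped_params = get_grouped_params(model)
print(len(grouped_params[0]["params"]), "tensors with weight decay")
print(len(grouped_params[1]["params"]), "tensors without weight decay")
```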

Since we want to evaluate the model regularly on the validation set during training, let’s write a function for that as well. It just runs through the evaluation dataloader and gathers all the losses across processes:

def evaluate():\n    model.eval()\n    losses = []\n    for step, batch in enumerate(eval_dataloader):\n        with torch.no_grad():\n            outputs = model(batch[\"input_ids\"], labels=batch[\"input_ids\"])\n\n        losses.append(accelerator.gather(outputs.loss))\n    loss = torch.mean(torch.cat(losses))\n    try:\n        perplexity = torch.exp(loss)\n    except OverflowError:\n        perplexity = float(\"inf\")\n    return loss.item(), perplexity.item()

With the evaluate() function we can report loss and perplexity at regular intervals. Next, we redefine our model to make sure we train from scratch again:

model = GPT2LMHeadModel(config)

We can then define our optimizer, using the function from before to split the parameters for weight decay:

from torch.optim import AdamW\n\noptimizer = AdamW(get_grouped_params(model), lr=5e-4)

Now let’s prepare the model, optimizer, and dataloaders so we can start training:

from accelerate import Accelerator\n\naccelerator = Accelerator(fp16=True)\n\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n    model, optimizer, train_dataloader, eval_dataloader\n)

🚨 If you’re training on a TPU, you’ll need to move all the code starting at the cell above into a dedicated training function. See Chapter 3 for more details.
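
Since the effective batch size depends on the number of processes, it can be useful to check what hardware Accelerate has picked up:

```
# Device and number of processes detected by Accelerate
print(accelerator.device, accelerator.num_processes)
```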

Now that we have sent our train_dataloader to accelerator.prepare(), we can use its length to compute the number of training steps. Remember that we should always do this after preparing the dataloader, as that method will change its length. We use a classic linear schedule that decays the learning rate to 0:

from transformers import get_scheduler\n\nnum_train_epochs = 1\nnum_update_steps_per_epoch = len(train_dataloader)\nnum_training_steps = num_train_epochs * num_update_steps_per_epoch\n\nlr_scheduler = get_scheduler(\n    name=\"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=1_000,\n    num_training_steps=num_training_steps,\n)

Lastly, to push our model to the Hub, we will need to create a Repository object in a working folder. First log in to the Hugging Face Hub, if you aren’t logged in already. We’ll determine the repository name from the model ID we want to give our model (feel free to replace the repo_name with your own choice; it just needs to contain your username, which is what the function get_full_repo_name() does):

from huggingface_hub import Repository, get_full_repo_name\n\nmodel_name = \"codeparrot-ds-accelerate\"\nrepo_name = get_full_repo_name(model_name)\nrepo_name
'sgugger/codeparrot-ds-accelerate'

Then we can clone that repository in a local folder. If it already exists, this local folder should be an existing clone of the repository we are working with:

output_dir = \"codeparrot-ds-accelerate\"\nrepo = Repository(output_dir, clone_from=repo_name)

We can now upload anything we save in output_dir by calling the repo.push_to_hub() method. This will help us upload the intermediate models at the end of each epoch.

Before we train, let’s run a quick test to see if the evaluation function works properly:

evaluate()
(10.934126853942871, 56057.14453125)
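
As a rough plausibility check (assuming a vocabulary on the order of 50,000 tokens), an untrained model should have a loss close to the entropy of a uniform distribution over the vocabulary:

```
import math

# Expected loss of a randomly initialized model: ln(vocab_size)
print(math.log(len(tokenizer)))  # ~10.8, so a loss of ~10.9 is in the right ballpark
```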

Those are very high values for loss and perplexity, but that’s not surprising as we haven’t trained the model yet. With that, we have everything prepared to write the core part of the training script: the training loop. In the training loop we iterate over the dataloader and pass the batches to the model. With the logits, we can then evaluate our custom loss function. We scale the loss by the number of gradient accumulation steps so as not to create larger losses when aggregating more steps. Before we optimize, we also clip the gradients for better convergence. Finally, every few steps we evaluate the model on the evaluation set with our new evaluate() function:

from tqdm.notebook import tqdm\n\ngradient_accumulation_steps = 8\neval_steps = 5_000\n\nmodel.train()\ncompleted_steps = 0\nfor epoch in range(num_train_epochs):\n    for step, batch in tqdm(\n        enumerate(train_dataloader, start=1), total=num_training_steps\n    ):\n        logits = model(batch[\"input_ids\"]).logits\n        loss = keytoken_weighted_loss(batch[\"input_ids\"], logits, keytoken_ids)\n        if step % 100 == 0:\n            accelerator.print(\n                {\n                    \"lr\": get_lr(),\n                    \"samples\": step * samples_per_step,\n                    \"steps\": completed_steps,\n                    \"loss/train\": loss.item() * gradient_accumulation_steps,\n                }\n            )\n        loss = loss / gradient_accumulation_steps\n        accelerator.backward(loss)\n        if step % gradient_accumulation_steps == 0:\n            accelerator.clip_grad_norm_(model.parameters(), 1.0)\n            optimizer.step()\n            lr_scheduler.step()\n            optimizer.zero_grad()\n            completed_steps += 1\n        if (step % (eval_steps * gradient_accumulation_steps)) == 0:\n            eval_loss, perplexity = evaluate()\n            accelerator.print({\"loss/eval\": eval_loss, \"perplexity\": perplexity})\n            model.train()\n            accelerator.wait_for_everyone()\n            unwrapped_model = accelerator.unwrap_model(model)\n            unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)\n            if accelerator.is_main_process:\n                tokenizer.save_pretrained(output_dir)\n                repo.push_to_hub(\n                    commit_message=f\"Training in progress step {step}\", blocking=False\n                )
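
Note that the loop above logs with get_lr() and samples_per_step, which are never defined in this section. A minimal sketch of definitions you could add before the loop (the exact formulation is our own assumption, based on the per-device batch size of 32 used for the dataloaders):

```
# Sequences processed per dataloader step across all processes (assumption: batch size 32)
samples_per_step = accelerator.state.num_processes * 32


def get_lr():
    # Current learning rate(s) of the optimizer, for logging
    return [param_group["lr"] for param_group in optimizer.param_groups]
```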

And that’s it — you now have your own custom training loop for causal language models such as GPT-2 that you can further customize to your needs.

✏️ Try it out! Either create your own custom loss function tailored to your use case, or add another custom step into the training loop.
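If you want a starting point for the first exercise, here is a minimal sketch of an alternative loss function: a hypothetical length-normalized cross-entropy that treats every sequence equally instead of up-weighting key tokens. It follows the same label-shifting logic as `keytoken_weighted_loss()`:

```
import torch
from torch.nn import CrossEntropyLoss


def length_normalized_loss(inputs, logits):
    # Shift so that tokens < n predict token n, as in standard causal language modeling
    shift_labels = inputs[..., 1:].contiguous()
    shift_logits = logits[..., :-1, :].contiguous()
    # Keep the per-token losses unreduced so we can average per sequence first
    loss_fct = CrossEntropyLoss(reduction="none")
    loss_per_token = loss_fct(
        shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
    )
    # Average over the tokens of each sequence, then over the batch
    loss_per_sample = loss_per_token.view(shift_labels.size()).mean(dim=1)
    return loss_per_sample.mean()
```

You could then swap the call in the training loop to `loss = length_normalized_loss(batch["input_ids"], logits)`.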

✏️ Try it out! When running long training experiments it’s a good idea to log important metrics using tools such as TensorBoard or Weights & Biases. Add proper logging to the training loop so you can always check how the training is going.
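For the second exercise, a minimal sketch with TensorBoard could look like the following (the log directory name is an arbitrary choice, and the helper function is hypothetical):

```
from torch.utils.tensorboard import SummaryWriter

# Create a single writer on the main process so multi-GPU runs don't log twice
writer = SummaryWriter(log_dir="runs/codeparrot-ds") if accelerator.is_main_process else None


def log_metrics(step, metrics):
    # Write each metric as a scalar; this is a no-op on non-main processes
    if writer is not None:
        for name, value in metrics.items():
            writer.add_scalar(name, value, step)
```

Inside the training loop you could then call `log_metrics()` with the same dictionaries that are currently passed to `accelerator.print()`.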

## [](#summarization)Summarization

In this section we’ll take a look at how Transformer models can be used to condense long documents into summaries, a task known as _text summarization_. This is one of the most challenging NLP tasks as it requires a range of abilities, such as understanding long passages and generating coherent text that captures the main topics in a document. However, when done well, text summarization is a powerful tool that can speed up various business processes by relieving domain experts of the burden of reading long documents in detail.

Although there already exist various fine-tuned models for summarization on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=summarization&sort=downloads), almost all of these are only suitable for English documents. So, to add a twist in this section, we’ll train a bilingual model for English and Spanish. By the end of this section, you’ll have a [model](https://huggingface.co/huggingface-course/mt5-small-finetuned-amazon-en-es) that can summarize customer reviews like the one shown here:

As we’ll see, these summaries are concise because they’re learned from the titles that customers provide in their product reviews.
Let’s start by putting together a suitable bilingual corpus for this task.

## [](#preparing-a-multilingual-corpus)Preparing a multilingual corpus

We’ll use the [Multilingual Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi) to create our bilingual summarizer. This corpus consists of Amazon product reviews in six languages and is typically used to benchmark multilingual classifiers. However, since each review is accompanied by a short title, we can use the titles as the target summaries for our model to learn from! To get started, let’s download the English and Spanish subsets from the Hugging Face Hub:

```
from datasets import load_dataset

spanish_dataset = load_dataset("amazon_reviews_multi", "es")
english_dataset = load_dataset("amazon_reviews_multi", "en")
english_dataset
```

```
DatasetDict({
    train: Dataset({
        features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],
        num_rows: 200000
    })
    validation: Dataset({
        features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],
        num_rows: 5000
    })
    test: Dataset({
        features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],
        num_rows: 5000
    })
})
```

As you can see, for each language there are 200,000 reviews for the `train` split, and 5,000 reviews for each of the `validation` and `test` splits. The review information we are interested in is contained in the `review_body` and `review_title` columns. Let’s take a look at a few examples by creating a simple function that takes a random sample from the training set with the techniques we learned in [Chapter 5](/course/chapter5):

```
def show_samples(dataset, num_samples=3, seed=42):
    sample = dataset["train"].shuffle(seed=seed).select(range(num_samples))
    for example in sample:
        print(f"\n'>> Title: {example['review_title']}'")
        print(f"'>> Review: {example['review_body']}'")


show_samples(english_dataset)
```

```
'>> Title: Worked in front position, not rear'
'>> Review: 3 stars because these are not rear brakes as stated in the item description. At least the mount adapter only worked on the front fork of the bike that I got it for.'

'>> Title: meh'
'>> Review: Does it’s job and it’s gorgeous but mine is falling apart, I had to basically put it together again with hot glue'

'>> Title: Can\'t beat these for the money'
'>> Review: Bought this for handling miscellaneous aircraft parts and hanger "stuff" that I needed to organize; it really fit the bill. The unit arrived quickly, was well packaged and arrived intact (always a good sign). There are five wall mounts-- three on the top and two on the bottom. I wanted to mount it on the wall, so all I had to do was to remove the top two layers of plastic drawers, as well as the bottom corner drawers, place it when I wanted and mark it; I then used some of the new plastic screw in wall anchors (the 50 pound variety) and it easily mounted to the wall. Some have remarked that they wanted dividers for the drawers, and that they made those. Good idea. My application was that I needed something that I can see the contents at about eye level, so I wanted the fuller-sized drawers. I also like that these are the new plastic that doesn\'t get brittle and split like my older plastic drawers did. I like the all-plastic construction. It\'s heavy duty enough to hold metal parts, but being made of plastic it\'s not as heavy as a metal frame, so you can easily mount it to the wall and still load it up with heavy stuff, or light stuff. No problem there. For the money, you can\'t beat it. Best one of these I\'ve bought to date-- and I\'ve been using some version of these for over forty years.'
```

✏️ **Try it out!** Change the random seed in the `Dataset.shuffle()` command to explore other reviews in the corpus. If you’re a Spanish speaker, take a look at some of the reviews in `spanish_dataset` to see if the titles also seem like reasonable summaries.

This sample shows the diversity of reviews one typically finds online, ranging from positive to negative (and everything in between!). Although the example with the “meh” title is not very informative, the other titles look like decent summaries of the reviews themselves. Training a summarization model on all 400,000 reviews would take far too long on a single GPU, so instead we’ll focus on generating summaries for a single domain of products. To get a feel for what domains we can choose from, let’s convert `english_dataset` to a `pandas.DataFrame` and compute the number of reviews per product category:

```
english_dataset.set_format("pandas")
english_df = english_dataset["train"][:]
# Show counts for top 20 products
english_df["product_category"].value_counts()[:20]
```

```
home                      17679
apparel                   15951
wireless                  15717
other                     13418
beauty                    12091
drugstore                 11730
kitchen                   10382
toy                        8745
sports                     8277
automotive                 7506
lawn_and_garden            7327
home_improvement           7136
pet_products               7082
digital_ebook_purchase     6749
pc                         6401
electronics                6186
office_product             5521
shoes                      5197
grocery                    4730
book                       3756
Name: product_category, dtype: int64
```

The most popular products in the English dataset are about household items, clothing, and wireless electronics. To stick with the Amazon theme, though, let’s focus on summarizing book reviews — after all, this is what the company was founded on! We can see two product categories that fit the bill (`book` and `digital_ebook_purchase`), so let’s filter the datasets in both languages for just these products. As we saw in [Chapter 5](/course/chapter5), the `Dataset.filter()` function allows us to slice a dataset very efficiently, so we can define a simple function to do this:

```
def filter_books(example):
    return (
        example["product_category"] == "book"
        or example["product_category"] == "digital_ebook_purchase"
    )
```

Now when we apply this function to `english_dataset` and `spanish_dataset`, the result will contain just those rows involving the book categories. Before applying the filter, let’s switch the format of `english_dataset` from `"pandas"` back to `"arrow"`:

```
english_dataset.reset_format()
```

We can then apply the filter function, and as a sanity check let’s inspect a sample of reviews to see if they are indeed about books:

```
spanish_books = spanish_dataset.filter(filter_books)
english_books = english_dataset.filter(filter_books)
show_samples(english_books)
```

```
'>> Title: I\'m dissapointed.'
'>> Review: I guess I had higher expectations for this book from the reviews. I really thought I\'d at least like it. The plot idea was great. I loved Ash but, it just didnt go anywhere. Most of the book was about their radio show and talking to callers. I wanted the author to dig deeper so we could really get to know the characters. All we know about Grace is that she is attractive looking, Latino and is kind of a brat. I\'m dissapointed.'

'>> Title: Good art, good price, poor design'
'>> Review: I had gotten the DC Vintage calendar the past two years, but it was on backorder forever this year and I saw they had shrunk the dimensions for no good reason. This one has good art choices but the design has the fold going through the picture, so it\'s less aesthetically pleasing, especially if you want to keep a picture to hang. For the price, a good calendar'

'>> Title: Helpful'
'>> Review: Nearly all the tips useful and. I consider myself an intermediate to advanced user of OneNote. I would highly recommend.'
```

Okay, we can see that the reviews are not strictly about books and might refer to things like calendars and electronic applications such as OneNote. Nevertheless, the domain seems about right to train a summarization model on. Before we look at various models that are suitable for this task, we have one last bit of data preparation to do: combining the English and Spanish reviews as a single `DatasetDict` object. 🤗 Datasets provides a handy `concatenate_datasets()` function that (as the name suggests) will stack two `Dataset` objects on top of each other. So, to create our bilingual dataset, we’ll loop over each split, concatenate the datasets for that split, and shuffle the result to ensure our model doesn’t overfit to a single language:

```
from datasets import concatenate_datasets, DatasetDict

books_dataset = DatasetDict()

for split in english_books.keys():
    books_dataset[split] = concatenate_datasets(
        [english_books[split], spanish_books[split]]
    )
    books_dataset[split] = books_dataset[split].shuffle(seed=42)


show_samples(books_dataset)
```

```
'>> Title: Easy to follow!!!!'
'>> Review: I loved The dash diet weight loss Solution. Never hungry. I would recommend this diet. Also the menus are well rounded. Try it. Has lots of the information need thanks.'

'>> Title: PARCIALMENTE DAÑADO'
'>> Review: Me llegó el día que tocaba, junto a otros libros que pedí, pero la caja llegó en mal estado lo cual dañó las esquinas de los libros porque venían sin protección (forro).'

'>> Title: no lo he podido descargar'
'>> Review: igual que el anterior'
```

This certainly looks like a mix of English and Spanish reviews! Now that we have a training corpus, one final thing to check is the distribution of words in the reviews and their titles. This is especially important for summarization tasks, where short reference summaries in the data can bias the model to only output one or two words in the generated summaries. The plots below show the word distributions, and we can see that the titles are heavily skewed toward just 1-2 words:

![Word count distributions for the review titles and texts.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/review-lengths.svg) ![Word count distributions for the review titles and texts.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/review-lengths-dark.svg)

To deal with this, we’ll filter out the examples with very short titles so that our model can produce more interesting summaries.
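If you’d like to inspect this skew yourself, here is a minimal sketch that counts the words per title by splitting on whitespace and plots a histogram (it assumes `matplotlib` is installed):

```
import matplotlib.pyplot as plt

# Number of whitespace-separated words in each training title
title_lengths = [len(title.split()) for title in books_dataset["train"]["review_title"]]
plt.hist(title_lengths, bins=range(1, 21))
plt.xlabel("Words per title")
plt.ylabel("Number of reviews")
plt.show()
```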
Since we’re dealing with English and Spanish texts, we can use a rough heuristic to split the titles on whitespace and then use our trusty `Dataset.filter()` method as follows:

```
books_dataset = books_dataset.filter(lambda x: len(x["review_title"].split()) > 2)
```

Now that we’ve prepared our corpus, let’s take a look at a few possible Transformer models that one might fine-tune on it!

## [](#models-for-text-summarization)Models for text summarization

If you think about it, text summarization is a similar sort of task to machine translation: we have a body of text like a review that we’d like to “translate” into a shorter version that captures the salient features of the input. Accordingly, most Transformer models for summarization adopt the encoder-decoder architecture that we first encountered in [Chapter 1](/course/chapter1), although there are some exceptions like the GPT family of models which can also be used for summarization in few-shot settings. The following table lists some popular pretrained models that can be fine-tuned for summarization.

| Transformer model | Description | Multilingual? |
| --- | --- | --- |
| [GPT-2](https://huggingface.co/gpt2-xl) | Although trained as an auto-regressive language model, you can make GPT-2 generate summaries by appending “TL;DR” at the end of the input text. | ❌ |
| [PEGASUS](https://huggingface.co/google/pegasus-large) | Uses a pretraining objective to predict masked sentences in multi-sentence texts. This pretraining objective is closer to summarization than vanilla language modeling and scores highly on popular benchmarks. | ❌ |
| [T5](https://huggingface.co/t5-base) | A universal Transformer architecture that formulates all tasks in a text-to-text framework; e.g., the input format for the model to summarize a document is `summarize: ARTICLE`. | ❌ |
| [mT5](https://huggingface.co/google/mt5-base) | A multilingual version of T5, pretrained on the multilingual Common Crawl corpus (mC4), covering 101 languages. | ✅ |
| [BART](https://huggingface.co/facebook/bart-base) | A novel Transformer architecture with both an encoder and a decoder stack trained to reconstruct corrupted input that combines the pretraining schemes of BERT and GPT-2. | ❌ |
| [mBART-50](https://huggingface.co/facebook/mbart-large-50) | A multilingual version of BART, pretrained on 50 languages. | ✅ |

As you can see from this table, the majority of Transformer models for summarization (and indeed most NLP tasks) are monolingual. This is great if your task is in a “high-resource” language like English or German, but less so for the thousands of other languages in use across the world. Fortunately, there is a class of multilingual Transformer models, like mT5 and mBART, that come to the rescue. These models are pretrained using language modeling, but with a twist: instead of training on a corpus of one language, they are trained jointly on texts in over 50 languages at once!

We’ll focus on mT5, an interesting architecture based on T5 that was pretrained in a text-to-text framework. In T5, every NLP task is formulated in terms of a prompt prefix like `summarize:` which conditions the model to adapt the generated text to the prompt. As shown in the figure below, this makes T5 extremely versatile, as you can solve many tasks with a single model!

![Different tasks performed by the T5 architecture.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/t5.svg) ![Different tasks performed by the T5 architecture.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/t5-dark.svg)

mT5 doesn’t use prefixes, but shares much of the versatility of T5 and has the advantage of being multilingual. Now that we’ve picked a model, let’s take a look at preparing our data for training.

✏️ **Try it out!** Once you’ve worked through this section, see how well mT5 compares to mBART by fine-tuning the latter with the same techniques. For bonus points, you can also try fine-tuning T5 on just the English reviews. Since T5 has a special prefix prompt, you’ll need to prepend `summarize:` to the input examples in the preprocessing steps below.

## [](#preprocessing-the-data)Preprocessing the data

Our next task is to tokenize and encode our reviews and their titles. As usual, we begin by loading the tokenizer associated with the pretrained model checkpoint. We’ll use `mt5-small` as our checkpoint so we can fine-tune the model in a reasonable amount of time:

```
from transformers import AutoTokenizer

model_checkpoint = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```

💡 In the early stages of your NLP projects, a good practice is to train a class of “small” models on a small sample of data. This allows you to debug and iterate faster toward an end-to-end workflow. Once you are confident in the results, you can always scale up the model by simply changing the model checkpoint!

Let’s test out the mT5 tokenizer on a small example:

```
inputs = tokenizer("I loved reading the Hunger Games!")
inputs
```

```
{'input_ids': [336, 259, 28387, 11807, 287, 62893, 295, 12507, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

Here we can see the familiar `input_ids` and `attention_mask` that we encountered in our first fine-tuning experiments back in [Chapter 3](/course/chapter3). Let’s decode these input IDs with the tokenizer’s `convert_ids_to_tokens()` function to see what kind of tokenizer we’re dealing with:

```
tokenizer.convert_ids_to_tokens(inputs.input_ids)
```

```
['▁I', '▁', 'loved', '▁reading', '▁the', '▁Hung', 'er', '▁Games', '</s>']
```

The special Unicode character `▁` and end-of-sequence token `</s>` indicate that we’re dealing with the SentencePiece tokenizer, which is based on the Unigram segmentation algorithm discussed in [Chapter 6](/course/chapter6). Unigram is especially useful for multilingual corpora since it allows SentencePiece to be agnostic about accents, punctuation, and the fact that many languages, like Japanese, do not have whitespace characters.

To tokenize our corpus, we have to deal with a subtlety associated with summarization: because our labels are also text, it is possible that they exceed the model’s maximum context size. This means we need to apply truncation to both the reviews and their titles to ensure we don’t pass excessively long inputs to our model. The tokenizers in 🤗 Transformers provide a nifty `text_target` argument that allows you to tokenize the labels in parallel to the inputs.
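As a quick illustration of that argument, passing `text_target` returns the tokenized targets under a `labels` key. This is a minimal sketch with made-up strings; the preprocessing function below achieves the same result with a separate tokenizer call:

```
# Sketch with made-up strings (not taken from the dataset)
features = tokenizer("I loved reading the Hunger Games!", text_target="A great read")
list(features.keys())  # ['input_ids', 'attention_mask', 'labels']
```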
Here is an example of how the inputs and targets are processed for mT5:

```
max_input_length = 512
max_target_length = 30


def preprocess_function(examples):
    model_inputs = tokenizer(
        examples["review_body"],
        max_length=max_input_length,
        truncation=True,
    )
    labels = tokenizer(
        examples["review_title"], max_length=max_target_length, truncation=True
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```

Let’s walk through this code to understand what’s happening. The first thing we’ve done is define values for `max_input_length` and `max_target_length`, which set the upper limits for how long our reviews and titles can be. Since the review body is typically much larger than the title, we’ve scaled these values accordingly.

With `preprocess_function()`, it is then a simple matter to tokenize the whole corpus using the handy `Dataset.map()` function we’ve used extensively throughout this course:

```
tokenized_datasets = books_dataset.map(preprocess_function, batched=True)
```

Now that the corpus has been preprocessed, let’s take a look at some metrics that are commonly used for summarization. As we’ll see, there is no silver bullet when it comes to measuring the quality of machine-generated text.

💡 You may have noticed that we used `batched=True` in our `Dataset.map()` function above. This encodes the examples in batches of 1,000 (the default) and allows you to make use of the multithreading capabilities of the fast tokenizers in 🤗 Transformers. Where possible, try using `batched=True` to get the most out of your preprocessing!

## [](#metrics-for-text-summarization)Metrics for text summarization

In comparison to most of the other tasks we’ve covered in this course, measuring the performance of text generation tasks like summarization or translation is not as straightforward. For example, given a review like “I loved reading the Hunger Games”, there are multiple valid summaries, like “I loved the Hunger Games” or “Hunger Games is a great read”. Clearly, applying some sort of exact match between the generated summary and the label is not a good solution — even humans would fare poorly under such a metric, because we all have our own writing style.

For summarization, one of the most commonly used metrics is the [ROUGE score](https://en.wikipedia.org/wiki/ROUGE_(metric)) (short for Recall-Oriented Understudy for Gisting Evaluation). The basic idea behind this metric is to compare a generated summary against a set of reference summaries that are typically created by humans. To make this more precise, suppose we want to compare the following two summaries:

```
generated_summary = "I absolutely loved reading the Hunger Games"
reference_summary = "I loved reading the Hunger Games"
```

One way to compare them could be to count the number of overlapping words, which in this case would be 6. However, this is a bit crude, so instead ROUGE is based on computing the _precision_ and _recall_ scores for the overlap.

🙋 Don’t worry if this is the first time you’ve heard of precision and recall — we’ll go through some explicit examples together to make it all clear. These metrics are usually encountered in classification tasks, so if you want to understand how precision and recall are defined in that context, we recommend checking out the `scikit-learn` [guides](https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html).

For ROUGE, recall measures how much of the reference summary is captured by the generated one. If we are just comparing words, recall can be calculated according to the following formula:

$$\mathrm{Recall} = \frac{\mathrm{Number\,of\,overlapping\,words}}{\mathrm{Total\,number\,of\,words\,in\,reference\,summary}}$$

For our simple example above, this formula gives a perfect recall of 6/6 = 1; i.e., all the words in the reference summary have been produced by the model. This may sound great, but imagine if our generated summary had been “I really really loved reading the Hunger Games all night”. This would also have perfect recall, but is arguably a worse summary since it is verbose. To deal with these scenarios we also compute the precision, which in the ROUGE context measures how much of the generated summary was relevant:

$$\mathrm{Precision} = \frac{\mathrm{Number\,of\,overlapping\,words}}{\mathrm{Total\,number\,of\,words\,in\,generated\,summary}}$$

Applying this to our verbose summary gives a precision of 6/10 = 0.6, which is considerably worse than the precision of 6/7 = 0.86 obtained by our shorter one. In practice, both precision and recall are usually computed, and then the F1-score (the harmonic mean of precision and recall) is reported. We can do this easily in 🤗 Datasets by first installing the `rouge_score` package (for example with `pip install rouge_score`) and then loading the ROUGE metric as follows:

```
import evaluate

rouge_score = evaluate.load("rouge")
```

Then we can use the `rouge_score.compute()` function to calculate all the metrics at once:

```
scores = rouge_score.compute(
    predictions=[generated_summary], references=[reference_summary]
)
scores
```

```
{'rouge1': AggregateScore(low=Score(precision=0.86, recall=1.0, fmeasure=0.92), mid=Score(precision=0.86, recall=1.0, fmeasure=0.92), high=Score(precision=0.86, recall=1.0, fmeasure=0.92)),
 'rouge2': AggregateScore(low=Score(precision=0.67, recall=0.8, fmeasure=0.73), mid=Score(precision=0.67, recall=0.8, fmeasure=0.73), high=Score(precision=0.67, recall=0.8, fmeasure=0.73)),
 'rougeL': AggregateScore(low=Score(precision=0.86, recall=1.0, fmeasure=0.92), mid=Score(precision=0.86, recall=1.0, fmeasure=0.92), high=Score(precision=0.86, recall=1.0, fmeasure=0.92)),
 'rougeLsum': AggregateScore(low=Score(precision=0.86, recall=1.0, fmeasure=0.92), mid=Score(precision=0.86, recall=1.0, fmeasure=0.92), high=Score(precision=0.86, recall=1.0, fmeasure=0.92))}
```

Whoa, there’s a lot of information in that output — what does it all mean? First, 🤗 Datasets actually computes confidence intervals for precision, recall, and F1-score; these are the `low`, `mid`, and `high` attributes you can see here. Moreover, 🤗 Datasets computes a variety of ROUGE scores which are based on different types of text granularity when comparing the generated and reference summaries.
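Before unpacking each variant, here is a minimal sketch that recomputes the word-overlap precision and recall from the formulas above, so you can compare against what 🤗 Datasets reports:

```
gen_words = generated_summary.split()
ref_words = reference_summary.split()

# The set intersection works here because no word is repeated in either summary
overlap = len(set(gen_words) & set(ref_words))
precision = overlap / len(gen_words)  # 6 / 7 ≈ 0.86
recall = overlap / len(ref_words)  # 6 / 6 = 1.0
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.92
```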
The `rouge1` variant is the overlap of unigrams — this is just a fancy way of saying the overlap of words and is exactly the metric we’ve discussed above. To verify this, let’s pull out the `mid` value of our scores (`scores["rouge1"].mid`):

```
Score(precision=0.86, recall=1.0, fmeasure=0.92)
```

Great, the precision and recall numbers match up! Now what about those other ROUGE scores? `rouge2` measures the overlap between bigrams (think the overlap of pairs of words), while `rougeL` and `rougeLsum` measure the longest matching sequences of words by looking for the longest common substrings in the generated and reference summaries. The “sum” in `rougeLsum` refers to the fact that this metric is computed over a whole summary, while `rougeL` is computed as the average over individual sentences.

✏️ **Try it out!** Create your own example of a generated and reference summary and see if the resulting ROUGE scores agree with a manual calculation based on the formulas for precision and recall. For bonus points, split the text into bigrams and compare the precision and recall for the `rouge2` metric.

We’ll use these ROUGE scores to track the performance of our model, but before doing that let’s do something every good NLP practitioner should do: create a strong, yet simple baseline!

### [](#creating-a-strong-baseline)Creating a strong baseline

A common baseline for text summarization is to simply take the first three sentences of an article, often called the _lead-3_ baseline. We could use full stops to track the sentence boundaries, but this will fail on acronyms like “U.S.” or “U.N.” — so instead we’ll use the `nltk` library, which includes a better algorithm to handle these cases. You can install the package with `pip install nltk` and then download the punctuation rules:

```
import nltk

nltk.download("punkt")
```

Next, we import the sentence tokenizer from `nltk` and create a simple function to extract the first three sentences in a review. The convention in text summarization is to separate each summary with a newline, so let’s also include this and test it on a training example:

```
from nltk.tokenize import sent_tokenize


def three_sentence_summary(text):
    return "\n".join(sent_tokenize(text)[:3])


print(three_sentence_summary(books_dataset["train"][1]["review_body"]))
```

```
'I grew up reading Koontz, and years ago, I stopped,convinced i had "outgrown" him.'
'Still,when a friend was looking for something suspenseful too read, I suggested Koontz.'
'She found Strangers.'
```

This seems to work, so let’s now implement a function that extracts these “summaries” from a dataset and computes the ROUGE scores for the baseline:

```
def evaluate_baseline(dataset, metric):
    summaries = [three_sentence_summary(text) for text in dataset["review_body"]]
    return metric.compute(predictions=summaries, references=dataset["review_title"])
```

We can then use this function to compute the ROUGE scores over the validation set and prettify them a bit using Pandas:

```
import pandas as pd

score = evaluate_baseline(books_dataset["validation"], rouge_score)
rouge_names = ["rouge1", "rouge2", "rougeL", "rougeLsum"]
rouge_dict = dict((rn, round(score[rn].mid.fmeasure * 100, 2)) for rn in rouge_names)
rouge_dict
```

```
{'rouge1': 16.74, 'rouge2': 8.83, 'rougeL': 15.6, 'rougeLsum': 15.96}
```

We can see that the `rouge2` score is significantly lower than the rest; this likely reflects the fact that review titles are typically concise and so the lead-3 baseline is too verbose. Now that we have a good baseline to work from, let’s turn our attention toward fine-tuning mT5!

## [](#fine-tuning-mt5-with-the-trainer-api)Fine-tuning mT5 with the `Trainer` API

Fine-tuning a model for summarization is very similar to the other tasks we’ve covered in this chapter. The first thing we need to do is load the pretrained model from the `mt5-small` checkpoint. Since summarization is a sequence-to-sequence task, we can load the model with the `AutoModelForSeq2SeqLM` class, which will automatically download and cache the weights:

```
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```

💡 If you’re wondering why you don’t see any warnings about fine-tuning the model on a downstream task, that’s because for sequence-to-sequence tasks we keep all the weights of the network. Compare this to our text classification model in [Chapter 3](/course/chapter3), where the head of the pretrained model was replaced with a randomly initialized network.

The next thing we need to do is log in to the Hugging Face Hub. If you’re running this code in a notebook, you can do so with the following utility function:

```
from huggingface_hub import notebook_login

notebook_login()
```

which will display a widget where you can enter your credentials. Alternatively, you can run `huggingface-cli login` in your terminal and log in there.

We’ll need to generate summaries in order to compute ROUGE scores during training. Fortunately, 🤗 Transformers provides dedicated `Seq2SeqTrainingArguments` and `Seq2SeqTrainer` classes that can do this for us automatically!
To see how this works, let’s first define the hyperparameters and other arguments for our experiments:

```
from transformers import Seq2SeqTrainingArguments

batch_size = 8
num_train_epochs = 8
# Log the training loss once per epoch
logging_steps = len(tokenized_datasets["train"]) // batch_size
model_name = model_checkpoint.split("/")[-1]

args = Seq2SeqTrainingArguments(
    output_dir=f"{model_name}-finetuned-amazon-en-es",
    evaluation_strategy="epoch",
    learning_rate=5.6e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=num_train_epochs,
    predict_with_generate=True,
    logging_steps=logging_steps,
    push_to_hub=True,
)
```

Here, the `predict_with_generate` argument has been set to indicate that we should generate summaries during evaluation so that we can compute ROUGE scores for each epoch. As discussed in [Chapter 1](/course/chapter1), the decoder performs inference by predicting tokens one by one, and this is implemented by the model’s `generate()` method. Setting `predict_with_generate=True` tells the `Seq2SeqTrainer` to use that method for evaluation. We’ve also adjusted some of the default hyperparameters, like the learning rate, number of epochs, and weight decay, and we’ve set the `save_total_limit` option to only save up to 3 checkpoints during training — this is because even the “small” version of mT5 uses around a GB of hard drive space, and we can save a bit of room by limiting the number of copies we save.

The `push_to_hub=True` argument will allow us to push the model to the Hub after training; you’ll find the repository under your user profile in the location defined by `output_dir`. Note that you can specify the name of the repository you want to push to with the `hub_model_id` argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the [`huggingface-course` organization](https://huggingface.co/huggingface-course), we added `hub_model_id="huggingface-course/mt5-finetuned-amazon-en-es"` to `Seq2SeqTrainingArguments`.

The next thing we need to do is provide the trainer with a `compute_metrics()` function so that we can evaluate our model during training. For summarization this is a bit more involved than simply calling `rouge_score.compute()` on the model’s predictions, since we need to _decode_ the outputs and labels into text before we can compute the ROUGE scores. The following function does exactly that, and also makes use of the `sent_tokenize()` function from `nltk` to separate the summary sentences with newlines:

```
import numpy as np


def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # Decode generated summaries into text
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    # Decode reference summaries into text
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # ROUGE expects a newline after each sentence
    decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels]
    # Compute ROUGE scores
    result = rouge_score.compute(
        predictions=decoded_preds, references=decoded_labels, use_stemmer=True
    )
    # Extract the median scores
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    return {k: round(v, 4) for k, v in result.items()}
```

Next, we need to define a data collator for our sequence-to-sequence task. Since mT5 is an encoder-decoder Transformer model, one subtlety with preparing our batches is that during decoding we need to shift the labels to the right by one. This is required to ensure that the decoder only sees the previous ground truth labels and not the current or future ones, which would be easy for the model to memorize. This is similar to how masked self-attention is applied to the inputs in a task like [causal language modeling](/course/chapter7/6).

Luckily, 🤗 Transformers provides a `DataCollatorForSeq2Seq` collator that will dynamically pad the inputs and the labels for us. To instantiate this collator, we simply need to provide the `tokenizer` and `model`:

```
from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
```

Let’s see what this collator produces when fed a small batch of examples. First, we need to remove the columns with strings because the collator won’t know how to pad these elements:

```
tokenized_datasets = tokenized_datasets.remove_columns(
    books_dataset["train"].column_names
)
```

Since the collator expects a list of `dict`s, where each `dict` represents a single example in the dataset, we also need to wrangle the data into the expected format before passing it to the data collator:

```
features = [tokenized_datasets["train"][i] for i in range(2)]
data_collator(features)
```

```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
 'input_ids': tensor([[  1494,    259,   8622,    390,    259,    262,   2316,   3435,    955,
            772,    281,    772,   1617,    263,    305,  14701,    260,   1385,
           3031,    259,  24146,    332,   1037,    259,  43906,    305,    336,
            260,      1,      0,      0,      0,      0,      0,      0],
        [   259,  27531,  13483,    259,   7505,    260, 112240,  15192,    305,
          53198,    276,    259,  74060,    263,    260,    459,  25640,    776,
           2119,    336,    259,   2220,    259,  18896,    288,   4906,    288,
           1037,   3931,    260,   7083, 101476,   1143,    260,      1]]),
 'labels': tensor([[ 7483,   259,  2364, 15695,     1,  -100],
        [  259, 27531, 13483,   259,  7505,     1]]),
 'decoder_input_ids': tensor([[    0,  7483,   259,  2364, 15695,     1],
        [    0,   259, 27531, 13483,   259,  7505]])}
```

The main thing to notice here is that the second example is longer than the first one, so the `input_ids` and `attention_mask` of the first example have been padded on the right with a `[PAD]` token (whose ID is `0`). Similarly, we can see that the `labels` have been padded with `-100`s, to make sure the padding tokens are ignored by the loss function. And finally, we can see a new `decoder_input_ids` which has shifted the labels to the right by inserting a `[PAD]` token in the first entry.

We finally have all the ingredients we need to train with! We now simply need to instantiate the trainer with the standard arguments:

```
from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
```

and launch our training run by calling `trainer.train()`. During training, you should see the training loss decrease and the ROUGE scores increase with each epoch.
Once the training is complete, you can see the final ROUGE scores by running `trainer.evaluate()`:

```
{'eval_loss': 3.028524398803711,
 'eval_rouge1': 16.9728,
 'eval_rouge2': 8.2969,
 'eval_rougeL': 16.8366,
 'eval_rougeLsum': 16.851,
 'eval_gen_len': 10.1597,
 'eval_runtime': 6.1054,
 'eval_samples_per_second': 38.982,
 'eval_steps_per_second': 4.914}
```

From the scores we can see that our model has handily outperformed our lead-3 baseline — nice! The final thing to do is push the model weights to the Hub, as follows:

```
trainer.push_to_hub(commit_message="Training complete", tags="summarization")
```

```
'https://huggingface.co/huggingface-course/mt5-finetuned-amazon-en-es/commit/aa0536b829b28e73e1e4b94b8a5aacec420d40e0'
```

This will save the checkpoint and configuration files to `output_dir`, before uploading all the files to the Hub. By specifying the `tags` argument, we also ensure that the widget on the Hub will be one for a summarization pipeline instead of the default text generation one associated with the mT5 architecture (for more information about model tags, see the [🤗 Hub documentation](https://huggingface.co/docs/hub/main#how-is-a-models-type-of-inference-api-and-widget-determined)). The output from `trainer.push_to_hub()` is a URL to the Git commit hash, so you can easily see the changes that were made to the model repository!

To wrap up this section, let’s take a look at how we can also fine-tune mT5 using the low-level features provided by 🤗 Accelerate.

## [](#fine-tuning-mt5-with-accelerate)Fine-tuning mT5 with 🤗 Accelerate

Fine-tuning our model with 🤗 Accelerate is very similar to the text classification example we encountered in [Chapter 3](/course/chapter3). The main differences will be the need to explicitly generate our summaries during training and define how we compute the ROUGE scores (recall that the `Seq2SeqTrainer` took care of the generation for us). Let’s take a look at how we can implement these two requirements within 🤗 Accelerate!

### [](#preparing-everything-for-training)Preparing everything for training

The first thing we need to do is create a `DataLoader` for each of our splits. Since the PyTorch dataloaders expect batches of tensors, we need to set the format to `"torch"` in our datasets:

```
tokenized_datasets.set_format("torch")
```

Now that we’ve got datasets consisting of just tensors, the next thing to do is instantiate the `DataCollatorForSeq2Seq` again. For this we need to provide a fresh version of the model, so let’s load it again from our cache:

```
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```

We can then instantiate the data collator and use this to define our dataloaders:

```
from torch.utils.data import DataLoader

batch_size = 8
train_dataloader = DataLoader(
    tokenized_datasets["train"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=batch_size,
)
eval_dataloader = DataLoader(
    tokenized_datasets["validation"], collate_fn=data_collator, batch_size=batch_size
)
```

The next thing to do is define the optimizer we want to use. As in our other examples, we’ll use `AdamW`, which works well for most problems:

```
from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=2e-5)
```

Finally, we feed our model, optimizer, and dataloaders to the `accelerator.prepare()` method:

```
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)
```

🚨 If you’re training on a TPU, you’ll need to move all the code above into a dedicated training function. See [Chapter 3](/course/chapter3) for more details.

Now that we’ve prepared our objects, there are three remaining things to do:

- Define the learning rate schedule.
- Implement a function to post-process the summaries for evaluation.
- Create a repository on the Hub that we can push our model to.

For the learning rate schedule, we’ll use the standard linear one from previous sections:

```
from transformers import get_scheduler

num_train_epochs = 10
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch

lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)
```

For post-processing, we need a function that splits the generated summaries into sentences that are separated by newlines. This is the format the ROUGE metric expects, and we can achieve this with the following snippet of code:

```
def postprocess_text(preds, labels):
    preds = [pred.strip() for pred in preds]
    labels = [label.strip() for label in labels]

    # ROUGE expects a newline after each sentence
    preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
    labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]

    return preds, labels
```

This should look familiar to you if you recall how we defined the `compute_metrics()` function of the `Seq2SeqTrainer`.

Finally, we need to create a model repository on the Hugging Face Hub. For this, we can use the appropriately titled 🤗 Hub library. We just need to define a name for our repository, and the library has a utility function to combine the repository ID with the user profile:

```
from huggingface_hub import get_full_repo_name

model_name = "mt5-finetuned-amazon-en-es-accelerate"
repo_name = get_full_repo_name(model_name)
repo_name
```

```
'lewtun/mt5-finetuned-amazon-en-es-accelerate'
```

Now we can use this repository name to clone a local version to our results directory that will store the training artifacts:

```
from huggingface_hub import Repository

output_dir = "results-mt5-finetuned-amazon-en-es-accelerate"
repo = Repository(output_dir, clone_from=repo_name)
```

This will allow us to push the artifacts back to the Hub by calling the `repo.push_to_hub()` method during training! Let’s now wrap up our analysis by writing out the training loop.

### [](#training-loop)Training loop

The training loop for summarization is quite similar to the other 🤗 Accelerate examples that we’ve encountered and is roughly split into four main steps:

1. Train the model by iterating over all the examples in `train_dataloader` for each epoch.
2. Generate model summaries at the end of each epoch, by first generating the tokens and then decoding them (and the reference summaries) into text.
3. Compute the ROUGE scores using the same techniques we saw earlier.
4. Save the checkpoints and push everything to the Hub.
Here we rely on the nifty `blocking=False` argument of the `Repository` object so that we can push the checkpoints per epoch _asynchronously_. This allows us to continue training without having to wait for the somewhat slow upload associated with a GB-sized model!

These steps can be seen in the following block of code:

```
from tqdm.auto import tqdm
import torch
import numpy as np

progress_bar = tqdm(range(num_training_steps))

for epoch in range(num_train_epochs):
    # Training
    model.train()
    for step, batch in enumerate(train_dataloader):
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    # Evaluation
    model.eval()
    for step, batch in enumerate(eval_dataloader):
        with torch.no_grad():
            generated_tokens = accelerator.unwrap_model(model).generate(
                batch["input_ids"],
                attention_mask=batch["attention_mask"],
            )

            generated_tokens = accelerator.pad_across_processes(
                generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
            )
            labels = batch["labels"]

            # Pad the labels so they can be gathered across processes
            labels = accelerator.pad_across_processes(
                batch["labels"], dim=1, pad_index=tokenizer.pad_token_id
            )

            generated_tokens = accelerator.gather(generated_tokens).cpu().numpy()
            labels = accelerator.gather(labels).cpu().numpy()

            # Replace -100 in the labels as we can't decode them
            labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
            if isinstance(generated_tokens, tuple):
                generated_tokens = generated_tokens[0]
            decoded_preds = tokenizer.batch_decode(
                generated_tokens, skip_special_tokens=True
            )
            decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

            decoded_preds, decoded_labels = postprocess_text(
                decoded_preds, decoded_labels
            )

            rouge_score.add_batch(predictions=decoded_preds, references=decoded_labels)

    # Compute the ROUGE scores for this epoch
    result = rouge_score.compute()
    # Extract the median F1-scores
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    result = {k: round(v, 4) for k, v in result.items()}
    print(f"Epoch {epoch}:", result)

    # Save and upload the checkpoint
    accelerator.wait_for_everyone()
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)
    if accelerator.is_main_process:
        tokenizer.save_pretrained(output_dir)
        repo.push_to_hub(
            commit_message=f"Training in progress epoch {epoch}", blocking=False
        )
```

```
Epoch 0: {'rouge1': 5.6351, 'rouge2': 1.1625, 'rougeL': 5.4866, 'rougeLsum': 5.5005}
Epoch 1: {'rouge1': 9.8646, 'rouge2': 3.4106, 'rougeL': 9.9439, 'rougeLsum': 9.9306}
Epoch 2: {'rouge1': 11.0872, 'rouge2': 3.3273, 'rougeL': 11.0508, 'rougeLsum': 10.9468}
Epoch 3: {'rouge1': 11.8587, 'rouge2': 4.8167, 'rougeL': 11.7986, 'rougeLsum': 11.7518}
Epoch 4: {'rouge1': 12.9842, 'rouge2': 5.5887, 'rougeL': 12.7546, 'rougeLsum': 12.7029}
Epoch 5: {'rouge1': 13.4628, 'rouge2': 6.4598, 'rougeL': 13.312, 'rougeLsum': 13.2913}
Epoch 6: {'rouge1': 12.9131, 'rouge2': 5.8914, 'rougeL': 12.6896, 'rougeLsum': 12.5701}
Epoch 7: {'rouge1': 13.3079, 'rouge2': 6.2994, 'rougeL': 13.1536, 'rougeLsum': 13.1194}
Epoch 8: {'rouge1': 13.96, 'rouge2': 6.5998, 'rougeL': 13.9123, 'rougeLsum': 13.7744}
Epoch 9: {'rouge1': 14.1192, 'rouge2': 7.0059, 'rougeL': 14.1172, 'rougeLsum': 13.9509}
```

And that’s it! Once you run this, you’ll have a model and results that are pretty similar to the ones we obtained with the `Trainer`.

## [](#using-your-fine-tuned-model)Using your fine-tuned model

Once you’ve pushed the model to the Hub, you can play with it either via the inference widget or with a `pipeline` object, as follows:

```
from transformers import pipeline

hub_model_id = "huggingface-course/mt5-small-finetuned-amazon-en-es"
summarizer = pipeline("summarization", model=hub_model_id)
```

We can feed some examples from the test set (which the model has not seen) to our pipeline to get a feel for the quality of the summaries. First let’s implement a simple function to show the review, title, and generated summary together:

```
def print_summary(idx):
    review = books_dataset["test"][idx]["review_body"]
    title = books_dataset["test"][idx]["review_title"]
    summary = summarizer(books_dataset["test"][idx]["review_body"])[0]["summary_text"]
    print(f"'>>> Review: {review}'")
    print(f"\n'>>> Title: {title}'")
    print(f"\n'>>> Summary: {summary}'")
```

Let’s take a look at one of the English examples we get:

```
'>>> Review: Nothing special at all about this product... the book is too small and stiff and hard to write in. The huge sticker on the back doesn’t come off and looks super tacky. I would not purchase this again. I could have just bought a journal from the dollar store and it would be basically the same thing. It’s also really expensive for what it is.'

'>>> Title: Not impressed at all... buy something else'

'>>> Summary: Nothing special at all about this product'
```

This is not too bad! We can see that our model has actually been able to perform _abstractive_ summarization by augmenting parts of the review with new words. And perhaps the coolest aspect of our model is that it is bilingual, so we can also generate summaries of Spanish reviews:

```
'>>> Review: Es una trilogia que se hace muy facil de leer. Me ha gustado, no me esperaba el final para nada'

'>>> Title: Buena literatura para adolescentes'

'>>> Summary: Muy facil de leer'
```

The summary translates into “Very easy to read” in English, which we can see in this case was extracted directly from the review. Nevertheless, this shows the versatility of the mT5 model and has given you a taste of what it’s like to deal with a multilingual corpus!

Next, we’ll turn our attention to a slightly more complex task: training a language model from scratch.

Okay, we can see that the reviews are not strictly about books and might refer to things like calendars and electronic applications such as OneNote. Nevertheless, the domain seems about right to train a summarization model on. Before we look at various models that are suitable for this task, we have one last bit of data preparation to do: combining the English and Spanish reviews into a single DatasetDict object. 🤗 Datasets provides a handy concatenate_datasets() function that (as the name suggests) will stack two Dataset objects on top of each other. So, to create our bilingual dataset, we’ll loop over each split, concatenate the datasets for that split, and shuffle the result to ensure our model doesn’t overfit to a single language:

from datasets import concatenate_datasets, DatasetDict\n\nbooks_dataset = DatasetDict()\n\nfor split in english_books.keys():\n    books_dataset[split] = concatenate_datasets(\n        [english_books[split], spanish_books[split]]\n    )\n    books_dataset[split] = books_dataset[split].shuffle(seed=42)\n\n# Peek at a few examples\nshow_samples(books_dataset)
'>> Title: Easy to follow!!!!'\n'>> Review: I loved The dash diet weight loss Solution. Never hungry. I would recommend this diet. Also the menus are well rounded. Try it. Has lots of the information need thanks.'\n\n'>> Title: PARCIALMENTE DAÑADO'\n'>> Review: Me llegó el día que tocaba, junto a otros libros que pedí, pero la caja llegó en mal estado lo cual dañó las esquinas de los libros porque venían sin protección (forro).'\n\n'>> Title: no lo he podido descargar'\n'>> Review: igual que el anterior'

This certainly looks like a mix of English and Spanish reviews! Now that we have a training corpus, one final thing to check is the distribution of words in the reviews and their titles. This is especially important for summarization tasks, where short reference summaries in the data can bias the model to only output one or two words in the generated summaries. The plots below show the word distributions, and we can see that the titles are heavily skewed toward just 1-2 words:

\"Word \"Word

To deal with this, we’ll filter out the examples with very short titles so that our model can produce more interesting summaries. Since we’re dealing with English and Spanish texts, we can use a rough heuristic to split the titles on whitespace and then use our trusty Dataset.filter() method as follows:

books_dataset = books_dataset.filter(lambda x: len(x[\"review_title\"].split()) > 2)

Now that we’ve prepared our corpus, let’s take a look at a few possible Transformer models that one might fine-tune on it!

## Models for text summarization

If you think about it, text summarization is a similar sort of task to machine translation: we have a body of text like a review that we’d like to “translate” into a shorter version that captures the salient features of the input. Accordingly, most Transformer models for summarization adopt the encoder-decoder architecture that we first encountered in Chapter 1, although there are some exceptions like the GPT family of models which can also be used for summarization in few-shot settings. The following table lists some popular pretrained models that can be fine-tuned for summarization.

| Transformer model | Description | Multilingual? |
| :--- | :--- | :---: |
| GPT-2 | Although trained as an auto-regressive language model, you can make GPT-2 generate summaries by appending “TL;DR” at the end of the input text. | ❌ |
| PEGASUS | Uses a pretraining objective to predict masked sentences in multi-sentence texts. This pretraining objective is closer to summarization than vanilla language modeling and scores highly on popular benchmarks. | ❌ |
| T5 | A universal Transformer architecture that formulates all tasks in a text-to-text framework; e.g., the input format for the model to summarize a document is `summarize: ARTICLE`. | ❌ |
| mT5 | A multilingual version of T5, pretrained on the multilingual Common Crawl corpus (mC4), covering 101 languages. | ✅ |
| BART | A novel Transformer architecture with both an encoder and a decoder stack, trained to reconstruct corrupted input, that combines the pretraining schemes of BERT and GPT-2. | ❌ |
| mBART-50 | A multilingual version of BART, pretrained on 50 languages. | ✅ |

As you can see from this table, the majority of Transformer models for summarization (and indeed most NLP tasks) are monolingual. This is great if your task is in a “high-resource” language like English or German, but less so for the thousands of other languages in use across the world. Fortunately, there is a class of multilingual Transformer models, like mT5 and mBART, that come to the rescue. These models are pretrained using language modeling, but with a twist: instead of training on a corpus of one language, they are trained jointly on texts in over 50 languages at once!

We’ll focus on mT5, an interesting architecture based on T5 that was pretrained in a text-to-text framework. In T5, every NLP task is formulated in terms of a prompt prefix like summarize: which conditions the model to adapt the generated text to the prompt. As shown in the figure below, this makes T5 extremely versatile, as you can solve many tasks with a single model!

\"Different \"Different

mT5 doesn’t use prefixes, but shares much of the versatility of T5 and has the advantage of being multilingual. Now that we’ve picked a model, let’s take a look at preparing our data for training.

✏️ Try it out! Once you’ve worked through this section, see how well mT5 compares to mBART by fine-tuning the latter with the same techniques. For bonus points, you can also try fine-tuning T5 on just the English reviews. Since T5 has a special prefix prompt, you’ll need to prepend summarize: to the input examples in the preprocessing steps below.
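For reference, the only change on the input side is the prefix itself. Here is a tiny illustrative sketch (the review text is made up):

```python
review = "Great characters and a plot that kept me reading all night!"

t5_input = "summarize: " + review  # T5 is conditioned with a task prefix
mt5_input = review                 # mT5 is fine-tuned directly on the raw text
```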

## Preprocessing the data

Our next task is to tokenize and encode our reviews and their titles. As usual, we begin by loading the tokenizer associated with the pretrained model checkpoint. We’ll use mt5-small as our checkpoint so we can fine-tune the model in a reasonable amount of time:

from transformers import AutoTokenizer\n\nmodel_checkpoint = \"google/mt5-small\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

💡 In the early stages of your NLP projects, a good practice is to train a class of “small” models on a small sample of data. This allows you to debug and iterate faster toward an end-to-end workflow. Once you are confident in the results, you can always scale up the model by simply changing the model checkpoint!
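As a hedged sketch of that workflow (the split sizes below are arbitrary choices, not recommendations), you could carve out a small debugging subset with Dataset.shuffle() and Dataset.select():

```python
from datasets import DatasetDict

# Arbitrary, small split sizes purely for fast debugging runs
small_books = DatasetDict()
for split, n in [("train", 1000), ("validation", 100)]:
    small_books[split] = books_dataset[split].shuffle(seed=42).select(range(n))
small_books
```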

Let’s test out the mT5 tokenizer on a small example:

inputs = tokenizer(\"I loved reading the Hunger Games!\")\ninputs
{'input_ids': [336, 259, 28387, 11807, 287, 62893, 295, 12507, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}

Here we can see the familiar input_ids and attention_mask that we encountered in our first fine-tuning experiments back in Chapter 3. Let’s decode these input IDs with the tokenizer’s convert_ids_to_tokens() function to see what kind of tokenizer we’re dealing with:

tokenizer.convert_ids_to_tokens(inputs.input_ids)
['▁I', '▁', 'loved', '▁reading', '▁the', '▁Hung', 'er', '▁Games', '</s>']

The special Unicode character ▁ and end-of-sequence token </s> indicate that we’re dealing with the SentencePiece tokenizer, which is based on the Unigram segmentation algorithm discussed in Chapter 6. Unigram is especially useful for multilingual corpora since it allows SentencePiece to be agnostic about accents, punctuation, and the fact that many languages, like Japanese, do not have whitespace characters.

To tokenize our corpus, we have to deal with a subtlety associated with summarization: because our labels are also text, it is possible that they exceed the model’s maximum context size. This means we need to apply truncation to both the reviews and their titles to ensure we don’t pass excessively long inputs to our model. The tokenizers in 🤗 Transformers provide a nifty text_target argument that allows you to tokenize the labels in parallel to the inputs. Here is an example of how the inputs and targets are processed for mT5:

```python
max_input_length = 512
max_target_length = 30


def preprocess_function(examples):
    model_inputs = tokenizer(
        examples["review_body"],
        max_length=max_input_length,
        truncation=True,
    )
    labels = tokenizer(
        text_target=examples["review_title"],
        max_length=max_target_length,
        truncation=True,
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```

Let’s walk through this code to understand what’s happening. The first thing we’ve done is define values for max_input_length and max_target_length, which set the upper limits for how long our reviews and titles can be. Since the review body is typically much larger than the title, we’ve scaled these values accordingly.
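If you want to sanity-check these limits on your own data, one quick approach (a sketch, using an arbitrary sample size) is to tokenize a few hundred examples and look at the resulting token counts:

```python
# Tokenize a small random sample and inspect the token counts
sample = books_dataset["train"].shuffle(seed=42).select(range(500))
body_lengths = [len(tokenizer(text)["input_ids"]) for text in sample["review_body"]]
title_lengths = [len(tokenizer(text)["input_ids"]) for text in sample["review_title"]]

print(f"Longest review body in the sample: {max(body_lengths)} tokens")
print(f"Longest review title in the sample: {max(title_lengths)} tokens")
```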

With preprocess_function(), it is then a simple matter to tokenize the whole corpus using the handy Dataset.map() function we’ve used extensively throughout this course:

tokenized_datasets = books_dataset.map(preprocess_function, batched=True)

Now that the corpus has been preprocessed, let’s take a look at some metrics that are commonly used for summarization. As we’ll see, there is no silver bullet when it comes to measuring the quality of machine-generated text.

💡 You may have noticed that we used batched=True in our Dataset.map() function above. This encodes the examples in batches of 1,000 (the default) and allows you to make use of the multithreading capabilities of the fast tokenizers in 🤗 Transformers. Where possible, try using batched=True to get the most out of your preprocessing!
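For reference, Dataset.map() also lets you set the batch size explicitly; the value below is arbitrary, and the result is equivalent to the call above:

```python
# Same preprocessing as above, with an explicit (arbitrary) batch size
tokenized_datasets = books_dataset.map(
    preprocess_function, batched=True, batch_size=500
)
```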

## Metrics for text summarization

In comparison to most of the other tasks we’ve covered in this course, measuring the performance of text generation tasks like summarization or translation is not as straightforward. For example, given a review like “I loved reading the Hunger Games”, there are multiple valid summaries, like “I loved the Hunger Games” or “Hunger Games is a great read”. Clearly, applying some sort of exact match between the generated summary and the label is not a good solution — even humans would fare poorly under such a metric, because we all have our own writing style.

For summarization, one of the most commonly used metrics is the ROUGE score (short for Recall-Oriented Understudy for Gisting Evaluation). The basic idea behind this metric is to compare a generated summary against a set of reference summaries that are typically created by humans. To make this more precise, suppose we want to compare the following two summaries:

generated_summary = \"I absolutely loved reading the Hunger Games\"\nreference_summary = \"I loved reading the Hunger Games\"

One way to compare them could be to count the number of overlapping words, which in this case would be 6. However, this is a bit crude, so instead ROUGE is based on computing the precision and recall scores for the overlap.

🙋 Don’t worry if this is the first time you’ve heard of precision and recall — we’ll go through some explicit examples together to make it all clear. These metrics are usually encountered in classification tasks, so if you want to understand how precision and recall are defined in that context, we recommend checking out the scikit-learn guides.

For ROUGE, recall measures how much of the reference summary is captured by the generated one. If we are just comparing words, recall can be calculated according to the following formula:

$$\mathrm{Recall} = \frac{\mathrm{Number\,of\,overlapping\,words}}{\mathrm{Total\,number\,of\,words\,in\,reference\,summary}}$$

For our simple example above, this formula gives a perfect recall of 6/6 = 1; i.e., all the words in the reference summary have been produced by the model. This may sound great, but imagine if our generated summary had been “I really really loved reading the Hunger Games all night”. This would also have perfect recall, but is arguably a worse summary since it is verbose. To deal with these scenarios we also compute the precision, which in the ROUGE context measures how much of the generated summary was relevant:

$$\mathrm{Precision} = \frac{\mathrm{Number\,of\,overlapping\,words}}{\mathrm{Total\,number\,of\,words\,in\,generated\,summary}}$$
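Before reaching for a library, we can sanity-check both formulas on our two candidate summaries with a few lines of plain Python. This is just a sketch that assumes simple whitespace tokenization and clips repeated words, which is enough for this example:

```python
from collections import Counter


def overlap_precision_recall(generated, reference):
    gen_counts = Counter(generated.split())
    ref_counts = Counter(reference.split())
    # Clipped overlap: a word counts at most as often as it appears in both summaries
    overlap = sum((gen_counts & ref_counts).values())
    return overlap / sum(gen_counts.values()), overlap / sum(ref_counts.values())


reference = "I loved reading the Hunger Games"
for generated in [
    "I absolutely loved reading the Hunger Games",
    "I really really loved reading the Hunger Games all night",
]:
    precision, recall = overlap_precision_recall(generated, reference)
    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```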

Applying this to our verbose summary gives a precision of 6/10 = 0.6, which is considerably worse than the precision of 6/7 = 0.86 obtained by our shorter one. In practice, both precision and recall are usually computed, and then the F1-score (the harmonic mean of precision and recall) is reported. We can do this easily in 🤗 Datasets by first installing the rouge_score package:

!pip install rouge_score

and then loading the ROUGE metric as follows:

import evaluate\n\nrouge_score = evaluate.load(\"rouge\")

Then we can use the rouge_score.compute() function to calculate all the metrics at once:

scores = rouge_score.compute(\n    predictions=[generated_summary], references=[reference_summary]\n)\nscores
{'rouge1': AggregateScore(low=Score(precision=0.86, recall=1.0, fmeasure=0.92), mid=Score(precision=0.86, recall=1.0, fmeasure=0.92), high=Score(precision=0.86, recall=1.0, fmeasure=0.92)),\n 'rouge2': AggregateScore(low=Score(precision=0.67, recall=0.8, fmeasure=0.73), mid=Score(precision=0.67, recall=0.8, fmeasure=0.73), high=Score(precision=0.67, recall=0.8, fmeasure=0.73)),\n 'rougeL': AggregateScore(low=Score(precision=0.86, recall=1.0, fmeasure=0.92), mid=Score(precision=0.86, recall=1.0, fmeasure=0.92), high=Score(precision=0.86, recall=1.0, fmeasure=0.92)),\n 'rougeLsum': AggregateScore(low=Score(precision=0.86, recall=1.0, fmeasure=0.92), mid=Score(precision=0.86, recall=1.0, fmeasure=0.92), high=Score(precision=0.86, recall=1.0, fmeasure=0.92))}

Whoa, there’s a lot of information in that output — what does it all mean? First, 🤗 Datasets actually computes confidence intervals for precision, recall, and F1-score; these are the low, mid, and high attributes you can see here. Moreover, 🤗 Datasets computes a variety of ROUGE scores which are based on different types of text granularity when comparing the generated and reference summaries. The rouge1 variant is the overlap of unigrams — this is just a fancy way of saying the overlap of words and is exactly the metric we’ve discussed above. To verify this, let’s pull out the mid value of our scores:

scores[\"rouge1\"].mid
Score(precision=0.86, recall=1.0, fmeasure=0.92)

Great, the precision and recall numbers match up! Now what about those other ROUGE scores? rouge2 measures the overlap between bigrams (think the overlap of pairs of words), while rougeL and rougeLsum measure the longest matching sequences of words by looking for the longest common subsequence in the generated and reference summaries. The “sum” in rougeLsum refers to the fact that this metric is computed over a whole summary, while rougeL is computed as the average over individual sentences.
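To see where the rougeL numbers in the output above come from, here is a short sketch (again assuming whitespace tokenization) that computes the longest common subsequence between our two summaries with the classic dynamic-programming recurrence:

```python
def lcs_length(a, b):
    # table[i][j] = length of the LCS of a[:i] and b[:j]
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, word_a in enumerate(a, start=1):
        for j, word_b in enumerate(b, start=1):
            if word_a == word_b:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[-1][-1]


generated = "I absolutely loved reading the Hunger Games".split()
reference = "I loved reading the Hunger Games".split()
lcs = lcs_length(generated, reference)
print(lcs / len(generated), lcs / len(reference))  # rougeL-style precision and recall
```

The printed values match the rougeL precision and recall of 0.86 and 1.0 reported above.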

✏️ Try it out! Create your own example of a generated and reference summary and see if the resulting ROUGE scores agree with a manual calculation based on the formulas for precision and recall. For bonus points, split the text into bigrams and compare the precision and recall for the rouge2 metric.

We’ll use these ROUGE scores to track the performance of our model, but before doing that let’s do something every good NLP practitioner should do: create a strong, yet simple baseline!

## Creating a strong baseline

A common baseline for text summarization is to simply take the first three sentences of an article, often called the lead-3 baseline. We could use full stops to track the sentence boundaries, but this will fail on acronyms like “U.S.” or “U.N.”. A quick, made-up example shows the problem:
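```python
text = "I ordered this from the U.S. store. It arrived quickly. Great value."

# A naive split treats the period in "U.S." as a sentence boundary
print(text.split(". "))
# ['I ordered this from the U.S', 'store', 'It arrived quickly', 'Great value.']
```

So instead we’ll use the nltk library, which includes a better algorithm to handle these cases. You can install the package using pip as follows: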

!pip install nltk

and then download the punctuation rules:

import nltk\n\nnltk.download(\"punkt\")

Next, we import the sentence tokenizer from nltk and create a simple function to extract the first three sentences in a review. The convention in text summarization is to separate each summary with a newline, so let’s also include this and test it on a training example:

from nltk.tokenize import sent_tokenize\n\n\ndef three_sentence_summary(text):\n    return \"\\n\".join(sent_tokenize(text)[:3])\n\n\nprint(three_sentence_summary(books_dataset[\"train\"][1][\"review_body\"]))
'I grew up reading Koontz, and years ago, I stopped,convinced i had \"outgrown\" him.'\n'Still,when a friend was looking for something suspenseful too read, I suggested Koontz.'\n'She found Strangers.'

This seems to work, so let’s now implement a function that extracts these “summaries” from a dataset and computes the ROUGE scores for the baseline:

def evaluate_baseline(dataset, metric):\n    summaries = [three_sentence_summary(text) for text in dataset[\"review_body\"]]\n    return metric.compute(predictions=summaries, references=dataset[\"review_title\"])

We can then use this function to compute the ROUGE scores over the validation set and prettify them a bit using Pandas:

import pandas as pd\n\nscore = evaluate_baseline(books_dataset[\"validation\"], rouge_score)\nrouge_names = [\"rouge1\", \"rouge2\", \"rougeL\", \"rougeLsum\"]\nrouge_dict = dict((rn, round(score[rn].mid.fmeasure * 100, 2)) for rn in rouge_names)\nrouge_dict
{'rouge1': 16.74, 'rouge2': 8.83, 'rougeL': 15.6, 'rougeLsum': 15.96}

We can see that the rouge2 score is significantly lower than the rest; this likely reflects the fact that review titles are typically concise and so the lead-3 baseline is too verbose. Now that we have a good baseline to work from, let’s turn our attention toward fine-tuning mT5!

## Fine-tuning mT5 with the Trainer API

Fine-tuning a model for summarization is very similar to the other tasks we’ve covered in this chapter. The first thing we need to do is load the pretrained model from the mt5-small checkpoint. Since summarization is a sequence-to-sequence task, we can load the model with the AutoModelForSeq2SeqLM class, which will automatically download and cache the weights:

from transformers import AutoModelForSeq2SeqLM\n\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

💡 If you’re wondering why you don’t see any warnings about fine-tuning the model on a downstream task, that’s because for sequence-to-sequence tasks we keep all the weights of the network. Compare this to our text classification model in Chapter 3, where the head of the pretrained model was replaced with a randomly initialized network.

The next thing we need to do is log in to the Hugging Face Hub. If you’re running this code in a notebook, you can do so with the following utility function:

from huggingface_hub import notebook_login\n\nnotebook_login()

which will display a widget where you can enter your credentials. Alternatively, you can run this command in your terminal and log in there:

huggingface-cli login

We’ll need to generate summaries in order to compute ROUGE scores during training. Fortunately, 🤗 Transformers provides dedicated Seq2SeqTrainingArguments and Seq2SeqTrainer classes that can do this for us automatically! To see how this works, let’s first define the hyperparameters and other arguments for our experiments:

from transformers import Seq2SeqTrainingArguments\n\nbatch_size = 8\nnum_train_epochs = 8\n# Show the training loss with every epoch\nlogging_steps = len(tokenized_datasets[\"train\"]) // batch_size\nmodel_name = model_checkpoint.split(\"/\")[-1]\n\nargs = Seq2SeqTrainingArguments(\n    output_dir=f\"{model_name}-finetuned-amazon-en-es\",\n    evaluation_strategy=\"epoch\",\n    learning_rate=5.6e-5,\n    per_device_train_batch_size=batch_size,\n    per_device_eval_batch_size=batch_size,\n    weight_decay=0.01,\n    save_total_limit=3,\n    num_train_epochs=num_train_epochs,\n    predict_with_generate=True,\n    logging_steps=logging_steps,\n    push_to_hub=True,\n)

Here, the predict_with_generate argument has been set to indicate that we should generate summaries during evaluation so that we can compute ROUGE scores for each epoch. As discussed in Chapter 1, the decoder performs inference by predicting tokens one by one, and this is implemented by the model’s generate() method. Setting predict_with_generate=True tells the Seq2SeqTrainer to use that method for evaluation. We’ve also adjusted some of the default hyperparameters, like the learning rate, number of epochs, and weight decay, and we’ve set the save_total_limit option to only save up to 3 checkpoints during training — this is because even the “small” version of mT5 uses around a GB of hard drive space, and we can save a bit of room by limiting the number of copies we save.
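To make this concrete, here is a hedged sketch of what happens for a single example at evaluation time; the untrained mt5-small checkpoint won’t produce a useful summary yet, so this only illustrates the mechanics:

```python
# Tokenize one review, generate token IDs with the decoder, and decode them
sample_review = books_dataset["train"][0]["review_body"]
inputs = tokenizer(
    sample_review, max_length=max_input_length, truncation=True, return_tensors="pt"
)
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=max_target_length,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```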

The push_to_hub=True argument will allow us to push the model to the Hub after training; you’ll find the repository under your user profile in the location defined by output_dir. Note that you can specify the name of the repository you want to push to with the hub_model_id argument (in particular, you will have to use this argument to push to an organization). For instance, when we pushed the model to the huggingface-course organization, we added hub_model_id=\"huggingface-course/mt5-finetuned-amazon-en-es\" to Seq2SeqTrainingArguments.

The next thing we need to do is provide the trainer with a compute_metrics() function so that we can evaluate our model during training. For summarization this is a bit more involved than simply calling rouge_score.compute() on the model’s predictions, since we need to decode the outputs and labels into text before we can compute the ROUGE scores. The following function does exactly that, and also makes use of the sent_tokenize() function from nltk to separate the summary sentences with newlines:

import numpy as np\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    # Decode generated summaries into text\n    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)\n    # Replace -100 in the labels as we can't decode them\n    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\n    # Decode reference summaries into text\n    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n    # ROUGE expects a newline after each sentence\n    decoded_preds = [\"\\n\".join(sent_tokenize(pred.strip())) for pred in decoded_preds]\n    decoded_labels = [\"\\n\".join(sent_tokenize(label.strip())) for label in decoded_labels]\n    # Compute ROUGE scores\n    result = rouge_score.compute(\n        predictions=decoded_preds, references=decoded_labels, use_stemmer=True\n    )\n    # Extract the median scores\n    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\n    return {k: round(v, 4) for k, v in result.items()}

Next, we need to define a data collator for our sequence-to-sequence task. Since mT5 is an encoder-decoder Transformer model, one subtlety with preparing our batches is that during decoding we need to shift the labels to the right by one. This is required to ensure that the decoder only sees the previous ground truth labels and not the current or future ones, which would be easy for the model to memorize. This is similar to how masked self-attention is applied to the inputs in a task like causal language modeling.

Luckily, 🤗 Transformers provides a DataCollatorForSeq2Seq collator that will dynamically pad the inputs and the labels for us. To instantiate this collator, we simply need to provide the tokenizer and model:

from transformers import DataCollatorForSeq2Seq\n\ndata_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

Let’s see what this collator produces when fed a small batch of examples. First, we need to remove the columns with strings because the collator won’t know how to pad these elements:

tokenized_datasets = tokenized_datasets.remove_columns(\n    books_dataset[\"train\"].column_names\n)

Since the collator expects a list of dicts, where each dict represents a single example in the dataset, we also need to wrangle the data into the expected format before passing it to the data collator:

features = [tokenized_datasets[\"train\"][i] for i in range(2)]\ndata_collator(features)
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n         1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],\n        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'input_ids': tensor([[  1494,    259,   8622,    390,    259,    262,   2316,   3435,    955,\n            772,    281,    772,   1617,    263,    305,  14701,    260,   1385,\n           3031,    259,  24146,    332,   1037,    259,  43906,    305,    336,\n            260,      1,      0,      0,      0,      0,      0,      0],\n        [   259,  27531,  13483,    259,   7505,    260, 112240,  15192,    305,\n          53198,    276,    259,  74060,    263,    260,    459,  25640,    776,\n           2119,    336,    259,   2220,    259,  18896,    288,   4906,    288,\n           1037,   3931,    260,   7083, 101476,   1143,    260,      1]]), 'labels': tensor([[ 7483,   259,  2364, 15695,     1,  -100],\n        [  259, 27531, 13483,   259,  7505,     1]]), 'decoder_input_ids': tensor([[    0,  7483,   259,  2364, 15695,     1],\n        [    0,   259, 27531, 13483,   259,  7505]])}

The main thing to notice here is that the first example is longer than the second one, so the input_ids and attention_mask of the second example have been padded on the right with a [PAD] token (whose ID is 0). Similarly, we can see that the labels have been padded with -100s, to make sure the padding tokens are ignored by the loss function. And finally, we can see a new decoder_input_ids which has shifted the labels to the right by inserting a [PAD] token in the first entry.
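As a hedged illustration of that shift (using the label IDs of the first example in the batch above, and mimicking what the model does internally), the decoder input at position t is simply the label at position t-1, with the padding token as the start token:

```python
labels = [7483, 259, 2364, 15695, 1, -100]  # padded labels; -100 is only for the loss
pad_token_id = 0                            # mT5 also uses this ID as the decoder start token

# Replace the loss-masking value with the pad token, then shift right by one
cleaned = [pad_token_id if token == -100 else token for token in labels]
decoder_input_ids = [pad_token_id] + cleaned[:-1]
print(decoder_input_ids)  # [0, 7483, 259, 2364, 15695, 1]
```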

We finally have all the ingredients we need to train with! We now simply need to instantiate the trainer with the standard arguments:

from transformers import Seq2SeqTrainer\n\ntrainer = Seq2SeqTrainer(\n    model,\n    args,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"validation\"],\n    data_collator=data_collator,\n    tokenizer=tokenizer,\n    compute_metrics=compute_metrics,\n)

and launch our training run:

trainer.train()

During training, you should see the training loss decrease and the ROUGE scores increase with each epoch. Once the training is complete, you can see the final ROUGE scores by running Trainer.evaluate():

trainer.evaluate()
{'eval_loss': 3.028524398803711,\n 'eval_rouge1': 16.9728,\n 'eval_rouge2': 8.2969,\n 'eval_rougeL': 16.8366,\n 'eval_rougeLsum': 16.851,\n 'eval_gen_len': 10.1597,\n 'eval_runtime': 6.1054,\n 'eval_samples_per_second': 38.982,\n 'eval_steps_per_second': 4.914}

From the scores we can see that our model has handily outperformed our lead-3 baseline — nice! The final thing to do is push the model weights to the Hub, as follows:

trainer.push_to_hub(commit_message=\"Training complete\", tags=\"summarization\")
'https://huggingface.co/huggingface-course/mt5-finetuned-amazon-en-es/commit/aa0536b829b28e73e1e4b94b8a5aacec420d40e0'

This will save the checkpoint and configuration files to output_dir, before uploading all the files to the Hub. By specifying the tags argument, we also ensure that the widget on the Hub will be one for a summarization pipeline instead of the default text generation one associated with the mT5 architecture (for more information about model tags, see the 🤗 Hub documentation). The output from trainer.push_to_hub() is a URL to the Git commit hash, so you can easily see the changes that were made to the model repository!

To wrap up this section, let’s take a look at how we can also fine-tune mT5 using the low-level features provided by 🤗 Accelerate.

## Fine-tuning mT5 with 🤗 Accelerate

Fine-tuning our model with 🤗 Accelerate is very similar to the text classification example we encountered in Chapter 3. The main differences will be the need to explicitly generate our summaries during training and to define how we compute the ROUGE scores (recall that the Seq2SeqTrainer took care of the generation for us). Let’s take a look at how we can implement these two requirements within 🤗 Accelerate!

### Preparing everything for training

The first thing we need to do is create a DataLoader for each of our splits. Since the PyTorch dataloaders expect batches of tensors, we need to set the format to \"torch\" in our datasets:

tokenized_datasets.set_format(\"torch\")

Now that we’ve got datasets consisting of just tensors, the next thing to do is instantiate the DataCollatorForSeq2Seq again. For this we need to provide a fresh version of the model, so let’s load it again from our cache:

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

We can then instantiate the data collator and use this to define our dataloaders:

from torch.utils.data import DataLoader\n\nbatch_size = 8\ntrain_dataloader = DataLoader(\n    tokenized_datasets[\"train\"],\n    shuffle=True,\n    collate_fn=data_collator,\n    batch_size=batch_size,\n)\neval_dataloader = DataLoader(\n    tokenized_datasets[\"validation\"], collate_fn=data_collator, batch_size=batch_size\n)

The next thing to do is define the optimizer we want to use. As in our other examples, we’ll use AdamW, which works well for most problems:

from torch.optim import AdamW\n\noptimizer = AdamW(model.parameters(), lr=2e-5)

Finally, we feed our model, optimizer, and dataloaders to the accelerator.prepare() method:

from accelerate import Accelerator\n\naccelerator = Accelerator()\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n    model, optimizer, train_dataloader, eval_dataloader\n)

🚨 If you’re training on a TPU, you’ll need to move all the code above into a dedicated training function. See Chapter 3 for more details.

Now that we’ve prepared our objects, there are three remaining things to do:

  • Define the learning rate schedule.
  • Implement a function to post-process the summaries for evaluation.
  • Create a repository on the Hub that we can push our model to.

For the learning rate schedule, we’ll use the standard linear one from previous sections:

from transformers import get_scheduler\n\nnum_train_epochs = 10\nnum_update_steps_per_epoch = len(train_dataloader)\nnum_training_steps = num_train_epochs * num_update_steps_per_epoch\n\nlr_scheduler = get_scheduler(\n    \"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=0,\n    num_training_steps=num_training_steps,\n)

For post-processing, we need a function that splits the generated summaries into sentences that are separated by newlines. This is the format the ROUGE metric expects, and we can achieve this with the following snippet of code:

def postprocess_text(preds, labels):\n    preds = [pred.strip() for pred in preds]\n    labels = [label.strip() for label in labels]\n\n    # ROUGE expects a newline after each sentence\n    preds = [\"\\n\".join(nltk.sent_tokenize(pred)) for pred in preds]\n    labels = [\"\\n\".join(nltk.sent_tokenize(label)) for label in labels]\n\n    return preds, labels

This should look familiar to you if you recall how we defined the compute_metrics() function of the Seq2SeqTrainer.

Finally, we need to create a model repository on the Hugging Face Hub. For this, we can use the appropriately titled 🤗 Hub library. We just need to define a name for our repository, and the library has a utility function to combine the repository ID with the user profile:

```python
from huggingface_hub import get_full_repo_name

model_name = "mt5-finetuned-amazon-en-es-accelerate"
repo_name = get_full_repo_name(model_name)
repo_name
```
'lewtun/mt5-finetuned-amazon-en-es-accelerate'

Now we can use this repository name to clone a local version to our results directory that will store the training artifacts:

```python
from huggingface_hub import Repository

output_dir = "results-mt5-finetuned-amazon-en-es-accelerate"
repo = Repository(output_dir, clone_from=repo_name)
```

This will allow us to push the artifacts back to the Hub by calling the repo.push_to_hub() method during training! Let’s now wrap up our analysis by writing out the training loop.

### Training loop

The training loop for summarization is quite similar to the other 🤗 Accelerate examples that we’ve encountered and is roughly split into four main steps:

  1. Train the model by iterating over all the examples in train_dataloader for each epoch.
  2. Generate model summaries at the end of each epoch, by first generating the tokens and then decoding them (and the reference summaries) into text.
  3. Compute the ROUGE scores using the same techniques we saw earlier.
  4. Save the checkpoints and push everything to the Hub. Here we rely on the nifty blocking=False argument of the Repository object so that we can push the checkpoints per epoch asynchronously. This allows us to continue training without having to wait for the somewhat slow upload associated with a GB-sized model!

These steps can be seen in the following block of code:

from tqdm.auto import tqdm\nimport torch\nimport numpy as np\n\nprogress_bar = tqdm(range(num_training_steps))\n\nfor epoch in range(num_train_epochs):\n    # Training\n    model.train()\n    for step, batch in enumerate(train_dataloader):\n        outputs = model(**batch)\n        loss = outputs.loss\n        accelerator.backward(loss)\n\n        optimizer.step()\n        lr_scheduler.step()\n        optimizer.zero_grad()\n        progress_bar.update(1)\n\n    # Evaluation\n    model.eval()\n    for step, batch in enumerate(eval_dataloader):\n        with torch.no_grad():\n            generated_tokens = accelerator.unwrap_model(model).generate(\n                batch[\"input_ids\"],\n                attention_mask=batch[\"attention_mask\"],\n            )\n\n            generated_tokens = accelerator.pad_across_processes(\n                generated_tokens, dim=1, pad_index=tokenizer.pad_token_id\n            )\n            labels = batch[\"labels\"]\n\n            # If we did not pad to max length, we need to pad the labels too\n            labels = accelerator.pad_across_processes(\n                batch[\"labels\"], dim=1, pad_index=tokenizer.pad_token_id\n            )\n\n            generated_tokens = accelerator.gather(generated_tokens).cpu().numpy()\n            labels = accelerator.gather(labels).cpu().numpy()\n\n            # Replace -100 in the labels as we can't decode them\n            labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\n            if isinstance(generated_tokens, tuple):\n                generated_tokens = generated_tokens[0]\n            decoded_preds = tokenizer.batch_decode(\n                generated_tokens, skip_special_tokens=True\n            )\n            decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n\n            decoded_preds, decoded_labels = postprocess_text(\n                decoded_preds, decoded_labels\n            )\n\n            rouge_score.add_batch(predictions=decoded_preds, references=decoded_labels)\n\n    # Compute metrics\n    result = rouge_score.compute()\n    # Extract the median ROUGE scores\n    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\n    result = {k: round(v, 4) for k, v in result.items()}\n    print(f\"Epoch {epoch}:\", result)\n\n    # Save and upload\n    accelerator.wait_for_everyone()\n    unwrapped_model = accelerator.unwrap_model(model)\n    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)\n    if accelerator.is_main_process:\n        tokenizer.save_pretrained(output_dir)\n        repo.push_to_hub(\n            commit_message=f\"Training in progress epoch {epoch}\", blocking=False\n        )
Epoch 0: {'rouge1': 5.6351, 'rouge2': 1.1625, 'rougeL': 5.4866, 'rougeLsum': 5.5005}\nEpoch 1: {'rouge1': 9.8646, 'rouge2': 3.4106, 'rougeL': 9.9439, 'rougeLsum': 9.9306}\nEpoch 2: {'rouge1': 11.0872, 'rouge2': 3.3273, 'rougeL': 11.0508, 'rougeLsum': 10.9468}\nEpoch 3: {'rouge1': 11.8587, 'rouge2': 4.8167, 'rougeL': 11.7986, 'rougeLsum': 11.7518}\nEpoch 4: {'rouge1': 12.9842, 'rouge2': 5.5887, 'rougeL': 12.7546, 'rougeLsum': 12.7029}\nEpoch 5: {'rouge1': 13.4628, 'rouge2': 6.4598, 'rougeL': 13.312, 'rougeLsum': 13.2913}\nEpoch 6: {'rouge1': 12.9131, 'rouge2': 5.8914, 'rougeL': 12.6896, 'rougeLsum': 12.5701}\nEpoch 7: {'rouge1': 13.3079, 'rouge2': 6.2994, 'rougeL': 13.1536, 'rougeLsum': 13.1194}\nEpoch 8: {'rouge1': 13.96, 'rouge2': 6.5998, 'rougeL': 13.9123, 'rougeLsum': 13.7744}\nEpoch 9: {'rouge1': 14.1192, 'rouge2': 7.0059, 'rougeL': 14.1172, 'rougeLsum': 13.9509}

And that’s it! Once you run this, you’ll have a model and results that are pretty similar to the ones we obtained with the Trainer.
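Before moving on, one optional sanity check (a sketch, not part of the chapter) is to reload the artifacts you just saved to output_dir and make sure they load cleanly:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# output_dir is the local folder we saved to during the training loop
reloaded_model = AutoModelForSeq2SeqLM.from_pretrained(output_dir)
reloaded_tokenizer = AutoTokenizer.from_pretrained(output_dir)
```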

## Using your fine-tuned model

Once you’ve pushed the model to the Hub, you can play with it either via the inference widget or with a pipeline object, as follows:

from transformers import pipeline\n\nhub_model_id = \"huggingface-course/mt5-small-finetuned-amazon-en-es\"\nsummarizer = pipeline(\"summarization\", model=hub_model_id)

We can feed some examples from the test set (which the model has not seen) to our pipeline to get a feel for the quality of the summaries. First let’s implement a simple function to show the review, title, and generated summary together:

def print_summary(idx):\n    review = books_dataset[\"test\"][idx][\"review_body\"]\n    title = books_dataset[\"test\"][idx][\"review_title\"]\n    summary = summarizer(books_dataset[\"test\"][idx][\"review_body\"])[0][\"summary_text\"]\n    print(f\"'>>> Review: {review}'\")\n    print(f\"\\n'>>> Title: {title}'\")\n    print(f\"\\n'>>> Summary: {summary}'\")

Let’s take a look at one of the English examples we get:

print_summary(100)
'>>> Review: Nothing special at all about this product... the book is too small and stiff and hard to write in. The huge sticker on the back doesn’t come off and looks super tacky. I would not purchase this again. I could have just bought a journal from the dollar store and it would be basically the same thing. It’s also really expensive for what it is.'\n\n'>>> Title: Not impressed at all... buy something else'\n\n'>>> Summary: Nothing special at all about this product'

This is not too bad! We can see that our model has actually been able to perform abstractive summarization by augmenting parts of the review with new words. And perhaps the coolest aspect of our model is that it is bilingual, so we can also generate summaries of Spanish reviews:

print_summary(0)
'>>> Review: Es una trilogia que se hace muy facil de leer. Me ha gustado, no me esperaba el final para nada'\n\n'>>> Title: Buena literatura para adolescentes'\n\n'>>> Summary: Muy facil de leer'

The summary translates into “Very easy to read” in English, which we can see in this case was extracted directly from the review. Nevertheless, this shows the versatility of the mT5 model and has given you a taste of what it’s like to deal with a multilingual corpus!

Next, we’ll turn our attention to a slightly more complex task: training a language model from scratch.

## Question answering

Time to look at question answering! This task comes in many flavors, but the one we’ll focus on in this section is called _extractive_ question answering. This involves posing questions about a document and identifying the answers as _spans of text_ in the document itself.

We will fine-tune a BERT model on the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/), which consists of questions posed by crowdworkers on a set of Wikipedia articles. This will give us a model able to compute predictions like this one:

This is actually showcasing the model that was trained and uploaded to the Hub using the code shown in this section.
You can find it and double-check the predictions [here](https://huggingface.co/huggingface-course/bert-finetuned-squad?context=%F0%9F%A4%97+Transformers+is+backed+by+the+three+most+popular+deep+learning+libraries+%E2%80%94+Jax%2C+PyTorch+and+TensorFlow+%E2%80%94+with+a+seamless+integration+between+them.+It%27s+straightforward+to+train+your+models+with+one+before+loading+them+for+inference+with+the+other.&question=Which+deep+learning+libraries+back+%F0%9F%A4%97+Transformers%3F).\n\n💡 Encoder-only models like BERT tend to be great at extracting answers to factoid questions like “Who invented the Transformer architecture?” but fare poorly when given open-ended questions like “Why is the sky blue?” In these more challenging cases, encoder-decoder models like T5 and BART are typically used to synthesize the information in a way that’s quite similar to [text summarization](/course/chapter7/5). If you’re interested in this type of _generative_ question answering, we recommend checking out our [demo](https://yjernite.github.io/lfqa.html) based on the [ELI5 dataset](https://huggingface.co/datasets/eli5).\n\n## [](#preparing-the-data)Preparing the data\n\nThe dataset that is used the most as an academic benchmark for extractive question answering is [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), so that’s the one we’ll use here. There is also a harder [SQuAD v2](https://huggingface.co/datasets/squad_v2) benchmark, which includes questions that don’t have an answer. As long as your own dataset contains a column for contexts, a column for questions, and a column for answers, you should be able to adapt the steps below.\n\n### [](#the-squad-dataset)The SQuAD dataset\n\nAs usual, we can download and cache the dataset in just one step thanks to `load_dataset()`:\n\n```\nfrom datasets import load_dataset\n\nraw_datasets = load_dataset(\"squad\")```\n\nWe can then have a look at this object to learn more about the SQuAD dataset:\n\n```\nDatasetDict({\n train: Dataset({\n features: ['id', 'title', 'context', 'question', 'answers'],\n num_rows: 87599\n })\n validation: Dataset({\n features: ['id', 'title', 'context', 'question', 'answers'],\n num_rows: 10570\n })\n})```\n\nIt looks like we have everything we need with the `context`, `question`, and `answers` fields, so let’s print those for the first element of our training set:\n\n```\nprint(\"Context: \", raw_datasets[\"train\"][0][\"context\"])\nprint(\"Question: \", raw_datasets[\"train\"][0][\"question\"])\nprint(\"Answer: \", raw_datasets[\"train\"][0][\"answers\"])```\n\n```\nContext: 'Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.'\nQuestion: 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?'\nAnswer: {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]}```\n\nThe `context` and `question` fields are very straightforward to use. 
The `answers` field is a bit trickier as it comports a dictionary with two fields that are both lists. This is the format that will be expected by the `squad` metric during evaluation; if you are using your own data, you don’t necessarily need to worry about putting the answers in the same format. The `text` field is rather obvious, and the `answer_start` field contains the starting character index of each answer in the context.\n\nDuring training, there is only one possible answer. We can double-check this by using the `Dataset.filter()` method:\n\n```\nraw_datasets[\"train\"].filter(lambda x: len(x[\"answers\"][\"text\"]) != 1)```\n\n```\nDataset({\n features: ['id', 'title', 'context', 'question', 'answers'],\n num_rows: 0\n})```\n\nFor evaluation, however, there are several possible answers for each sample, which may be the same or different:\n\n```\nprint(raw_datasets[\"validation\"][0][\"answers\"])\nprint(raw_datasets[\"validation\"][2][\"answers\"])```\n\n```\n{'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}\n{'text': ['Santa Clara, California', \"Levi's Stadium\", \"Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.\"], 'answer_start': [403, 355, 355]}```\n\nWe won’t dive into the evaluation script as it will all be wrapped up by a 🤗 Datasets metric for us, but the short version is that some of the questions have several possible answers, and this script will compare a predicted answer to all the acceptable answers and take the best score. If we take a look at the sample at index 2, for instance:\n\n```\nprint(raw_datasets[\"validation\"][2][\"context\"])\nprint(raw_datasets[\"validation\"][2][\"question\"])```\n\n```\n'Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi\\'s Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50.'\n'Where did Super Bowl 50 take place?'```\n\nwe can see that the answer can indeed be one of the three possibilities we saw before.\n\n### [](#processing-the-training-data)Processing the training data\n\nLet’s start with preprocessing the training data. The hard part will be to generate labels for the question’s answer, which will be the start and end positions of the tokens corresponding to the answer inside the context.\n\nBut let’s not get ahead of ourselves. First, we need to convert the text in the input into IDs the model can make sense of, using a tokenizer:\n\n```\nfrom transformers import AutoTokenizer\n\nmodel_checkpoint = \"bert-base-cased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)```\n\nAs mentioned previously, we’ll be fine-tuning a BERT model, but you can use any other model type as long as it has a fast tokenizer implemented. 
You can see all the architectures that come with a fast version in [this big table](https://huggingface.co/transformers/#supported-frameworks), and to check that the `tokenizer` object you’re using is indeed backed by 🤗 Tokenizers you can look at its `is_fast` attribute:\n\nWe can pass to our tokenizer the question and the context together, and it will properly insert the special tokens to form a sentence like this:\n\n```\n[CLS] question [SEP] context [SEP]```\n\nLet’s double-check:\n\n```\ncontext = raw_datasets[\"train\"][0][\"context\"]\nquestion = raw_datasets[\"train\"][0][\"question\"]\n\ninputs = tokenizer(question, context)\ntokenizer.decode(inputs[\"input_ids\"])```\n\n```\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] Architecturally, '\n'the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin '\n'Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms '\n'upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred '\n'Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a '\n'replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette '\n'Soubirous in 1858. At the end of the main drive ( and in a direct line that connects through 3 statues '\n'and the Gold Dome ), is a simple, modern stone statue of Mary. [SEP]'```\n\nThe labels will then be the index of the tokens starting and ending the answer, and the model will be tasked to predicted one start and end logit per token in the input, with the theoretical labels being as follow:\n\n![One-hot encoded labels for question answering.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/qa_labels.svg) ![One-hot encoded labels for question answering.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter7/qa_labels-dark.svg)\n\nIn this case the context is not too long, but some of the examples in the dataset have very long contexts that will exceed the maximum length we set (which is 384 in this case). As we saw in [Chapter 6](/course/chapter6/4) when we explored the internals of the `question-answering` pipeline, we will deal with long contexts by creating several training features from one sample of our dataset, with a sliding window between them.\n\nTo see how this works using the current example, we can limit the length to 100 and use a sliding window of 50 tokens. As a reminder, we use:\n\n- `max_length` to set the maximum length (here 100)\n- `truncation=\"only_second\"` to truncate the context (which is in the second position) when the question with its context is too long\n- `stride` to set the number of overlapping tokens between two successive chunks (here 50)\n- `return_overflowing_tokens=True` to let the tokenizer know we want the overflowing tokens\n\n```\ninputs = tokenizer(\n question,\n context,\n max_length=100,\n truncation=\"only_second\",\n stride=50,\n return_overflowing_tokens=True,\n)\n\nfor ids in inputs[\"input_ids\"]:\n print(tokenizer.decode(ids))```\n\n```\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. 
Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basi [SEP]'\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin [SEP]'\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive ( and in a direct line that connects through 3 [SEP]'\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP]. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive ( and in a direct line that connects through 3 statues and the Gold Dome ), is a simple, modern stone statue of Mary. [SEP]'```\n\nAs we can see, our example has been in split into four inputs, each of them containing the question and some part of the context. Note that the answer to the question (“Bernadette Soubirous”) only appears in the third and last inputs, so by dealing with long contexts in this way we will create some training examples where the answer is not included in the context. For those examples, the labels will be `start_position = end_position = 0` (so we predict the `[CLS]` token). We will also set those labels in the unfortunate case where the answer has been truncated so that we only have the start (or end) of it. For the examples where the answer is fully in the context, the labels will be the index of the token where the answer starts and the index of the token where the answer ends.\n\nThe dataset provides us with the start character of the answer in the context, and by adding the length of the answer, we can find the end character in the context. To map those to token indices, we will need to use the offset mappings we studied in [Chapter 6](/course/chapter6/4). We can have our tokenizer return these by passing along `return_offsets_mapping=True`:\n\n```\ninputs = tokenizer(\n question,\n context,\n max_length=100,\n truncation=\"only_second\",\n stride=50,\n return_overflowing_tokens=True,\n return_offsets_mapping=True,\n)\ninputs.keys()```\n\n```\ndict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping', 'overflow_to_sample_mapping'])```\n\nAs we can see, we get back the usual input IDs, token type IDs, and attention mask, as well as the offset mapping we required and an extra key, `overflow_to_sample_mapping`. The corresponding value will be of use to us when we tokenize several texts at the same time (which we should do to benefit from the fact that our tokenizer is backed by Rust). Since one sample can give several features, it maps each feature to the example it originated from. 
Because here we only tokenized one example, we get a list of `0`s (here `[0, 0, 0, 0]`, one entry per feature):\n\n```\ninputs[\"overflow_to_sample_mapping\"]```\n\nBut if we tokenize more examples, this will become more useful:\n\n```\ninputs = tokenizer(\n    raw_datasets[\"train\"][2:6][\"question\"],\n    raw_datasets[\"train\"][2:6][\"context\"],\n    max_length=100,\n    truncation=\"only_second\",\n    stride=50,\n    return_overflowing_tokens=True,\n    return_offsets_mapping=True,\n)\n\nprint(f\"The 4 examples gave {len(inputs['input_ids'])} features.\")\nprint(f\"Here is where each comes from: {inputs['overflow_to_sample_mapping']}.\")```\n\n```\n'The 4 examples gave 19 features.'\n'Here is where each comes from: [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3].'```\n\nAs we can see, the first three examples (at indices 2, 3, and 4 in the training set) each gave four features and the last example (at index 5 in the training set) gave seven features.\n\nThis information will be useful to map each feature we get to its corresponding label. As mentioned earlier, those labels are:\n\n- `(0, 0)` if the answer is not in the corresponding span of the context\n- `(start_position, end_position)` if the answer is in the corresponding span of the context, with `start_position` being the index of the token (in the input IDs) at the start of the answer and `end_position` being the index of the token (in the input IDs) where the answer ends\n\nTo determine which of these is the case and, if relevant, the positions of the tokens, we first find the indices that start and end the context in the input IDs. We could use the token type IDs to do this, but since those do not necessarily exist for all models (DistilBERT does not require them, for instance), we’ll instead use the `sequence_ids()` method of the `BatchEncoding` our tokenizer returns.\n\nOnce we have those token indices, we look at the corresponding offsets, which are tuples of two integers representing the span of characters inside the original context. We can thus detect if the chunk of the context in this feature starts after the answer or ends before the answer begins (in which case the label is `(0, 0)`). If that’s not the case, we loop to find the first and last token of the answer:\n\n```\nanswers = raw_datasets[\"train\"][2:6][\"answers\"]\nstart_positions = []\nend_positions = []\n\nfor i, offset in enumerate(inputs[\"offset_mapping\"]):\n    sample_idx = inputs[\"overflow_to_sample_mapping\"][i]\n    answer = answers[sample_idx]\n    start_char = answer[\"answer_start\"][0]\n    end_char = answer[\"answer_start\"][0] + len(answer[\"text\"][0])\n    sequence_ids = inputs.sequence_ids(i)\n\n    # Find the start and end of the context\n    idx = 0\n    while sequence_ids[idx] != 1:\n        idx += 1\n    context_start = idx\n    while sequence_ids[idx] == 1:\n        idx += 1\n    context_end = idx - 1\n\n    # If the answer is not fully inside the context, label is (0, 0)\n    if offset[context_start][0] > start_char or offset[context_end][1] < end_char:\n        start_positions.append(0)\n        end_positions.append(0)\n    else:\n        # Otherwise it's the start and end token positions\n        idx = context_start\n        while idx <= context_end and offset[idx][0] <= start_char:\n            idx += 1\n        start_positions.append(idx - 1)\n\n        idx = context_end\n        while idx >= context_start and offset[idx][1] >= end_char:\n            idx -= 1\n        end_positions.append(idx + 1)\n\nstart_positions, end_positions```\n\n```\n([83, 51, 19, 0, 0, 64, 27, 0, 34, 0, 0, 0, 67, 34, 0, 0, 0, 0, 0],\n [85, 53, 21, 0, 0, 70, 33, 0, 40, 0, 0, 0, 68, 35, 0, 0, 0, 0, 0])```\n\nLet’s take a look at a few results to verify that our approach is correct.
For the first feature we find `(83, 85)` as labels, so let’s compare the theoretical answer with the decoded span of tokens from 83 to 85 (inclusive):\n\n```\nidx = 0\nsample_idx = inputs[\"overflow_to_sample_mapping\"][idx]\nanswer = answers[sample_idx][\"text\"][0]\n\nstart = start_positions[idx]\nend = end_positions[idx]\nlabeled_answer = tokenizer.decode(inputs[\"input_ids\"][idx][start : end + 1])\n\nprint(f\"Theoretical answer: {answer}, labels give: {labeled_answer}\")```\n\n```\n'Theoretical answer: the Main Building, labels give: the Main Building'```\n\nSo that’s a match! Now let’s check index 4, where we set the labels to `(0, 0)`, which means the answer is not in the context chunk of that feature:\n\n```\nidx = 4\nsample_idx = inputs[\"overflow_to_sample_mapping\"][idx]\nanswer = answers[sample_idx][\"text\"][0]\n\ndecoded_example = tokenizer.decode(inputs[\"input_ids\"][idx])\nprint(f\"Theoretical answer: {answer}, decoded example: {decoded_example}\")```\n\n```\n'Theoretical answer: a Marian place of prayer and reflection, decoded example: [CLS] What is the Grotto at Notre Dame? [SEP] Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grot [SEP]'```\n\nIndeed, we don’t see the answer inside the context.\n\n✏️ **Your turn!** When using the XLNet architecture, padding is applied on the left and the question and context are switched. Adapt all the code we just saw to the XLNet architecture (and add `padding=True`). Be aware that the `[CLS]` token may not be at the 0 position with padding applied.\n\nNow that we have seen step by step how to preprocess our training data, we can group it in a function we will apply on the whole training dataset. 
We’ll pad every feature to the maximum length we set, as most of the contexts will be long (and the corresponding samples will be split into several features), so there is no real benefit to applying dynamic padding here:\n\n```\nmax_length = 384\nstride = 128\n\n\ndef preprocess_training_examples(examples):\n    questions = [q.strip() for q in examples[\"question\"]]\n    inputs = tokenizer(\n        questions,\n        examples[\"context\"],\n        max_length=max_length,\n        truncation=\"only_second\",\n        stride=stride,\n        return_overflowing_tokens=True,\n        return_offsets_mapping=True,\n        padding=\"max_length\",\n    )\n\n    offset_mapping = inputs.pop(\"offset_mapping\")\n    sample_map = inputs.pop(\"overflow_to_sample_mapping\")\n    answers = examples[\"answers\"]\n    start_positions = []\n    end_positions = []\n\n    for i, offset in enumerate(offset_mapping):\n        sample_idx = sample_map[i]\n        answer = answers[sample_idx]\n        start_char = answer[\"answer_start\"][0]\n        end_char = answer[\"answer_start\"][0] + len(answer[\"text\"][0])\n        sequence_ids = inputs.sequence_ids(i)\n\n        # Find the start and end of the context\n        idx = 0\n        while sequence_ids[idx] != 1:\n            idx += 1\n        context_start = idx\n        while sequence_ids[idx] == 1:\n            idx += 1\n        context_end = idx - 1\n\n        # If the answer is not fully inside the context, label is (0, 0)\n        if offset[context_start][0] > start_char or offset[context_end][1] < end_char:\n            start_positions.append(0)\n            end_positions.append(0)\n        else:\n            # Otherwise it's the start and end token positions\n            idx = context_start\n            while idx <= context_end and offset[idx][0] <= start_char:\n                idx += 1\n            start_positions.append(idx - 1)\n\n            idx = context_end\n            while idx >= context_start and offset[idx][1] >= end_char:\n                idx -= 1\n            end_positions.append(idx + 1)\n\n    inputs[\"start_positions\"] = start_positions\n    inputs[\"end_positions\"] = end_positions\n    return inputs```\n\nNote that we defined two constants to determine the maximum length used as well as the length of the sliding window, and that we added a tiny bit of cleanup before tokenizing: some of the questions in the SQuAD dataset have extra spaces at the beginning and the end that don’t add anything (and take up space when being tokenized if you use a model like RoBERTa), so we removed those extra spaces.\n\nTo apply this function to the whole training set, we use the `Dataset.map()` method with the `batched=True` flag. It’s necessary here as we are changing the length of the dataset (since one example can give several training features):\n\n```\ntrain_dataset = raw_datasets[\"train\"].map(\n    preprocess_training_examples,\n    batched=True,\n    remove_columns=raw_datasets[\"train\"].column_names,\n)\nlen(raw_datasets[\"train\"]), len(train_dataset)```\n\nThe preprocessing added roughly 1,000 features: the 87,599 training examples become 88,729 training features. Our training set is now ready to be used — let’s dig into the preprocessing of the validation set!\n\n### [](#processing-the-validation-data)Processing the validation data\n\nPreprocessing the validation data will be slightly easier as we don’t need to generate labels (unless we want to compute a validation loss, but that number won’t really help us understand how good the model is). The real joy will be to interpret the predictions of the model into spans of the original context. For this, we will just need to store both the offset mappings and some way to match each created feature to the original example it comes from. Since there is an ID column in the original dataset, we’ll use that ID.\n\nThe only thing we’ll add here is a tiny bit of cleanup of the offset mappings.
They will contain offsets for the question and the context, but once we’re in the post-processing stage we won’t have any way to know which part of the input IDs corresponded to the context and which part was the question (the `sequence_ids()` method we used is available for the output of the tokenizer only). So, we’ll set the offsets corresponding to the question to `None`:\n\n```\ndef preprocess_validation_examples(examples):\n    questions = [q.strip() for q in examples[\"question\"]]\n    inputs = tokenizer(\n        questions,\n        examples[\"context\"],\n        max_length=max_length,\n        truncation=\"only_second\",\n        stride=stride,\n        return_overflowing_tokens=True,\n        return_offsets_mapping=True,\n        padding=\"max_length\",\n    )\n\n    sample_map = inputs.pop(\"overflow_to_sample_mapping\")\n    example_ids = []\n\n    for i in range(len(inputs[\"input_ids\"])):\n        # Remember which example this feature came from\n        sample_idx = sample_map[i]\n        example_ids.append(examples[\"id\"][sample_idx])\n\n        # Keep the offsets of the context only, set the others to None\n        sequence_ids = inputs.sequence_ids(i)\n        offset = inputs[\"offset_mapping\"][i]\n        inputs[\"offset_mapping\"][i] = [\n            o if sequence_ids[k] == 1 else None for k, o in enumerate(offset)\n        ]\n\n    inputs[\"example_id\"] = example_ids\n    return inputs```\n\nWe can apply this function on the whole validation dataset like before:\n\n```\nvalidation_dataset = raw_datasets[\"validation\"].map(\n    preprocess_validation_examples,\n    batched=True,\n    remove_columns=raw_datasets[\"validation\"].column_names,\n)\nlen(raw_datasets[\"validation\"]), len(validation_dataset)```\n\nIn this case we’ve only added a couple of hundred features (the 10,570 validation examples become 10,822 features), so it appears the contexts in the validation dataset are a bit shorter.\n\nNow that we have preprocessed all the data, we can get to the training.\n\n## [](#fine-tuning-the-model-with-the-trainer-api)Fine-tuning the model with the `Trainer` API\n\nThe training code for this example will look a lot like the code in the previous sections — the hardest thing will be to write the `compute_metrics()` function. Since we padded all the samples to the maximum length we set, there is no data collator to define, so this metric computation is really the only thing we have to worry about. The difficult part will be to post-process the model predictions into spans of text in the original examples; once we have done that, the metric from the 🤗 Datasets library will do most of the work for us.\n\n### [](#post-processing)Post-processing\n\nThe model will output logits for the start and end positions of the answer in the input IDs, as we saw during our exploration of the [`question-answering` pipeline](/course/chapter6/3b). The post-processing step will be similar to what we did there, so here’s a quick reminder of the actions we took:\n\n- We masked the start and end logits corresponding to tokens outside of the context.\n- We then converted the start and end logits into probabilities using a softmax.\n- We attributed a score to each `(start_token, end_token)` pair by taking the product of the corresponding two probabilities.\n- We looked for the pair with the maximum score that yielded a valid answer (e.g., a `start_token` lower than `end_token`).\n\nHere we will change this process slightly because we don’t need to compute actual scores (just the predicted answer). This means we can skip the softmax step. To go faster, we also won’t score all the possible `(start_token, end_token)` pairs, but only the ones corresponding to the highest `n_best` logits (with `n_best=20`).
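To grab those top logits we will rely on `np.argsort()` with a reversed slice. If that slicing looks cryptic when it appears in the loop below, here is a tiny self-contained illustration on dummy values (the numbers here are made up for this example only):\n\n```\nimport numpy as np\n\nn_best = 3\nlogits = np.array([0.2, 1.5, -0.3, 2.1, 0.9])\n\n# np.argsort() sorts in ascending order, so we walk the result backwards\n# to get the indices of the n_best highest values, best first\ntop_indices = np.argsort(logits)[-1 : -n_best - 1 : -1].tolist()\nprint(top_indices)```\n\n```\n[3, 1, 4]```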
Since we will skip the softmax, those scores will be logit scores, and will be obtained by taking the sum of the start and end logits (instead of the product, because of the rule \\(\\log(ab) = \\log(a) + \\log(b)\\)).\n\nTo demonstrate all of this, we will need some kind of predictions. Since we have not trained our model yet, we are going to use the default model for the QA pipeline to generate some predictions on a small part of the validation set. We can use the same processing function as before; because it relies on the global constant `tokenizer`, we just have to change that object to the tokenizer of the model we want to use temporarily:\n\n```\nsmall_eval_set = raw_datasets[\"validation\"].select(range(100))\ntrained_checkpoint = \"distilbert-base-cased-distilled-squad\"\n\ntokenizer = AutoTokenizer.from_pretrained(trained_checkpoint)\neval_set = small_eval_set.map(\n    preprocess_validation_examples,\n    batched=True,\n    remove_columns=raw_datasets[\"validation\"].column_names,\n)```\n\nNow that the preprocessing is done, we change the tokenizer back to the one we originally picked:\n\n```\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)```\n\nWe then remove the columns of our `eval_set` that are not expected by the model, build a batch with all of that small validation set, and pass it through the model. If a GPU is available, we use it to go faster:\n\n```\nimport torch\nfrom transformers import AutoModelForQuestionAnswering\n\neval_set_for_model = eval_set.remove_columns([\"example_id\", \"offset_mapping\"])\neval_set_for_model.set_format(\"torch\")\n\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\nbatch = {k: eval_set_for_model[k].to(device) for k in eval_set_for_model.column_names}\ntrained_model = AutoModelForQuestionAnswering.from_pretrained(trained_checkpoint).to(\n    device\n)\n\nwith torch.no_grad():\n    outputs = trained_model(**batch)```\n\nSince the `Trainer` will give us predictions as NumPy arrays, we grab the start and end logits and convert them to that format:\n\n```\nstart_logits = outputs.start_logits.cpu().numpy()\nend_logits = outputs.end_logits.cpu().numpy()```\n\nNow, we need to find the predicted answer for each example in our `small_eval_set`. One example may have been split into several features in `eval_set`, so the first step is to map each example in `small_eval_set` to the corresponding features in `eval_set`:\n\n```\nimport collections\n\nexample_to_features = collections.defaultdict(list)\nfor idx, feature in enumerate(eval_set):\n    example_to_features[feature[\"example_id\"]].append(idx)```\n\nWith this in hand, we can really get to work by looping through all the examples and, for each example, through all the associated features.
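If you want to sanity-check that mapping before diving in, here is a quick optional peek (just a sketch; the exact counts you get depend on how long each context is):\n\n```\n# How many features did each of the first few examples produce?\nfor example_id, feature_indices in list(example_to_features.items())[:3]:\n    print(example_id, \"->\", len(feature_indices), \"feature(s)\")```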
As we said before, we’ll look at the logit scores for the `n_best` start logits and end logits, excluding positions that give:\n\n- An answer that wouldn’t be inside the context\n- An answer with negative length\n- An answer that is too long (we limit the possibilities at `max_answer_length=30`)\n\nOnce we have all the scored possible answers for one example, we just pick the one with the best logit score:\n\n```\nimport numpy as np\n\nn_best = 20\nmax_answer_length = 30\npredicted_answers = []\n\nfor example in small_eval_set:\n    example_id = example[\"id\"]\n    context = example[\"context\"]\n    answers = []\n\n    for feature_index in example_to_features[example_id]:\n        start_logit = start_logits[feature_index]\n        end_logit = end_logits[feature_index]\n        offsets = eval_set[\"offset_mapping\"][feature_index]\n\n        start_indexes = np.argsort(start_logit)[-1 : -n_best - 1 : -1].tolist()\n        end_indexes = np.argsort(end_logit)[-1 : -n_best - 1 : -1].tolist()\n        for start_index in start_indexes:\n            for end_index in end_indexes:\n                # Skip answers that are not fully in the context\n                if offsets[start_index] is None or offsets[end_index] is None:\n                    continue\n                # Skip answers with a length that is either < 0 or > max_answer_length\n                if (\n                    end_index < start_index\n                    or end_index - start_index + 1 > max_answer_length\n                ):\n                    continue\n\n                answers.append(\n                    {\n                        \"text\": context[offsets[start_index][0] : offsets[end_index][1]],\n                        \"logit_score\": start_logit[start_index] + end_logit[end_index],\n                    }\n                )\n\n    best_answer = max(answers, key=lambda x: x[\"logit_score\"])\n    predicted_answers.append({\"id\": example_id, \"prediction_text\": best_answer[\"text\"]})```\n\nThe final format of the predicted answers is the one that will be expected by the metric we will use. As usual, we can load it with the help of the 🤗 Evaluate library:\n\n```\nimport evaluate\n\nmetric = evaluate.load(\"squad\")```\n\nThis metric expects the predicted answers in the format we saw above (a list of dictionaries with one key for the ID of the example and one key for the predicted text) and the theoretical answers in the format below (a list of dictionaries with one key for the ID of the example and one key for the possible answers):\n\n```\ntheoretical_answers = [\n    {\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in small_eval_set\n]```\n\nWe can now check that we get sensible results by looking at the first element of both lists:\n\n```\nprint(predicted_answers[0])\nprint(theoretical_answers[0])```\n\n```\n{'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}\n{'id': '56be4db0acb8001400a502ec', 'answers': {'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}}```\n\nNot too bad! Now let’s have a look at the score the metric gives us:\n\n```\nmetric.compute(predictions=predicted_answers, references=theoretical_answers)```\n\n```\n{'exact_match': 83.0, 'f1': 88.25}```\n\nAgain, that’s rather good considering that according to [its paper](https://arxiv.org/abs/1910.01108v2) DistilBERT fine-tuned on SQuAD obtains 79.1 and 86.9 for those scores on the whole dataset.\n\nNow let’s put everything we just did in a `compute_metrics()` function that we will use in the `Trainer`. Normally, that `compute_metrics()` function only receives a tuple `eval_preds` with logits and labels. Here we will need a bit more, as we have to look in the dataset of features for the offset and in the dataset of examples for the original contexts, so we won’t be able to use this function to get regular evaluation results during training.
We will only use it at the end of training to check the results.\n\nThe `compute_metrics()` function groups the same steps as before; we just add a small check in case we don’t come up with any valid answers (in which case we predict an empty string).\n\n```\nfrom tqdm.auto import tqdm\n\n\ndef compute_metrics(start_logits, end_logits, features, examples):\n    example_to_features = collections.defaultdict(list)\n    for idx, feature in enumerate(features):\n        example_to_features[feature[\"example_id\"]].append(idx)\n\n    predicted_answers = []\n    for example in tqdm(examples):\n        example_id = example[\"id\"]\n        context = example[\"context\"]\n        answers = []\n\n        # Loop through all features associated with that example\n        for feature_index in example_to_features[example_id]:\n            start_logit = start_logits[feature_index]\n            end_logit = end_logits[feature_index]\n            offsets = features[feature_index][\"offset_mapping\"]\n\n            start_indexes = np.argsort(start_logit)[-1 : -n_best - 1 : -1].tolist()\n            end_indexes = np.argsort(end_logit)[-1 : -n_best - 1 : -1].tolist()\n            for start_index in start_indexes:\n                for end_index in end_indexes:\n                    # Skip answers that are not fully in the context\n                    if offsets[start_index] is None or offsets[end_index] is None:\n                        continue\n                    # Skip answers with a length that is either < 0 or > max_answer_length\n                    if (\n                        end_index < start_index\n                        or end_index - start_index + 1 > max_answer_length\n                    ):\n                        continue\n\n                    answer = {\n                        \"text\": context[offsets[start_index][0] : offsets[end_index][1]],\n                        \"logit_score\": start_logit[start_index] + end_logit[end_index],\n                    }\n                    answers.append(answer)\n\n        # Select the answer with the best score\n        if len(answers) > 0:\n            best_answer = max(answers, key=lambda x: x[\"logit_score\"])\n            predicted_answers.append(\n                {\"id\": example_id, \"prediction_text\": best_answer[\"text\"]}\n            )\n        else:\n            predicted_answers.append({\"id\": example_id, \"prediction_text\": \"\"})\n\n    theoretical_answers = [{\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in examples]\n    return metric.compute(predictions=predicted_answers, references=theoretical_answers)```\n\nWe can check that it works on our predictions:\n\n```\ncompute_metrics(start_logits, end_logits, eval_set, small_eval_set)```\n\n```\n{'exact_match': 83.0, 'f1': 88.25}```\n\nLooking good! Now let’s use this to fine-tune our model.\n\n### [](#fine-tuning-the-model)Fine-tuning the model\n\nWe are now ready to train our model. Let’s create it first, using the `AutoModelForQuestionAnswering` class like before:\n\n```\nmodel = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)```\n\nAs usual, we get a warning that some weights are not used (the ones from the pretraining head) and some others are initialized randomly (the ones for the question answering head). You should be used to this by now, but that means this model is not ready to be used just yet and needs fine-tuning — good thing we’re about to do that!\n\nTo be able to push our model to the Hub, we’ll need to log in to Hugging Face. If you’re running this code in a notebook, you can do so with the following utility function, which displays a widget where you can enter your login credentials:\n\n```\nfrom huggingface_hub import notebook_login\n\nnotebook_login()```\n\nIf you aren’t working in a notebook, just run `huggingface-cli login` in your terminal.\n\nOnce this is done, we can define our `TrainingArguments`. As we said when we defined our function to compute the metric, we won’t be able to have a regular evaluation loop because of the signature of the `compute_metrics()` function.
We could write our own subclass of `Trainer` to do this (an approach you can find in the [question answering example script](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/trainer_qa.py)), but that’s a bit too long for this section. Instead, we will only evaluate the model at the end of training here and show you how to do a regular evaluation in “A custom training loop” below.\n\nThis is really where the `Trainer` API shows its limits and the 🤗 Accelerate library shines: customizing the class to a specific use case can be painful, but tweaking a fully exposed training loop is easy.\n\nLet’s take a look at our `TrainingArguments`:\n\n```\nfrom transformers import TrainingArguments\n\nargs = TrainingArguments(\n    \"bert-finetuned-squad\",\n    evaluation_strategy=\"no\",\n    save_strategy=\"epoch\",\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    weight_decay=0.01,\n    fp16=True,\n    push_to_hub=True,\n)```\n\nWe’ve seen most of these before: we set some hyperparameters (like the learning rate, the number of epochs we train for, and some weight decay) and indicate that we want to save the model at the end of every epoch, skip evaluation, and upload our results to the Model Hub. We also enable mixed-precision training with `fp16=True`, as it can speed up the training nicely on a recent GPU.\n\nBy default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be in `\"sgugger/bert-finetuned-squad\"`. We can override this by passing a `hub_model_id`; for instance, to push the model to the `huggingface_course` organization we used `hub_model_id=\"huggingface_course/bert-finetuned-squad\"` (which is the model we linked to at the beginning of this section).\n\n💡 If the output directory you are using exists, it needs to be a local clone of the repository you want to push to (so set a new name if you get an error when defining your `Trainer`).\n\nFinally, we just pass everything to the `Trainer` class and launch the training:\n\n```\nfrom transformers import Trainer\n\ntrainer = Trainer(\n    model=model,\n    args=args,\n    train_dataset=train_dataset,\n    eval_dataset=validation_dataset,\n    tokenizer=tokenizer,\n)\ntrainer.train()```\n\nNote that while the training happens, each time the model is saved (here, every epoch) it is uploaded to the Hub in the background. This way, you will be able to resume your training on another machine if necessary. The whole training takes a while (a little over an hour on a Titan RTX), so you can grab a coffee or reread some of the parts of the course that you’ve found more challenging while it proceeds. Also note that as soon as the first epoch is finished, you will see some weights uploaded to the Hub and you can start playing with your model on its page.\n\nOnce the training is complete, we can finally evaluate our model (and pray we didn’t spend all that compute time on nothing). The `predict()` method of the `Trainer` will return a tuple where the first element will be the predictions of the model (here a pair with the start and end logits). We send this to our `compute_metrics()` function:\n\n```\npredictions, _, _ = trainer.predict(validation_dataset)\nstart_logits, end_logits = predictions\ncompute_metrics(start_logits, end_logits, validation_dataset, raw_datasets[\"validation\"])```\n\n```\n{'exact_match': 81.18259224219489, 'f1': 88.67381321905516}```\n\nGreat!
As a comparison, the baseline scores reported in the BERT article for this model are 80.8 and 88.5, so we’re right where we should be.\n\nFinally, we use the `push_to_hub()` method to make sure we upload the latest version of the model:\n\n```\ntrainer.push_to_hub(commit_message=\"Training complete\")```\n\nThis returns the URL of the commit it just did, if you want to inspect it:\n\n```\n'https://huggingface.co/sgugger/bert-finetuned-squad/commit/9dcee1fbc25946a6ed4bb32efb1bd71d5fa90b68'```\n\nThe `Trainer` also drafts a model card with all the evaluation results and uploads it.\n\nAt this stage, you can use the inference widget on the Model Hub to test the model and share it with your friends, family, and favorite pets. You have successfully fine-tuned a model on a question answering task — congratulations!\n\n✏️ **Your turn!** Try another model architecture to see if it performs better on this task!\n\nIf you want to dive a bit more deeply into the training loop, we will now show you how to do the same thing using 🤗 Accelerate.\n\n## [](#a-custom-training-loop)A custom training loop\n\nLet’s now have a look at the full training loop, so you can easily customize the parts you need. It will look a lot like the training loop in [Chapter 3](/course/chapter3/4), with the exception of the evaluation loop. We will be able to evaluate the model regularly since we’re not constrained by the `Trainer` class anymore.\n\n### [](#preparing-everything-for-training)Preparing everything for training\n\nFirst we need to build the `DataLoader`s from our datasets. We set the format of those datasets to `\"torch\"`, and remove the columns in the validation set that are not used by the model. Then, we can use the `default_data_collator` provided by Transformers as a `collate_fn` and shuffle the training set, but not the validation set:\n\n```\nfrom torch.utils.data import DataLoader\nfrom transformers import default_data_collator\n\ntrain_dataset.set_format(\"torch\")\nvalidation_set = validation_dataset.remove_columns([\"example_id\", \"offset_mapping\"])\nvalidation_set.set_format(\"torch\")\n\ntrain_dataloader = DataLoader(\n train_dataset,\n shuffle=True,\n collate_fn=default_data_collator,\n batch_size=8,\n)\neval_dataloader = DataLoader(\n validation_set, collate_fn=default_data_collator, batch_size=8\n)```\n\nNext we reinstantiate our model, to make sure we’re not continuing the fine-tuning from before but starting from the BERT pretrained model again:\n\n```\nmodel = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)```\n\nThen we will need an optimizer. As usual we use the classic `AdamW`, which is like Adam, but with a fix in the way weight decay is applied:\n\n```\nfrom torch.optim import AdamW\n\noptimizer = AdamW(model.parameters(), lr=2e-5)```\n\nOnce we have all those objects, we can send them to the `accelerator.prepare()` method. Remember that if you want to train on TPUs in a Colab notebook, you will need to move all of this code into a training function, and that shouldn’t execute any cell that instantiates an `Accelerator`. 
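If you do go the TPU route, here is roughly what that looks like (a minimal sketch, not code from this chapter: `training_function` is a placeholder name and its body is elided):\n\n```\nfrom accelerate import Accelerator, notebook_launcher\n\n\ndef training_function():\n    # Everything that touches the Accelerator must live inside this function\n    accelerator = Accelerator()\n    # ... build the dataloaders, model, optimizer, and scheduler here,\n    # send them through accelerator.prepare(), then run the training loop ...\n\n\n# notebook_launcher spawns the function on the available TPU cores (or GPUs/CPU)\nnotebook_launcher(training_function)```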
We can force mixed-precision training by passing `fp16=True` to the `Accelerator` (or, if you are executing the code as a script, just make sure to fill in the 🤗 Accelerate `config` appropriately).\n\n```\nfrom accelerate import Accelerator\n\naccelerator = Accelerator(fp16=True)\nmodel, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n    model, optimizer, train_dataloader, eval_dataloader\n)```\n\nAs you should know from the previous sections, we can only use the `train_dataloader` length to compute the number of training steps after it has gone through the `accelerator.prepare()` method. We use the same linear schedule as in the previous sections:\n\n```\nfrom transformers import get_scheduler\n\nnum_train_epochs = 3\nnum_update_steps_per_epoch = len(train_dataloader)\nnum_training_steps = num_train_epochs * num_update_steps_per_epoch\n\nlr_scheduler = get_scheduler(\n    \"linear\",\n    optimizer=optimizer,\n    num_warmup_steps=0,\n    num_training_steps=num_training_steps,\n)```\n\nTo push our model to the Hub, we will need to create a `Repository` object in a working folder. First log in to the Hugging Face Hub, if you’re not logged in already. We’ll determine the repository name from the model ID we want to give our model (feel free to replace the `repo_name` with your own choice; it just needs to contain your username, which is what the function `get_full_repo_name()` does):\n\n```\nfrom huggingface_hub import Repository, get_full_repo_name\n\nmodel_name = \"bert-finetuned-squad-accelerate\"\nrepo_name = get_full_repo_name(model_name)\nrepo_name```\n\n```\n'sgugger/bert-finetuned-squad-accelerate'```\n\nThen we can clone that repository in a local folder. If it already exists, this local folder should be a clone of the repository we are working with:\n\n```\noutput_dir = \"bert-finetuned-squad-accelerate\"\nrepo = Repository(output_dir, clone_from=repo_name)```\n\nWe can now upload anything we save in `output_dir` by calling the `repo.push_to_hub()` method. This will help us upload the intermediate models at the end of each epoch.\n\n### [](#training-loop)Training loop\n\nWe are now ready to write the full training loop. After defining a progress bar to follow how training goes, the loop has three parts:\n\n- The training in itself, which is the classic iteration over the `train_dataloader`, forward pass through the model, then backward pass and optimizer step.\n- The evaluation, in which we gather all the values for `start_logits` and `end_logits` before converting them to NumPy arrays. Once the evaluation loop is finished, we concatenate all the results. Note that we need to truncate because the `Accelerator` may have added a few samples at the end to ensure we have the same number of examples in each process.\n- Saving and uploading, where we first save the model and the tokenizer, then call `repo.push_to_hub()`. As we did before, we use the argument `blocking=False` to tell the 🤗 Hub library to push in an asynchronous process.
This way, training continues normally and this (long) instruction is executed in the background.\n\nHere’s the complete code for the training loop:\n\n```\nfrom tqdm.auto import tqdm\nimport torch\n\nprogress_bar = tqdm(range(num_training_steps))\n\nfor epoch in range(num_train_epochs):\n    # Training\n    model.train()\n    for step, batch in enumerate(train_dataloader):\n        outputs = model(**batch)\n        loss = outputs.loss\n        accelerator.backward(loss)\n\n        optimizer.step()\n        lr_scheduler.step()\n        optimizer.zero_grad()\n        progress_bar.update(1)\n\n    # Evaluation\n    model.eval()\n    start_logits = []\n    end_logits = []\n    accelerator.print(\"Evaluation!\")\n    for batch in tqdm(eval_dataloader):\n        with torch.no_grad():\n            outputs = model(**batch)\n\n        start_logits.append(accelerator.gather(outputs.start_logits).cpu().numpy())\n        end_logits.append(accelerator.gather(outputs.end_logits).cpu().numpy())\n\n    start_logits = np.concatenate(start_logits)\n    end_logits = np.concatenate(end_logits)\n    start_logits = start_logits[: len(validation_dataset)]\n    end_logits = end_logits[: len(validation_dataset)]\n\n    metrics = compute_metrics(\n        start_logits, end_logits, validation_dataset, raw_datasets[\"validation\"]\n    )\n    print(f\"epoch {epoch}:\", metrics)\n\n    # Save and upload\n    accelerator.wait_for_everyone()\n    unwrapped_model = accelerator.unwrap_model(model)\n    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)\n    if accelerator.is_main_process:\n        tokenizer.save_pretrained(output_dir)\n        repo.push_to_hub(\n            commit_message=f\"Training in progress epoch {epoch}\", blocking=False\n        )```\n\nIn case this is the first time you’re seeing a model saved with 🤗 Accelerate, let’s take a moment to inspect the three lines of code that go with it:\n\n```\naccelerator.wait_for_everyone()\nunwrapped_model = accelerator.unwrap_model(model)\nunwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)```\n\nThe first line is self-explanatory: it tells all the processes to wait until everyone is at that stage before continuing. This is to make sure we have the same model in every process before saving. Then we grab the `unwrapped_model`, which is the base model we defined. The `accelerator.prepare()` method changes the model to work in distributed training, so it won’t have the `save_pretrained()` method anymore; the `accelerator.unwrap_model()` method undoes that step. Lastly, we call `save_pretrained()` but tell that method to use `accelerator.save()` instead of `torch.save()`.\n\nOnce this is done, you should have a model that produces results pretty similar to the one trained with the `Trainer`. You can check the model we trained using this code at [_huggingface-course/bert-finetuned-squad-accelerate_](https://huggingface.co/huggingface-course/bert-finetuned-squad-accelerate). And if you want to test out any tweaks to the training loop, you can directly implement them by editing the code shown above!\n\n## [](#using-the-fine-tuned-model)Using the fine-tuned model\n\nWe’ve already shown you how you can use the model we fine-tuned on the Model Hub with the inference widget. To use it locally in a `pipeline`, you just have to specify the model identifier:\n\n```\nfrom transformers import pipeline\n\n\nmodel_checkpoint = \"huggingface-course/bert-finetuned-squad\"\nquestion_answerer = pipeline(\"question-answering\", model=model_checkpoint)\n\ncontext = \"\"\"\n🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration\nbetween them.
It's straightforward to train your models with one before loading them for inference with the other.\n\"\"\"\nquestion = \"Which deep learning libraries back 🤗 Transformers?\"\nquestion_answerer(question=question, context=context)```\n\n```\n{'score': 0.9979003071784973,\n 'start': 78,\n 'end': 105,\n 'answer': 'Jax, PyTorch and TensorFlow'}```\n\nGreat! Our model is working as well as the default one for this pipeline!
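If you prefer to skip the `pipeline` abstraction, you can also load the fine-tuned model and tokenizer directly and decode the answer yourself. Here is a minimal sketch that reuses the `question` and `context` defined above; it simply takes the argmax of the start and end logits, so it is less robust than the full post-processing we wrote earlier in this chapter:\n\n```\nimport torch\nfrom transformers import AutoModelForQuestionAnswering, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\nmodel = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)\n\ninputs = tokenizer(question, context, return_tensors=\"pt\")\nwith torch.no_grad():\n    outputs = model(**inputs)\n\n# Greedy decoding: take the most likely start and end tokens\nstart_index = outputs.start_logits.argmax().item()\nend_index = outputs.end_logits.argmax().item()\nanswer_tokens = inputs[\"input_ids\"][0][start_index : end_index + 1]\nprint(tokenizer.decode(answer_tokens))```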
\n\t\n\t
\n\t\n\t\n\t
\n\n\t

NLP Course documentation

Question answering

1,182
\n\t\t
\n\t\t\t
\"Hugging\n\t\t
Join the Hugging Face community
\n\t\t

and get access to the augmented documentation experience\n\t\t

\n\t\t\n\t\t
\n\t\t\t

to get started

\n\t\t\t\t

Question answering

\"Ask \"Open \"Open

Time to look at question answering! This task comes in many flavors, but the one we’ll focus on in this section is called extractive question answering. This involves posing questions about a document and identifying the answers as spans of text in the document itself.

We will fine-tune a BERT model on the SQuAD dataset, which consists of questions posed by crowdworkers on a set of Wikipedia articles. This will give us a model able to compute predictions like this one:

This is actually showcasing the model that was trained and uploaded to the Hub using the code shown in this section. You can find it and double-check the predictions here.

💡 Encoder-only models like BERT tend to be great at extracting answers to factoid questions like “Who invented the Transformer architecture?” but fare poorly when given open-ended questions like “Why is the sky blue?” In these more challenging cases, encoder-decoder models like T5 and BART are typically used to synthesize the information in a way that’s quite similar to text summarization. If you’re interested in this type of generative question answering, we recommend checking out our demo based on the ELI5 dataset.

Preparing the data

The dataset that is used the most as an academic benchmark for extractive question answering is SQuAD, so that’s the one we’ll use here. There is also a harder SQuAD v2 benchmark, which includes questions that don’t have an answer. As long as your own dataset contains a column for contexts, a column for questions, and a column for answers, you should be able to adapt the steps below.

The SQuAD dataset

As usual, we can download and cache the dataset in just one step thanks to load_dataset():

from datasets import load_dataset\n\nraw_datasets = load_dataset(\"squad\")

We can then have a look at this object to learn more about the SQuAD dataset:

raw_datasets
DatasetDict({\n    train: Dataset({\n        features: ['id', 'title', 'context', 'question', 'answers'],\n        num_rows: 87599\n    })\n    validation: Dataset({\n        features: ['id', 'title', 'context', 'question', 'answers'],\n        num_rows: 10570\n    })\n})

It looks like we have everything we need with the context, question, and answers fields, so let’s print those for the first element of our training set:

print(\"Context: \", raw_datasets[\"train\"][0][\"context\"])\nprint(\"Question: \", raw_datasets[\"train\"][0][\"question\"])\nprint(\"Answer: \", raw_datasets[\"train\"][0][\"answers\"])
Context: 'Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.'\nQuestion: 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?'\nAnswer: {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]}

The context and question fields are very straightforward to use. The answers field is a bit trickier as it comports a dictionary with two fields that are both lists. This is the format that will be expected by the squad metric during evaluation; if you are using your own data, you don’t necessarily need to worry about putting the answers in the same format. The text field is rather obvious, and the answer_start field contains the starting character index of each answer in the context.

During training, there is only one possible answer. We can double-check this by using the Dataset.filter() method:

raw_datasets[\"train\"].filter(lambda x: len(x[\"answers\"][\"text\"]) != 1)
Dataset({\n    features: ['id', 'title', 'context', 'question', 'answers'],\n    num_rows: 0\n})

For evaluation, however, there are several possible answers for each sample, which may be the same or different:

print(raw_datasets[\"validation\"][0][\"answers\"])\nprint(raw_datasets[\"validation\"][2][\"answers\"])
{'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}\n{'text': ['Santa Clara, California', \"Levi's Stadium\", \"Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.\"], 'answer_start': [403, 355, 355]}

We won’t dive into the evaluation script as it will all be wrapped up by a 🤗 Datasets metric for us, but the short version is that some of the questions have several possible answers, and this script will compare a predicted answer to all the acceptable answers and take the best score. If we take a look at the sample at index 2, for instance:

print(raw_datasets[\"validation\"][2][\"context\"])\nprint(raw_datasets[\"validation\"][2][\"question\"])
'Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi\\'s Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50.'\n'Where did Super Bowl 50 take place?'

we can see that the answer can indeed be one of the three possibilities we saw before.

Processing the training data

Let’s start with preprocessing the training data. The hard part will be to generate labels for the question’s answer, which will be the start and end positions of the tokens corresponding to the answer inside the context.

But let’s not get ahead of ourselves. First, we need to convert the text in the input into IDs the model can make sense of, using a tokenizer:

from transformers import AutoTokenizer\n\nmodel_checkpoint = \"bert-base-cased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

As mentioned previously, we’ll be fine-tuning a BERT model, but you can use any other model type as long as it has a fast tokenizer implemented. You can see all the architectures that come with a fast version in this big table, and to check that the tokenizer object you’re using is indeed backed by 🤗 Tokenizers you can look at its is_fast attribute:

tokenizer.is_fast
True

We can pass to our tokenizer the question and the context together, and it will properly insert the special tokens to form a sentence like this:

[CLS] question [SEP] context [SEP]

Let’s double-check:

context = raw_datasets[\"train\"][0][\"context\"]\nquestion = raw_datasets[\"train\"][0][\"question\"]\n\ninputs = tokenizer(question, context)\ntokenizer.decode(inputs[\"input_ids\"])
'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] Architecturally, '\n'the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin '\n'Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms '\n'upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred '\n'Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a '\n'replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette '\n'Soubirous in 1858. At the end of the main drive ( and in a direct line that connects through 3 statues '\n'and the Gold Dome ), is a simple, modern stone statue of Mary. [SEP]'

The labels will then be the index of the tokens starting and ending the answer, and the model will be tasked to predicted one start and end logit per token in the input, with the theoretical labels being as follow:

\"One-hot \"One-hot

In this case the context is not too long, but some of the examples in the dataset have very long contexts that will exceed the maximum length we set (which is 384 in this case). As we saw in Chapter 6 when we explored the internals of the question-answering pipeline, we will deal with long contexts by creating several training features from one sample of our dataset, with a sliding window between them.

To see how this works using the current example, we can limit the length to 100 and use a sliding window of 50 tokens. As a reminder, we use:

  • max_length to set the maximum length (here 100)
  • truncation=\"only_second\" to truncate the context (which is in the second position) when the question with its context is too long
  • stride to set the number of overlapping tokens between two successive chunks (here 50)
  • return_overflowing_tokens=True to let the tokenizer know we want the overflowing tokens
inputs = tokenizer(\n    question,\n    context,\n    max_length=100,\n    truncation=\"only_second\",\n    stride=50,\n    return_overflowing_tokens=True,\n)\n\nfor ids in inputs[\"input_ids\"]:\n    print(tokenizer.decode(ids))
'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basi [SEP]'\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin [SEP]'\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP] Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive ( and in a direct line that connects through 3 [SEP]'\n'[CLS] To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? [SEP]. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive ( and in a direct line that connects through 3 statues and the Gold Dome ), is a simple, modern stone statue of Mary. [SEP]'

As we can see, our example has been in split into four inputs, each of them containing the question and some part of the context. Note that the answer to the question (“Bernadette Soubirous”) only appears in the third and last inputs, so by dealing with long contexts in this way we will create some training examples where the answer is not included in the context. For those examples, the labels will be start_position = end_position = 0 (so we predict the [CLS] token). We will also set those labels in the unfortunate case where the answer has been truncated so that we only have the start (or end) of it. For the examples where the answer is fully in the context, the labels will be the index of the token where the answer starts and the index of the token where the answer ends.

The dataset provides us with the start character of the answer in the context, and by adding the length of the answer, we can find the end character in the context. To map those to token indices, we will need to use the offset mappings we studied in Chapter 6. We can have our tokenizer return these by passing along return_offsets_mapping=True:

inputs = tokenizer(\n    question,\n    context,\n    max_length=100,\n    truncation=\"only_second\",\n    stride=50,\n    return_overflowing_tokens=True,\n    return_offsets_mapping=True,\n)\ninputs.keys()
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping', 'overflow_to_sample_mapping'])

As we can see, we get back the usual input IDs, token type IDs, and attention mask, as well as the offset mapping we required and an extra key, overflow_to_sample_mapping. The corresponding value will be of use to us when we tokenize several texts at the same time (which we should do to benefit from the fact that our tokenizer is backed by Rust). Since one sample can give several features, it maps each feature to the example it originated from. Because here we only tokenized one example, we get a list of 0s:

inputs[\"overflow_to_sample_mapping\"]
[0, 0, 0, 0]

But if we tokenize more examples, this will become more useful:

inputs = tokenizer(\n    raw_datasets[\"train\"][2:6][\"question\"],\n    raw_datasets[\"train\"][2:6][\"context\"],\n    max_length=100,\n    truncation=\"only_second\",\n    stride=50,\n    return_overflowing_tokens=True,\n    return_offsets_mapping=True,\n)\n\nprint(f\"The 4 examples gave {len(inputs['input_ids'])} features.\")\nprint(f\"Here is where each comes from: {inputs['overflow_to_sample_mapping']}.\")
'The 4 examples gave 19 features.'\n'Here is where each comes from: [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3].'

As we can see, the first three examples (at indices 2, 3, and 4 in the training set) each gave four features and the last example (at index 5 in the training set) gave 7 features.

This information will be useful to map each feature we get to its corresponding label. As mentioned earlier, those labels are:

  • (0, 0) if the answer is not in the corresponding span of the context
  • (start_position, end_position) if the answer is in the corresponding span of the context, with start_position being the index of the token (in the input IDs) at the start of the answer and end_position being the index of the token (in the input IDs) where the answer ends

To determine which of these is the case and, if relevant, the positions of the tokens, we first find the indices that start and end the context in the input IDs. We could use the token type IDs to do this, but since those do not necessarily exist for all models (DistilBERT does not require them, for instance), we’ll instead use the sequence_ids() method of the BatchEncoding our tokenizer returns.

Once we have those token indices, we look at the corresponding offsets, which are tuples of two integers representing the span of characters inside the original context. We can thus detect if the chunk of the context in this feature starts after the answer or ends before the answer begins (in which case the label is (0, 0)). If that’s not the case, we loop to find the first and last token of the answer:

answers = raw_datasets[\"train\"][2:6][\"answers\"]\nstart_positions = []\nend_positions = []\n\nfor i, offset in enumerate(inputs[\"offset_mapping\"]):\n    sample_idx = inputs[\"overflow_to_sample_mapping\"][i]\n    answer = answers[sample_idx]\n    start_char = answer[\"answer_start\"][0]\n    end_char = answer[\"answer_start\"][0] + len(answer[\"text\"][0])\n    sequence_ids = inputs.sequence_ids(i)\n\n    # Find the start and end of the context\n    idx = 0\n    while sequence_ids[idx] != 1:\n        idx += 1\n    context_start = idx\n    while sequence_ids[idx] == 1:\n        idx += 1\n    context_end = idx - 1\n\n    # If the answer is not fully inside the context, label is (0, 0)\n    if offset[context_start][0] > start_char or offset[context_end][1] < end_char:\n        start_positions.append(0)\n        end_positions.append(0)\n    else:\n        # Otherwise it's the start and end token positions\n        idx = context_start\n        while idx <= context_end and offset[idx][0] <= start_char:\n            idx += 1\n        start_positions.append(idx - 1)\n\n        idx = context_end\n        while idx >= context_start and offset[idx][1] >= end_char:\n            idx -= 1\n        end_positions.append(idx + 1)\n\nstart_positions, end_positions
([83, 51, 19, 0, 0, 64, 27, 0, 34, 0, 0, 0, 67, 34, 0, 0, 0, 0, 0],\n [85, 53, 21, 0, 0, 70, 33, 0, 40, 0, 0, 0, 68, 35, 0, 0, 0, 0, 0])

Let’s take a look at a few results to verify that our approach is correct. For the first feature we find (83, 85) as labels, so let’s compare the theoretical answer with the decoded span of tokens from 83 to 85 (inclusive):

idx = 0\nsample_idx = inputs[\"overflow_to_sample_mapping\"][idx]\nanswer = answers[sample_idx][\"text\"][0]\n\nstart = start_positions[idx]\nend = end_positions[idx]\nlabeled_answer = tokenizer.decode(inputs[\"input_ids\"][idx][start : end + 1])\n\nprint(f\"Theoretical answer: {answer}, labels give: {labeled_answer}\")
'Theoretical answer: the Main Building, labels give: the Main Building'

So that’s a match! Now let’s check index 4, where we set the labels to (0, 0), which means the answer is not in the context chunk of that feature:

idx = 4\nsample_idx = inputs[\"overflow_to_sample_mapping\"][idx]\nanswer = answers[sample_idx][\"text\"][0]\n\ndecoded_example = tokenizer.decode(inputs[\"input_ids\"][idx])\nprint(f\"Theoretical answer: {answer}, decoded example: {decoded_example}\")
'Theoretical answer: a Marian place of prayer and reflection, decoded example: [CLS] What is the Grotto at Notre Dame? [SEP] Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grot [SEP]'

Indeed, we don’t see the answer inside the context.

✏️ Your turn! When using the XLNet architecture, padding is applied on the left and the question and context are switched. Adapt all the code we just saw to the XLNet architecture (and add padding=True). Be aware that the [CLS] token may not be at the 0 position with padding applied.

Now that we have seen step by step how to preprocess our training data, we can group it in a function we will apply on the whole training dataset. We’ll pad every feature to the maximum length we set, as most of the contexts will be long (and the corresponding samples will be split into several features), so there is no real benefit to applying dynamic padding here:

max_length = 384\nstride = 128\n\n\ndef preprocess_training_examples(examples):\n    questions = [q.strip() for q in examples[\"question\"]]\n    inputs = tokenizer(\n        questions,\n        examples[\"context\"],\n        max_length=max_length,\n        truncation=\"only_second\",\n        stride=stride,\n        return_overflowing_tokens=True,\n        return_offsets_mapping=True,\n        padding=\"max_length\",\n    )\n\n    offset_mapping = inputs.pop(\"offset_mapping\")\n    sample_map = inputs.pop(\"overflow_to_sample_mapping\")\n    answers = examples[\"answers\"]\n    start_positions = []\n    end_positions = []\n\n    for i, offset in enumerate(offset_mapping):\n        sample_idx = sample_map[i]\n        answer = answers[sample_idx]\n        start_char = answer[\"answer_start\"][0]\n        end_char = answer[\"answer_start\"][0] + len(answer[\"text\"][0])\n        sequence_ids = inputs.sequence_ids(i)\n\n        # Find the start and end of the context\n        idx = 0\n        while sequence_ids[idx] != 1:\n            idx += 1\n        context_start = idx\n        while sequence_ids[idx] == 1:\n            idx += 1\n        context_end = idx - 1\n\n        # If the answer is not fully inside the context, label is (0, 0)\n        if offset[context_start][0] > start_char or offset[context_end][1] < end_char:\n            start_positions.append(0)\n            end_positions.append(0)\n        else:\n            # Otherwise it's the start and end token positions\n            idx = context_start\n            while idx <= context_end and offset[idx][0] <= start_char:\n                idx += 1\n            start_positions.append(idx - 1)\n\n            idx = context_end\n            while idx >= context_start and offset[idx][1] >= end_char:\n                idx -= 1\n            end_positions.append(idx + 1)\n\n    inputs[\"start_positions\"] = start_positions\n    inputs[\"end_positions\"] = end_positions\n    return inputs

Note that we defined two constants to determine the maximum length used as well as the length of the sliding window, and that we added a tiny bit of cleanup before tokenizing: some of the questions in the SQuAD dataset have extra spaces at the beginning and the end that don’t add anything (and take up space when being tokenized if you use a model like RoBERTa), so we removed those extra spaces.

To apply this function to the whole training set, we use the Dataset.map() method with the batched=True flag. It’s necessary here as we are changing the length of the dataset (since one example can give several training features):

```
train_dataset = raw_datasets["train"].map(
    preprocess_training_examples,
    batched=True,
    remove_columns=raw_datasets["train"].column_names,
)
len(raw_datasets["train"]), len(train_dataset)
```

```
(87599, 88729)
```

As we can see, the preprocessing added roughly 1,000 features. Our training set is now ready to be used — let’s dig into the preprocessing of the validation set!

### Processing the validation data

Preprocessing the validation data will be slightly easier as we don’t need to generate labels (unless we want to compute a validation loss, but that number won’t really help us understand how good the model is). The real joy will be to interpret the predictions of the model into spans of the original context. For this, we will just need to store both the offset mappings and some way to match each created feature to the original example it comes from. Since there is an ID column in the original dataset, we’ll use that ID.

The only thing we’ll add here is a tiny bit of cleanup of the offset mappings. They will contain offsets for the question and the context, but once we’re in the post-processing stage we won’t have any way to know which part of the input IDs corresponded to the context and which part was the question (the sequence_ids() method we used is available for the output of the tokenizer only). So, we’ll set the offsets corresponding to the question to None:

```
def preprocess_validation_examples(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions,
        examples["context"],
        max_length=max_length,
        truncation="only_second",
        stride=stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    sample_map = inputs.pop("overflow_to_sample_mapping")
    example_ids = []

    for i in range(len(inputs["input_ids"])):
        sample_idx = sample_map[i]
        example_ids.append(examples["id"][sample_idx])

        sequence_ids = inputs.sequence_ids(i)
        offset = inputs["offset_mapping"][i]
        inputs["offset_mapping"][i] = [
            o if sequence_ids[k] == 1 else None for k, o in enumerate(offset)
        ]

    inputs["example_id"] = example_ids
    return inputs
```

We can apply this function on the whole validation dataset like before:

```
validation_dataset = raw_datasets["validation"].map(
    preprocess_validation_examples,
    batched=True,
    remove_columns=raw_datasets["validation"].column_names,
)
len(raw_datasets["validation"]), len(validation_dataset)
```

```
(10570, 10822)
```

In this case we’ve only added a couple of hundred samples, so it appears the contexts in the validation dataset are a bit shorter.
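If you’re curious, here is a small optional check (using the example_id column we just stored) that counts how many validation examples were actually split into more than one feature:

```
import collections

feature_counts = collections.Counter(validation_dataset["example_id"])
num_split = sum(1 for count in feature_counts.values() if count > 1)
print(f"{num_split} validation examples were split into several features")
```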

Now that we have preprocessed all the data, we can get to the training.

## Fine-tuning the model with the Trainer API

The training code for this example will look a lot like the code in the previous sections — the hardest thing will be to write the compute_metrics() function. Since we padded all the samples to the maximum length we set, there is no data collator to define, so this metric computation is really the only thing we have to worry about. The difficult part will be to post-process the model predictions into spans of text in the original examples; once we have done that, the metric from the 🤗 Datasets library will do most of the work for us.

### Post-processing

The model will output logits for the start and end positions of the answer in the input IDs, as we saw during our exploration of the question-answering pipeline. The post-processing step will be similar to what we did there, so here’s a quick reminder of the actions we took:

  • We masked the start and end logits corresponding to tokens outside of the context.
  • We then converted the start and end logits into probabilities using a softmax.
  • We attributed a score to each (start_token, end_token) pair by taking the product of the corresponding two probabilities.
  • We looked for the pair with the maximum score that yielded a valid answer (e.g., a start_token lower than end_token).

Here we will change this process slightly because we don’t need to compute actual scores (just the predicted answer). This means we can skip the softmax step. To go faster, we also won’t score all the possible (start_token, end_token) pairs, but only the ones corresponding to the highest n_best logits (with n_best=20). Since we will skip the softmax, those scores will be logit scores, and will be obtained by taking the sum of the start and end logits (instead of the product, because of the rule $\log(ab) = \log(a) + \log(b)$).
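To see why summing the logits gives the same ranking as multiplying the softmax probabilities, here is a tiny sketch with made-up logit values (these are not model outputs, just an illustration):

```
import numpy as np


def softmax(x):
    exps = np.exp(x - x.max())
    return exps / exps.sum()


# Made-up logits for three candidate start positions and three candidate end positions
toy_start_logits = np.array([2.0, 0.5, -1.0])
toy_end_logits = np.array([1.5, 0.2, -0.5])

prob_scores = np.outer(softmax(toy_start_logits), softmax(toy_end_logits))
logit_scores = toy_start_logits[:, None] + toy_end_logits[None, :]

# Both score matrices order the (start, end) pairs identically
print(np.argsort(prob_scores.ravel()))
print(np.argsort(logit_scores.ravel()))
```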

To demonstrate all of this, we will need some kind of predictions. Since we have not trained our model yet, we are going to use the default model for the QA pipeline to generate some predictions on a small part of the validation set. We can use the same processing function as before; because it relies on the global constant tokenizer, we just have to change that object to the tokenizer of the model we want to use temporarily:

```
small_eval_set = raw_datasets["validation"].select(range(100))
trained_checkpoint = "distilbert-base-cased-distilled-squad"

tokenizer = AutoTokenizer.from_pretrained(trained_checkpoint)
eval_set = small_eval_set.map(
    preprocess_validation_examples,
    batched=True,
    remove_columns=raw_datasets["validation"].column_names,
)
```

Now that the preprocessing is done, we change the tokenizer back to the one we originally picked:

```
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```

We then remove the columns of our eval_set that are not expected by the model, build a batch with all of that small validation set, and pass it through the model. If a GPU is available, we use it to go faster:

```
import torch
from transformers import AutoModelForQuestionAnswering

eval_set_for_model = eval_set.remove_columns(["example_id", "offset_mapping"])
eval_set_for_model.set_format("torch")

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
batch = {k: eval_set_for_model[k].to(device) for k in eval_set_for_model.column_names}
trained_model = AutoModelForQuestionAnswering.from_pretrained(trained_checkpoint).to(
    device
)

with torch.no_grad():
    outputs = trained_model(**batch)
```

Since the Trainer will give us predictions as NumPy arrays, we grab the start and end logits and convert them to that format:

```
start_logits = outputs.start_logits.cpu().numpy()
end_logits = outputs.end_logits.cpu().numpy()
```

Now, we need to find the predicted answer for each example in our small_eval_set. One example may have been split into several features in eval_set, so the first step is to map each example in small_eval_set to the corresponding features in eval_set:

```
import collections

example_to_features = collections.defaultdict(list)
for idx, feature in enumerate(eval_set):
    example_to_features[feature["example_id"]].append(idx)
```

With this in hand, we can really get to work by looping through all the examples and, for each example, through all the associated features. As we said before, we’ll look at the logit scores for the n_best start logits and end logits, excluding positions that give:

  • An answer that wouldn’t be inside the context
  • An answer with negative length
  • An answer that is too long (we limit the possibilities at max_answer_length=30)

Once we have all the scored possible answers for one example, we just pick the one with the best logit score:

```
import numpy as np

n_best = 20
max_answer_length = 30
predicted_answers = []

for example in small_eval_set:
    example_id = example["id"]
    context = example["context"]
    answers = []

    for feature_index in example_to_features[example_id]:
        start_logit = start_logits[feature_index]
        end_logit = end_logits[feature_index]
        offsets = eval_set["offset_mapping"][feature_index]

        start_indexes = np.argsort(start_logit)[-1 : -n_best - 1 : -1].tolist()
        end_indexes = np.argsort(end_logit)[-1 : -n_best - 1 : -1].tolist()
        for start_index in start_indexes:
            for end_index in end_indexes:
                # Skip answers that are not fully in the context
                if offsets[start_index] is None or offsets[end_index] is None:
                    continue
                # Skip answers with a length that is either < 0 or > max_answer_length.
                if (
                    end_index < start_index
                    or end_index - start_index + 1 > max_answer_length
                ):
                    continue

                answers.append(
                    {
                        "text": context[offsets[start_index][0] : offsets[end_index][1]],
                        "logit_score": start_logit[start_index] + end_logit[end_index],
                    }
                )

    best_answer = max(answers, key=lambda x: x["logit_score"])
    predicted_answers.append({"id": example_id, "prediction_text": best_answer["text"]})
```

The final format of the predicted answers is the one that will be expected by the metric we will use. As usual, we can load it with the help of the 🤗 Evaluate library:

```
import evaluate

metric = evaluate.load("squad")
```

This metric expects the predicted answers in the format we saw above (a list of dictionaries with one key for the ID of the example and one key for the predicted text) and the theoretical answers in the format below (a list of dictionaries with one key for the ID of the example and one key for the possible answers):

```
theoretical_answers = [
    {"id": ex["id"], "answers": ex["answers"]} for ex in small_eval_set
]
```

We can now check that we get sensible results by looking at the first element of both lists:

```
print(predicted_answers[0])
print(theoretical_answers[0])
```

```
{'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
{'id': '56be4db0acb8001400a502ec', 'answers': {'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}}
```

Not too bad! Now let’s have a look at the score the metric gives us:

```
metric.compute(predictions=predicted_answers, references=theoretical_answers)
```

```
{'exact_match': 83.0, 'f1': 88.25}
```

Again, that’s rather good considering that, according to its paper, DistilBERT fine-tuned on SQuAD obtains 79.1 and 86.9 for those scores on the whole dataset.

Now let’s put everything we just did in a compute_metrics() function that we will use in the Trainer. Normally, that compute_metrics() function only receives a tuple eval_preds with logits and labels. Here we will need a bit more, as we have to look in the dataset of features for the offset and in the dataset of examples for the original contexts, so we won’t be able to use this function to get regular evaluation results during training. We will only use it at the end of training to check the results.

The compute_metrics() function groups the same steps as before; we just add a small check in case we don’t come up with any valid answers (in which case we predict an empty string).

```
from tqdm.auto import tqdm


def compute_metrics(start_logits, end_logits, features, examples):
    example_to_features = collections.defaultdict(list)
    for idx, feature in enumerate(features):
        example_to_features[feature["example_id"]].append(idx)

    predicted_answers = []
    for example in tqdm(examples):
        example_id = example["id"]
        context = example["context"]
        answers = []

        # Loop through all features associated with that example
        for feature_index in example_to_features[example_id]:
            start_logit = start_logits[feature_index]
            end_logit = end_logits[feature_index]
            offsets = features[feature_index]["offset_mapping"]

            start_indexes = np.argsort(start_logit)[-1 : -n_best - 1 : -1].tolist()
            end_indexes = np.argsort(end_logit)[-1 : -n_best - 1 : -1].tolist()
            for start_index in start_indexes:
                for end_index in end_indexes:
                    # Skip answers that are not fully in the context
                    if offsets[start_index] is None or offsets[end_index] is None:
                        continue
                    # Skip answers with a length that is either < 0 or > max_answer_length
                    if (
                        end_index < start_index
                        or end_index - start_index + 1 > max_answer_length
                    ):
                        continue

                    answer = {
                        "text": context[offsets[start_index][0] : offsets[end_index][1]],
                        "logit_score": start_logit[start_index] + end_logit[end_index],
                    }
                    answers.append(answer)

        # Select the answer with the best score
        if len(answers) > 0:
            best_answer = max(answers, key=lambda x: x["logit_score"])
            predicted_answers.append(
                {"id": example_id, "prediction_text": best_answer["text"]}
            )
        else:
            predicted_answers.append({"id": example_id, "prediction_text": ""})

    theoretical_answers = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
    return metric.compute(predictions=predicted_answers, references=theoretical_answers)
```

We can check it works on our predictions:

```
compute_metrics(start_logits, end_logits, eval_set, small_eval_set)
```

```
{'exact_match': 83.0, 'f1': 88.25}
```

Looking good! Now let’s use this to fine-tune our model.

### Fine-tuning the model

We are now ready to train our model. Let’s create it first, using the AutoModelForQuestionAnswering class like before:

```
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
```

As usual, we get a warning that some weights are not used (the ones from the pretraining head) and some others are initialized randomly (the ones for the question answering head). You should be used to this by now, but that means this model is not ready to be used just yet and needs fine-tuning — good thing we’re about to do that!

To be able to push our model to the Hub, we’ll need to log in to Hugging Face. If you’re running this code in a notebook, you can do so with the following utility function, which displays a widget where you can enter your login credentials:

```
from huggingface_hub import notebook_login

notebook_login()
```

If you aren’t working in a notebook, just type the following line in your terminal:

```
huggingface-cli login
```

Once this is done, we can define our TrainingArguments. As we said when we defined our function to compute the metric, we won’t be able to have a regular evaluation loop because of the signature of the compute_metrics() function. We could write our own subclass of Trainer to do this (an approach you can find in the question answering example script), but that’s a bit too long for this section. Instead, we will only evaluate the model at the end of training here and show you how to do a regular evaluation in “A custom training loop” below.

This is really where the Trainer API shows its limits and the 🤗 Accelerate library shines: customizing the class to a specific use case can be painful, but tweaking a fully exposed training loop is easy.
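To give you an idea of what such a subclass could look like, here is a rough sketch (not the official example script) that reuses the compute_metrics() function and the datasets defined above; you would also need an evaluation_strategy other than "no" for it to be called during training:

```
from transformers import Trainer


class QuestionAnsweringTrainer(Trainer):
    def __init__(self, *args, eval_examples=None, **kwargs):
        super().__init__(*args, **kwargs)
        # The raw examples (with contexts and gold answers), as opposed to the features
        self.eval_examples = eval_examples

    def evaluate(self, eval_dataset=None, eval_examples=None, **kwargs):
        eval_dataset = eval_dataset if eval_dataset is not None else self.eval_dataset
        eval_examples = eval_examples if eval_examples is not None else self.eval_examples

        # predict() runs the prediction loop and returns the start and end logits
        start_logits, end_logits = self.predict(eval_dataset).predictions
        metrics = compute_metrics(start_logits, end_logits, eval_dataset, eval_examples)
        self.log(metrics)
        return metrics
```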

Let’s take a look at our TrainingArguments:

```
from transformers import TrainingArguments

args = TrainingArguments(
    "bert-finetuned-squad",
    evaluation_strategy="no",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
    fp16=True,
    push_to_hub=True,
)
```

We’ve seen most of these before: we set some hyperparameters (like the learning rate, the number of epochs we train for, and some weight decay) and indicate that we want to save the model at the end of every epoch, skip evaluation, and upload our results to the Model Hub. We also enable mixed-precision training with fp16=True, as it can speed up the training nicely on a recent GPU.

By default, the repository used will be in your namespace and named after the output directory you set, so in our case it will be in "sgugger/bert-finetuned-squad". We can override this by passing a hub_model_id; for instance, to push the model to the huggingface_course organization we used hub_model_id="huggingface_course/bert-finetuned-squad" (which is the model we linked to at the beginning of this section).
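Concretely, that looks like the following sketch (only do this if you actually have write access to the organization; otherwise keep the repository in your own namespace):

```
from transformers import TrainingArguments

args = TrainingArguments(
    "bert-finetuned-squad",
    hub_model_id="huggingface_course/bert-finetuned-squad",
    # ... same hyperparameters as above ...
    push_to_hub=True,
)
```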

💡 If the output directory you are using exists, it needs to be a local clone of the repository you want to push to (so set a new name if you get an error when defining your Trainer).

Finally, we just pass everything to the Trainer class and launch the training:

```
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=validation_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Note that while the training happens, each time the model is saved (here, every epoch) it is uploaded to the Hub in the background. This way, you will be able to resume your training on another machine if necessary. The whole training takes a while (a little over an hour on a Titan RTX), so you can grab a coffee or reread some of the parts of the course that you’ve found more challenging while it proceeds. Also note that as soon as the first epoch is finished, you will see some weights uploaded to the Hub, and you can start playing with your model on its page.
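As a side note, if your run gets interrupted you can pick it back up from the last checkpoint saved locally in the output directory (a one-line sketch; the checkpoint folders come from the save_strategy we chose):

```
# Resume from the most recent checkpoint-* folder inside "bert-finetuned-squad"
trainer.train(resume_from_checkpoint=True)
```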

Once the training is complete, we can finally evaluate our model (and pray we didn’t spend all that compute time on nothing). The predict() method of the Trainer will return a tuple whose first element is the predictions of the model (here a pair with the start and end logits). We send this to our compute_metrics() function:

```
predictions, _, _ = trainer.predict(validation_dataset)
start_logits, end_logits = predictions
compute_metrics(start_logits, end_logits, validation_dataset, raw_datasets["validation"])
```

```
{'exact_match': 81.18259224219489, 'f1': 88.67381321905516}
```

Great! As a comparison, the baseline scores reported in the BERT article for this model are 80.8 and 88.5, so we’re right where we should be.

Finally, we use the push_to_hub() method to make sure we upload the latest version of the model:

```
trainer.push_to_hub(commit_message="Training complete")
```

This returns the URL of the commit it just did, if you want to inspect it:

```
'https://huggingface.co/sgugger/bert-finetuned-squad/commit/9dcee1fbc25946a6ed4bb32efb1bd71d5fa90b68'
```

The Trainer also drafts a model card with all the evaluation results and uploads it.

At this stage, you can use the inference widget on the Model Hub to test the model and share it with your friends, family, and favorite pets. You have successfully fine-tuned a model on a question answering task — congratulations!

✏️ Your turn! Try another model architecture to see if it performs better on this task!

If you want to dive a bit more deeply into the training loop, we will now show you how to do the same thing using 🤗 Accelerate.

## A custom training loop

Let’s now have a look at the full training loop, so you can easily customize the parts you need. It will look a lot like the training loop in Chapter 3, with the exception of the evaluation loop. We will be able to evaluate the model regularly since we’re not constrained by the Trainer class anymore.

### Preparing everything for training

First we need to build the DataLoaders from our datasets. We set the format of those datasets to "torch", and remove the columns in the validation set that are not used by the model. Then, we can use the default_data_collator provided by 🤗 Transformers as a collate_fn and shuffle the training set, but not the validation set:

```
from torch.utils.data import DataLoader
from transformers import default_data_collator

train_dataset.set_format("torch")
validation_set = validation_dataset.remove_columns(["example_id", "offset_mapping"])
validation_set.set_format("torch")

train_dataloader = DataLoader(
    train_dataset,
    shuffle=True,
    collate_fn=default_data_collator,
    batch_size=8,
)
eval_dataloader = DataLoader(
    validation_set, collate_fn=default_data_collator, batch_size=8
)
```

Next we reinstantiate our model, to make sure we’re not continuing the fine-tuning from before but starting from the BERT pretrained model again:

```
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
```

Then we will need an optimizer. As usual we use the classic AdamW, which is like Adam, but with a fix in the way weight decay is applied:

```
from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=2e-5)
```

Once we have all those objects, we can send them to the accelerator.prepare() method. Remember that if you want to train on TPUs in a Colab notebook, you will need to move all of this code into a training function, and that function shouldn’t execute any cell that instantiates an Accelerator. We can force mixed-precision training by passing fp16=True to the Accelerator (or, if you are executing the code as a script, just make sure to fill in the 🤗 Accelerate config appropriately).

```
from accelerate import Accelerator

accelerator = Accelerator(fp16=True)
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)
```

As you should know from the previous sections, we can only use the train_dataloader length to compute the number of training steps after it has gone through the accelerator.prepare() method. We use the same linear schedule as in the previous sections:

```
from transformers import get_scheduler

num_train_epochs = 3
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch

lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)
```

To push our model to the Hub, we will need to create a Repository object in a working folder. First log in to the Hugging Face Hub, if you’re not logged in already. We’ll determine the repository name from the model ID we want to give our model (feel free to replace the repo_name with your own choice; it just needs to contain your username, which is what the function get_full_repo_name() does):

```
from huggingface_hub import Repository, get_full_repo_name

model_name = "bert-finetuned-squad-accelerate"
repo_name = get_full_repo_name(model_name)
repo_name
```

```
'sgugger/bert-finetuned-squad-accelerate'
```

Then we can clone that repository in a local folder. If it already exists, this local folder should be a clone of the repository we are working with:

```
output_dir = "bert-finetuned-squad-accelerate"
repo = Repository(output_dir, clone_from=repo_name)
```

We can now upload anything we save in output_dir by calling the repo.push_to_hub() method. This will help us upload the intermediate models at the end of each epoch.

### Training loop

We are now ready to write the full training loop. After defining a progress bar to follow how training goes, the loop has three parts:

  • The training in itself, which is the classic iteration over the train_dataloader, forward pass through the model, then backward pass and optimizer step.
  • The evaluation, in which we gather all the values for start_logits and end_logits before converting them to NumPy arrays. Once the evaluation loop is finished, we concatenate all the results. Note that we need to truncate because the Accelerator may have added a few samples at the end to ensure we have the same number of examples in each process.
  • Saving and uploading, where we first save the model and the tokenizer, then call repo.push_to_hub(). As we did before, we use the argument blocking=False to tell the 🤗 Hub library to push in an asynchronous process. This way, training continues normally and this (long) instruction is executed in the background.

Here’s the complete code for the training loop:

```
from tqdm.auto import tqdm
import torch

progress_bar = tqdm(range(num_training_steps))

for epoch in range(num_train_epochs):
    # Training
    model.train()
    for step, batch in enumerate(train_dataloader):
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    # Evaluation
    model.eval()
    start_logits = []
    end_logits = []
    accelerator.print("Evaluation!")
    for batch in tqdm(eval_dataloader):
        with torch.no_grad():
            outputs = model(**batch)

        start_logits.append(accelerator.gather(outputs.start_logits).cpu().numpy())
        end_logits.append(accelerator.gather(outputs.end_logits).cpu().numpy())

    start_logits = np.concatenate(start_logits)
    end_logits = np.concatenate(end_logits)
    start_logits = start_logits[: len(validation_dataset)]
    end_logits = end_logits[: len(validation_dataset)]

    metrics = compute_metrics(
        start_logits, end_logits, validation_dataset, raw_datasets["validation"]
    )
    print(f"epoch {epoch}:", metrics)

    # Save and upload
    accelerator.wait_for_everyone()
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)
    if accelerator.is_main_process:
        tokenizer.save_pretrained(output_dir)
        repo.push_to_hub(
            commit_message=f"Training in progress epoch {epoch}", blocking=False
        )
```

In case this is the first time you’re seeing a model saved with 🤗 Accelerate, let’s take a moment to inspect the three lines of code that go with it:

```
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)
```

The first line is self-explanatory: it tells all the processes to wait until everyone is at that stage before continuing. This is to make sure we have the same model in every process before saving. Then we grab the unwrapped_model, which is the base model we defined. The accelerator.prepare() method changes the model to work in distributed training, so it won’t have the save_pretrained() method anymore; the accelerator.unwrap_model() method undoes that step. Lastly, we call save_pretrained() but tell that method to use accelerator.save() instead of torch.save().
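Once saved this way, the checkpoint in output_dir is a regular 🤗 Transformers checkpoint, so as a quick sanity check you can reload it like any other model:

```
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

reloaded_model = AutoModelForQuestionAnswering.from_pretrained(output_dir)
reloaded_tokenizer = AutoTokenizer.from_pretrained(output_dir)
```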

Once this is done, you should have a model that produces results pretty similar to the one trained with the Trainer. You can check the model we trained using this code at huggingface-course/bert-finetuned-squad-accelerate. And if you want to test out any tweaks to the training loop, you can directly implement them by editing the code shown above!

## Using the fine-tuned model

We’ve already shown you how you can use the model we fine-tuned on the Model Hub with the inference widget. To use it locally in a pipeline, you just have to specify the model identifier:

```
from transformers import pipeline

# Replace this with your own checkpoint
model_checkpoint = "huggingface-course/bert-finetuned-squad"
question_answerer = pipeline("question-answering", model=model_checkpoint)

context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer(question=question, context=context)
```

```
{'score': 0.9979003071784973,
 'start': 78,
 'end': 105,
 'answer': 'Jax, PyTorch and TensorFlow'}
```

Great! Our model is working as well as the default one for this pipeline!

## Mastering NLP

\"Ask

If you’ve made it this far in the course, congratulations — you now have all the knowledge and tools you need to tackle (almost) any NLP task with 🤗 Transformers and the Hugging Face ecosystem!

We have seen a lot of different data collators, so we made this little video to help you find which one to use for each task:

After completing this lightning tour through the core NLP tasks, you should:

  • Know which architectures (encoder, decoder, or encoder-decoder) are best suited for each task
  • Understand the difference between pretraining and fine-tuning a language model
  • Know how to train Transformer models using either the Trainer API and distributed training features of 🤗 Accelerate or TensorFlow and Keras, depending on which track you’ve been following
  • Understand the meaning and limitations of metrics like ROUGE and BLEU for text generation tasks
  • Know how to interact with your fine-tuned models, both on the Hub and using the pipeline from 🤗 Transformers

Despite all this knowledge, there will come a time when you’ll either encounter a difficult bug in your code or have a question about how to solve a particular NLP problem. Fortunately, the Hugging Face community is here to help you! In the final chapter of this part of the course, we’ll explore how you can debug your Transformer models and ask for help effectively.

## End-of-chapter quiz

\"Ask

Let’s test what you learned in this chapter!

1. Which of the following tasks can be framed as a token classification problem?

2. What part of the preprocessing for token classification differs from the other preprocessing pipelines?

3. What problem arises when we tokenize the words in a token classification problem and want to label the tokens?

4. What does “domain adaptation” mean?

5. What are the labels in a masked language modeling problem?

6. Which of these tasks can be seen as a sequence-to-sequence problem?

7. What is the proper way to preprocess the data for a sequence-to-sequence problem?

8. Why is there a specific subclass of Trainer for sequence-to-sequence problems?

10. When should you pretrain a new model?

11. Why is it easy to pretrain a language model on lots and lots of texts?

12. What are the main challenges when preprocessing data for a question answering task?

13. How is post-processing usually done in question answering?

## Introduction

\"Ask

Now that you know how to tackle the most common NLP tasks with 🤗 Transformers, you should be able to get started on your own projects! In this chapter we will explore what to do when you hit a problem. You’ll learn how to successfully debug your code or your training, and how to ask the community for help if you don’t manage to solve the problem by yourself. And if you think you’ve found a bug in one of the Hugging Face libraries, we’ll show you the best way to report it so that the issue is resolved as quickly as possible.

More precisely, in this chapter you will learn:

  • The first thing to do when you get an error
  • How to ask for help on the forums
  • How to debug your training pipeline
  • How to write a good issue

None of this is specifically related to 🤗 Transformers or the Hugging Face ecosystem, of course; the lessons from this chapter are applicable to most open source projects!

## What to do when you get an error

In this section we’ll look at some common errors that can occur when you’re trying to generate predictions from your freshly tuned Transformer model. This will prepare you for [section 4](/course/chapter8/section4), where we’ll explore how to debug the training phase itself.

We’ve prepared a [template model repository](https://huggingface.co/lewtun/distilbert-base-uncased-finetuned-squad-d5716d28) for this section, and if you want to run the code in this chapter you’ll first need to copy the model to your account on the [Hugging Face Hub](https://huggingface.co/). To do so, first log in by running either the following in a Jupyter notebook:

```
from huggingface_hub import notebook_login

notebook_login()
```

or the following in your favorite terminal:

```
huggingface-cli login
```

This will prompt you to enter your username and password, and will save a token under _~/.cache/huggingface/_.
Once you’ve logged in, you can copy the template repository with the following function:\n\n```\nfrom distutils.dir_util import copy_tree\nfrom huggingface_hub import Repository, snapshot_download, create_repo, get_full_repo_name\n\n\ndef copy_repository_template():\n \n template_repo_id = \"lewtun/distilbert-base-uncased-finetuned-squad-d5716d28\"\n commit_hash = \"be3eaffc28669d7932492681cd5f3e8905e358b4\"\n template_repo_dir = snapshot_download(template_repo_id, revision=commit_hash)\n \n model_name = template_repo_id.split(\"/\")[1]\n create_repo(model_name, exist_ok=True)\n \n new_repo_id = get_full_repo_name(model_name)\n new_repo_dir = model_name\n repo = Repository(local_dir=new_repo_dir, clone_from=new_repo_id)\n \n copy_tree(template_repo_dir, new_repo_dir)\n \n repo.push_to_hub()```\n\nNow when you call `copy_repository_template()`, it will create a copy of the template repository under your account.\n\n## [](#debugging-the-pipeline-from-transformers)Debugging the pipeline from 🤗 Transformers\n\nTo kick off our journey into the wonderful world of debugging Transformer models, consider the following scenario: you’re working with a colleague on a question answering project to help the customers of an e-commerce website find answers about consumer products. Your colleague shoots you a message like:\n\n> G’day! I just ran an experiment using the techniques in [Chapter 7](/course/chapter7/7) of the Hugging Face course and got some great results on SQuAD! I think we can use this model as a starting point for our project. The model ID on the Hub is “lewtun/distillbert-base-uncased-finetuned-squad-d5716d28”. Feel free to test it out :)\n\nand the first thing you think of is to load the model using the `pipeline` from 🤗 Transformers:\n\n```\nfrom transformers import pipeline\n\nmodel_checkpoint = get_full_repo_name(\"distillbert-base-uncased-finetuned-squad-d5716d28\")\nreader = pipeline(\"question-answering\", model=model_checkpoint)```\n\n```\n\"\"\"\nOSError: Can't load config for 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28'. Make sure that:\n\n- 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is a correct model identifier listed on 'https://huggingface.co/models'\n\n- or 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is the correct path to a directory containing a config.json file\n\"\"\"```\n\nOh no, something seems to have gone wrong! If you’re new to programming, these kind of errors can seem a bit cryptic at first (what even is an `OSError`?!). The error displayed here is just the last part of a much larger error report called a _Python traceback_ (aka stack trace). For example, if you’re running this code on Google Colab, you should see something like the following screenshot:\n\n![A Python traceback.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/traceback.png)\n\nThere’s a lot of information contained in these reports, so let’s walk through the key parts together. The first thing to note is that tracebacks should be read _from bottom to top_. This might sound weird if you’re used to reading English text from top to bottom, but it reflects the fact that the traceback shows the sequence of function calls that the `pipeline` makes when downloading the model and tokenizer. (Check out [Chapter 2](/course/chapter2) for more details on how the `pipeline` works under the hood.)\n\n🚨 See that blue box around “6 frames” in the traceback from Google Colab? 
That’s a special feature of Colab, which compresses the traceback into “frames.” If you can’t seem to find the source of an error, make sure you expand the full traceback by clicking on those two little arrows.\n\nThis means that the last line of the traceback indicates the last error message and gives the name of the exception that was raised. In this case, the exception type is `OSError`, which indicates a system-related error. If we read the accompanying error message, we can see that there seems to be a problem with the model’s _config.json_ file, and we’re given two suggestions to fix it:\n\n```\n\"\"\"\nMake sure that:\n\n- 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is a correct model identifier listed on 'https://huggingface.co/models'\n\n- or 'lewtun/distillbert-base-uncased-finetuned-squad-d5716d28' is the correct path to a directory containing a config.json file\n\"\"\"```\n\n💡 If you encounter an error message that is difficult to understand, just copy and paste the message into the Google or [Stack Overflow](https://stackoverflow.com/) search bar (yes, really!). There’s a good chance that you’re not the first person to encounter the error, and this is a good way to find solutions that others in the community have posted. For example, searching for `OSError: Can't load config for` on Stack Overflow gives several [hits](https://stackoverflow.com/search?q=OSError%3A+Can%27t+load+config+for+) that could be used as a starting point for solving the problem.\n\nThe first suggestion is asking us to check whether the model ID is actually correct, so the first order of business is to copy the identifier and paste it into the Hub’s search bar:\n\n![The wrong model name.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/wrong-model-id.png)\n\nHmm, it indeed looks like our colleague’s model is not on the Hub… aha, but there’s a typo in the name of the model! DistilBERT only has one “l” in its name, so let’s fix that and look for “lewtun/distilbert-base-uncased-finetuned-squad-d5716d28” instead:\n\n![The right model name.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/true-model-id.png)\n\nOkay, this got a hit. Now let’s try to download the model again with the correct model ID:\n\n```\nmodel_checkpoint = get_full_repo_name(\"distilbert-base-uncased-finetuned-squad-d5716d28\")\nreader = pipeline(\"question-answering\", model=model_checkpoint)```\n\n```\n\"\"\"\nOSError: Can't load config for 'lewtun/distilbert-base-uncased-finetuned-squad-d5716d28'. Make sure that:\n\n- 'lewtun/distilbert-base-uncased-finetuned-squad-d5716d28' is a correct model identifier listed on 'https://huggingface.co/models'\n\n- or 'lewtun/distilbert-base-uncased-finetuned-squad-d5716d28' is the correct path to a directory containing a config.json file\n\"\"\"```\n\nArgh, foiled again — welcome to the daily life of a machine learning engineer! Since we’ve fixed the model ID, the problem must lie in the repository itself. A quick way to access the contents of a repository on the 🤗 Hub is via the `list_repo_files()` function of the `huggingface_hub` library:\n\n```\nfrom huggingface_hub import list_repo_files\n\nlist_repo_files(repo_id=model_checkpoint)```\n\n```\n['.gitattributes', 'README.md', 'pytorch_model.bin', 'special_tokens_map.json', 'tokenizer_config.json', 'training_args.bin', 'vocab.txt']```\n\nInteresting — there doesn’t seem to be a _config.json_ file in the repository! 
No wonder our `pipeline` couldn’t load the model; our colleague must have forgotten to push this file to the Hub after they fine-tuned it. In this case, the problem seems pretty straightforward to fix: we could ask them to add the file, or, since we can see from the model ID that the pretrained model used was [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased), we can download the config for this model and push it to our repo to see if that resolves the problem. Let’s try that. Using the techniques we learned in [Chapter 2](/course/chapter2), we can download the model’s configuration with the `AutoConfig` class:\n\n```\nfrom transformers import AutoConfig\n\npretrained_checkpoint = \"distilbert-base-uncased\"\nconfig = AutoConfig.from_pretrained(pretrained_checkpoint)```\n\n🚨 The approach we’re taking here is not foolproof, since our colleague may have tweaked the configuration of `distilbert-base-uncased` before fine-tuning the model. In real life, we’d want to check with them first, but for the purposes of this section we’ll assume they used the default configuration.\n\nWe can then push this to our model repository with the configuration’s `push_to_hub()` function:\n\n```\nconfig.push_to_hub(model_checkpoint, commit_message=\"Add config.json\")```\n\nNow we can test if this worked by loading the model from the latest commit on the `main` branch:\n\n```\nreader = pipeline(\"question-answering\", model=model_checkpoint, revision=\"main\")\n\ncontext = r\"\"\"\nExtractive Question Answering is the task of extracting an answer from a text\ngiven a question. An example of a question answering dataset is the SQuAD\ndataset, which is entirely based on that task. If you would like to fine-tune a\nmodel on a SQuAD task, you may leverage the\nexamples/pytorch/question-answering/run_squad.py script.\n\n🤗 Transformers is interoperable with the PyTorch, TensorFlow, and JAX\nframeworks, so you can use your favourite tools for a wide variety of tasks!\n\"\"\"\n\nquestion = \"What is extractive question answering?\"\nreader(question=question, context=context)```\n\n```\n{'score': 0.38669535517692566,\n 'start': 34,\n 'end': 95,\n 'answer': 'the task of extracting an answer from a text given a question'}```\n\nWoohoo, it worked! Let’s recap what you’ve just learned:\n\n- The error messages in Python are known as _tracebacks_ and are read from bottom to top. The last line of the error message usually contains the information you need to locate the source of the problem.\n- If the last line does not contain sufficient information, work your way up the traceback and see if you can identify where in the source code the error occurs.\n- If none of the error messages can help you debug the problem, try searching online for a solution to a similar issue.\n- The `huggingface_hub` // 🤗 Hub? library provides a suite of tools that you can use to interact with and debug repositories on the Hub.\n\nNow that you know how to debug a pipeline, let’s take a look at a trickier example in the forward pass of the model itself.\n\n## [](#debugging-the-forward-pass-of-your-model)Debugging the forward pass of your model\n\nAlthough the `pipeline` is great for most applications where you need to quickly generate predictions, sometimes you’ll need to access the model’s logits (say, if you have some custom post-processing that you’d like to apply). 
To see what can go wrong in this case, let’s first grab the model and tokenizer from our `pipeline`:\n\n```\ntokenizer = reader.tokenizer\nmodel = reader.model```\n\nNext we need a question, so let’s see if our favorite frameworks are supported:\n\n```\nquestion = \"Which frameworks can I use?\"```\n\nAs we saw in [Chapter 7](/course/chapter7), the usual steps we need to take are tokenizing the inputs, extracting the logits of the start and end tokens, and then decoding the answer span:\n\n```\nimport torch\n\ninputs = tokenizer(question, context, add_special_tokens=True)\ninput_ids = inputs[\"input_ids\"][0]\noutputs = model(**inputs)\nanswer_start_scores = outputs.start_logits\nanswer_end_scores = outputs.end_logits\n\nanswer_start = torch.argmax(answer_start_scores)\n\nanswer_end = torch.argmax(answer_end_scores) + 1\nanswer = tokenizer.convert_tokens_to_string(\n tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])\n)\nprint(f\"Question: {question}\")\nprint(f\"Answer: {answer}\")```\n\n```\n\"\"\"\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n/var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_75743/2725838073.py in \n 1 inputs = tokenizer(question, text, add_special_tokens=True)\n 2 input_ids = inputs[\"input_ids\"]\n----> 3 outputs = model(**inputs)\n 4 answer_start_scores = outputs.start_logits\n 5 answer_end_scores = outputs.end_logits\n\n~/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\n 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\n-> 1051 return forward_call(*input, **kwargs)\n 1052 # Do not call functions when jit is used\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\n\n~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, start_positions, end_positions, output_attentions, output_hidden_states, return_dict)\n 723 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n 724\n--> 725 distilbert_output = self.distilbert(\n 726 input_ids=input_ids,\n 727 attention_mask=attention_mask,\n\n~/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\n 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n 1050 or _global_forward_hooks or _global_forward_pre_hooks):\n-> 1051 return forward_call(*input, **kwargs)\n 1052 # Do not call functions when jit is used\n 1053 full_backward_hooks, non_full_backward_hooks = [], []\n\n~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)\n 471 raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\n 472 elif input_ids is not None:\n--> 473 input_shape = input_ids.size()\n 474 elif inputs_embeds is not None:\n 475 input_shape = inputs_embeds.size()[:-1]\n\nAttributeError: 'list' object has no attribute 'size'\n\"\"\"```\n\nOh dear, it looks like we have a bug in our code! But we’re not afraid of a little debugging. 
Here, reading the error message tells us that `'list' object has no attribute 'size'`, and we can see a `-->` arrow pointing to the line where the problem was raised in `model(**inputs)`. You can debug this interactively using the Python debugger, but for now we’ll simply print out the first few elements of `inputs[\"input_ids\"]` (i.e. `inputs[\"input_ids\"][:5]`) to see what we have:\n\n```\n[101, 2029, 7705, 2015, 2064]```\n\nThis certainly looks like an ordinary Python `list`, but let’s double-check the type:\n\n```\ntype(inputs[\"input_ids\"])```\n\nYep, that’s a Python `list` for sure. So what went wrong? Recall from [Chapter 2](/course/chapter2) that the `AutoModelForXxx` classes in 🤗 Transformers operate on _tensors_ (either in PyTorch or TensorFlow), and a common operation is to extract the dimensions of a tensor using `Tensor.size()` in, say, PyTorch. Let’s take another look at the traceback, to see which line triggered the exception:\n\n```\n~/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)\n 471 raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\n 472 elif input_ids is not None:\n--> 473 input_shape = input_ids.size()\n 474 elif inputs_embeds is not None:\n 475 input_shape = inputs_embeds.size()[:-1]\n\nAttributeError: 'list' object has no attribute 'size'```\n\nIt looks like our code tried to call `input_ids.size()`, but this clearly won’t work for a Python `list`, which is just a container. How can we solve this problem? Searching for the error message on Stack Overflow gives quite a few relevant [hits](https://stackoverflow.com/search?q=AttributeError%3A+%27list%27+object+has+no+attribute+%27size%27&s=c15ec54c-63cb-481d-a749-408920073e8f). Clicking on the first one displays a similar question to ours, with the answer shown in the screenshot below:\n\n![An answer from Stack Overflow.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/stack-overflow.png)\n\nThe answer recommends that we add `return_tensors='pt'` to the tokenizer, so let’s see if that works for us:\n\n```\ninputs = tokenizer(question, context, add_special_tokens=True, return_tensors=\"pt\")\ninput_ids = inputs[\"input_ids\"][0]\noutputs = model(**inputs)\nanswer_start_scores = outputs.start_logits\nanswer_end_scores = outputs.end_logits\n# Get the most likely beginning of answer with the argmax of the score\nanswer_start = torch.argmax(answer_start_scores)\n# Get the most likely end of answer with the argmax of the score\nanswer_end = torch.argmax(answer_end_scores) + 1\nanswer = tokenizer.convert_tokens_to_string(\n    tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])\n)\nprint(f\"Question: {question}\")\nprint(f\"Answer: {answer}\")```\n\n```\n\"\"\"\nQuestion: Which frameworks can I use?\nAnswer: pytorch, tensorflow, and jax\n\"\"\"```\n\nNice, it worked! This is a great example of how useful Stack Overflow can be: by identifying a similar problem, we were able to benefit from the experience of others in the community. However, a search like this won’t always yield a relevant answer, so what can you do in such cases? Fortunately, there is a welcoming community of developers on the [Hugging Face forums](https://discuss.huggingface.co/) that can help you out! 
In the next section, we’ll take a look at how you can craft good forum questions that are likely to get answered.
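Before heading there, here is one small sketch of our own (not part of the course) that ties the Hub-debugging recipe from the recap together: it checks whether `config.json` exists in a repository and, if it is missing, pushes the default `distilbert-base-uncased` configuration, just as we did by hand above:\n\n```\nfrom huggingface_hub import list_repo_files\nfrom transformers import AutoConfig\n\nrepo_id = model_checkpoint  # the repository we have been debugging\n\nif \"config.json\" not in list_repo_files(repo_id=repo_id):\n    # We assume the default config was used -- in real life, check with your colleague first!\n    config = AutoConfig.from_pretrained(\"distilbert-base-uncased\")\n    config.push_to_hub(repo_id, commit_message=\"Add config.json\")```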
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:33.485Z"} {"title":"Asking for help on the forums - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter8/3?fw=pt","markdown":"## [](#asking-for-help-on-the-forums)Asking for help on the forums\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-8-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter8/section3.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter8/section3.ipynb)\n\nThe [Hugging Face forums](https://discuss.huggingface.co/) are a great place to get help from the open source team and wider Hugging Face community. Here’s what the main page looks like on any given day:\n\n![The Hugging Face forums.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forums.png)\n\nOn the lefthand side you can see all the categories that the various topics are grouped into, while the righthand side shows the most recent topics. A topic is a post that contains a title, category, and description; it’s quite similar to the GitHub issues format that we saw when creating our own dataset in [Chapter 5](/course/chapter5). As the name suggests, the [Beginners](https://discuss.huggingface.co/c/beginners/5) category is primarily intended for people just starting out with the Hugging Face libraries and ecosystem. Any question on any of the libraries is welcome there, be it to debug some code or to ask for help about how to do something. 
(That said, if your question concerns one library in particular, you should probably head to the corresponding library category on the forum.)\n\nSimilarly, the [Intermediate](https://discuss.huggingface.co/c/intermediate/6) and [Research](https://discuss.huggingface.co/c/research/7) categories are for more advanced questions, for example about the libraries or some cool new NLP research that you’d like to discuss.\n\nAnd naturally, we should also mention the [Course](https://discuss.huggingface.co/c/course/20) category, where you can ask any questions you have that are related to the Hugging Face course!\n\nOnce you have selected a category, you’ll be ready to write your first topic. You can find some [guidelines](https://discuss.huggingface.co/t/how-to-request-support/3128) in the forum on how to do this, and in this section we’ll take a look at some features that make up a good topic.\n\n## [](#writing-a-good-forum-post)Writing a good forum post\n\nAs a running example, let’s suppose that we’re trying to generate embeddings from Wikipedia articles to create a custom search engine. As usual, we load the tokenizer and model as follows:\n\n```\nfrom transformers import AutoTokenizer, AutoModel\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\nmodel = AutoModel.from_pretrained(model_checkpoint)```\n\nNow suppose we try to embed a whole section of the [Wikipedia article](https://en.wikipedia.org/wiki/Transformers) on Transformers (the franchise, not the library!):\n\n```\ntext = \"\"\"\nGeneration One is a retroactive term for the Transformers characters that\nappeared between 1984 and 1993. The Transformers began with the 1980s Japanese\ntoy lines Micro Change and Diaclone. They presented robots able to transform\ninto everyday vehicles, electronic items or weapons. Hasbro bought the Micro\nChange and Diaclone toys, and partnered with Takara. Marvel Comics was hired by\nHasbro to create the backstory; editor-in-chief Jim Shooter wrote an overall\nstory, and gave the task of creating the characthers to writer Dennis O'Neil.\nUnhappy with O'Neil's work (although O'Neil created the name \"Optimus Prime\"),\nShooter chose Bob Budiansky to create the characters.\n\nThe Transformers mecha were largely designed by Shōji Kawamori, the creator of\nthe Japanese mecha anime franchise Macross (which was adapted into the Robotech\nfranchise in North America). Kawamori came up with the idea of transforming\nmechs while working on the Diaclone and Macross franchises in the early 1980s\n(such as the VF-1 Valkyrie in Macross and Robotech), with his Diaclone mechs\nlater providing the basis for Transformers.\n\nThe primary concept of Generation One is that the heroic Optimus Prime, the\nvillainous Megatron, and their finest soldiers crash land on pre-historic Earth\nin the Ark and the Nemesis before awakening in 1985, Cybertron hurtling through\nthe Neutral zone as an effect of the war. The Marvel comic was originally part\nof the main Marvel Universe, with appearances from Spider-Man and Nick Fury,\nplus some cameos, as well as a visit to the Savage Land.\n\nThe Transformers TV series began around the same time. Produced by Sunbow\nProductions and Marvel Productions, later Hasbro Productions, from the start it\ncontradicted Budiansky's backstories. The TV series shows the Autobots looking\nfor new energy sources, and crash landing as the Decepticons attack. 
Marvel\ninterpreted the Autobots as destroying a rogue asteroid approaching Cybertron.\nShockwave is loyal to Megatron in the TV series, keeping Cybertron in a\nstalemate during his absence, but in the comic book he attempts to take command\nof the Decepticons. The TV series would also differ wildly from the origins\nBudiansky had created for the Dinobots, the Decepticon turned Autobot Jetfire\n(known as Skyfire on TV), the Constructicons (who combine to form\nDevastator),[19][20] and Omega Supreme. The Marvel comic establishes early on\nthat Prime wields the Creation Matrix, which gives life to machines. In the\nsecond season, the two-part episode The Key to Vector Sigma introduced the\nancient Vector Sigma computer, which served the same original purpose as the\nCreation Matrix (giving life to Transformers), and its guardian Alpha Trion.\n\"\"\"\n\ninputs = tokenizer(text, return_tensors=\"pt\")\nlogits = model(**inputs).logits```\n\n```\nIndexError: index out of range in self```\n\nUh-oh, we’ve hit a problem — and the error message is far more cryptic than the ones we saw in [section 2](/course/chapter8/section2)! We can’t make head or tails of the full traceback, so we decide to turn to the Hugging Face forums for help. How might we craft the topic?\n\nTo get started, we need to click the “New Topic” button at the upper-right corner (note that to create a topic, we’ll need to be logged in):\n\n![Creating a new forum topic.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forums-new-topic.png)\n\nThis brings up a writing interface where we can input the title of our topic, select a category, and draft the content:\n\n![The interface for creating a forum topic.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forum-topic01.png)\n\nSince the error seems to be exclusively about 🤗 Transformers, we’ll select this for the category. Our first attempt at explaining the problem might look something like this:\n\n![Drafting the content for a new forum topic.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forum-topic02.png)\n\nAlthough this topic contains the error message we need help with, there are a few problems with the way it is written:\n\n1. The title is not very descriptive, so anyone browsing the forum won’t be able to tell what the topic is about without reading the body as well.\n2. The body doesn’t provide enough information about _where_ the error is coming from and _how_ to reproduce it.\n3. The topic tags a few people directly with a somewhat demanding tone.\n\nTopics like this one are not likely to get a fast answer (if they get one at all), so let’s look at how we can improve it. We’ll start with the first issue of picking a good title.\n\n### [](#choosing-a-descriptive-title)Choosing a descriptive title\n\nIf you’re trying to get help with a bug in your code, a good rule of thumb is to include enough information in the title so that others can quickly determine whether they think they can answer your question or not. In our running example, we know the name of the exception that’s being raised and have some hints that it’s triggered in the forward pass of the model, where we call `model(**inputs)`. 
To communicate this, one possible title could be:\n\n> Source of IndexError in the AutoModel forward pass?\n\nThis title tells the reader _where_ you think the bug is coming from, and if they’ve encountered an `IndexError` before, there’s a good chance they’ll know how to debug it. Of course, the title can be anything you want, and other variations like:\n\n> Why does my model produce an IndexError?\n\ncould also be fine. Now that we’ve got a descriptive title, let’s take a look at improving the body.\n\n### [](#formatting-your-code-snippets)Formatting your code snippets\n\nReading source code is hard enough in an IDE, but it’s even harder when the code is copied and pasted as plain text! Fortunately, the Hugging Face forums support the use of Markdown, so you should always enclose your code blocks with three backticks (\\`\\`\\`) so it’s more easily readable. Let’s do this to prettify the error message — and while we’re at it, let’s make the body a bit more polite than our original version:\n\n![Our revised forum topic, with proper code formatting.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forum-topic03.png)\n\nAs you can see in the screenshot, enclosing the code blocks in backticks converts the raw text into formatted code, complete with color styling! Also note that single backticks can be used to format inline variables, like we’ve done for `distilbert-base-uncased`. This topic is looking much better, and with a bit of luck we might find someone in the community who can guess what the error is about. However, instead of relying on luck, let’s make life easier by including the traceback in its full gory detail!\n\n### [](#including-the-full-traceback)Including the full traceback\n\nSince the last line of the traceback is often enough to debug your own code, it can be tempting to just provide that in your topic to “save space.” Although well intentioned, this actually makes it _harder_ for others to debug the problem since the information that’s higher up in the traceback can be really useful too. So, a good practice is to copy and paste the _whole_ traceback, while making sure that it’s nicely formatted. Since these tracebacks can get rather long, some people prefer to show them after they’ve explained the source code. Let’s do this. Now, our forum topic looks like the following:\n\n![Our example forum topic, with the complete traceback.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forum-topic04.png)\n\nThis is much more informative, and a careful reader might be able to point out that the problem seems to be due to passing a long input because of this line in the traceback:\n\n> Token indices sequence length is longer than the specified maximum sequence length for this model (583 > 512).\n\nHowever, we can make things even easier for them by providing the actual code that triggered the error. Let’s do that now.\n\n### [](#providing-a-reproducible-example)Providing a reproducible example\n\nIf you’ve ever tried to debug someone else’s code, you’ve probably first tried to recreate the problem they’ve reported so you can start working your way through the traceback to pinpoint the error. It’s no different when it comes to getting (or giving) assistance on the forums, so it really helps if you can provide a small example that reproduces the error. Half the time, simply walking through this exercise will help you figure out what’s going wrong. 
In any case, the missing piece of our example is to show the _inputs_ that we provided to the model. Doing that gives us something like the following completed example:\n\n![The final version of our forum topic.](https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter8/forum-topic05.png)\n\nThis topic now contains quite a lot of information, and it’s written in a way that is much more likely to attract the attention of the community and get a helpful answer. With these basic guidelines, you can now create great topics to find the answers to your 🤗 Transformers questions!
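One last aside before we move on: if you wanted to confirm the long-input hypothesis yourself while waiting for an answer, a quick check along these lines would do it (a sketch of our own, reusing the `tokenizer`, `model`, and `text` defined above):\n\n```\nencoded = tokenizer(text, return_tensors=\"pt\")\nprint(encoded[\"input_ids\"].shape[-1], \"tokens vs. a maximum of\", tokenizer.model_max_length)\n\n# One possible workaround: let the tokenizer truncate the text to the model's maximum length\nencoded = tokenizer(text, truncation=True, return_tensors=\"pt\")\noutputs = model(**encoded)```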
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:33.589Z"} {"title":"Debugging the training pipeline - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter8/4?fw=pt","markdown":"[Pytorch](?fw=pt) [TensorFlow](?fw=tf)\n\n## [](#debugging-the-training-pipeline)Debugging the training pipeline\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-8-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter8/section4.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter8/section4.ipynb)\n\nYou’ve written a beautiful script to train or fine-tune a model on a given task, dutifully following the advice from [Chapter 7](/course/chapter7). But when you launch the command `trainer.train()`, something horrible happens: you get an error 😱! Or worse, everything seems to be fine and the training runs without error, but the resulting model is crappy. In this section, we will show you what you can do to debug these kinds of issues.\n\n## [](#debugging-the-training-pipeline)Debugging the training pipeline\n\nThe problem when you encounter an error in `trainer.train()` is that it could come from multiple sources, as the `Trainer` usually puts together lots of things. It converts datasets to dataloaders, so the problem could be something wrong in your dataset, or some issue when trying to batch elements of the datasets together. Then it takes a batch of data and feeds it to the model, so the problem could be in the model code. After that, it computes the gradients and performs the optimization step, so the problem could also be in your optimizer. And even if everything goes well for training, something could still go wrong during the evaluation if there is a problem with your metric.\n\nThe best way to debug an error that arises in `trainer.train()` is to manually go through this whole pipeline to see where things went awry. 
The error is then often very easy to solve.\n\nTo demonstrate this, we will use the following script that (tries to) fine-tune a DistilBERT model on the [MNLI dataset](https://huggingface.co/datasets/glue):\n\n```\nfrom datasets import load_dataset\nimport evaluate\nfrom transformers import (\n AutoTokenizer,\n AutoModelForSequenceClassification,\n TrainingArguments,\n Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)\n\nargs = TrainingArguments(\n f\"distilbert-finetuned-mnli\",\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n num_train_epochs=3,\n weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n return metric.compute(predictions=predictions, references=labels)\n\n\ntrainer = Trainer(\n model,\n args,\n train_dataset=raw_datasets[\"train\"],\n eval_dataset=raw_datasets[\"validation_matched\"],\n compute_metrics=compute_metrics,\n)\ntrainer.train()```\n\nIf you try to execute it, you will be met with a rather cryptic error:\n\n```\n'ValueError: You have to specify either input_ids or inputs_embeds'```\n\n### [](#check-your-data)Check your data\n\nThis goes without saying, but if your data is corrupted, the `Trainer` is not going to be able to form batches, let alone train your model. So first things first, you need to have a look at what is inside your training set.\n\nTo avoid countless hours spent trying to fix something that is not the source of the bug, we recommend you use `trainer.train_dataset` for your checks and nothing else. So let’s do that here:\n\n```\n{'hypothesis': 'Product and geography are what make cream skimming work. ',\n 'idx': 0,\n 'label': 1,\n 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography.'}```\n\nDo you notice something wrong? This, in conjunction with the error message about `input_ids` missing, should make you realize those are texts, not numbers the model can make sense of. Here, the original error is very misleading because the `Trainer` automatically removes the columns that don’t match the model signature (that is, the arguments expected by the model). That means here, everything apart from the labels was discarded. There was thus no issue with creating batches and then sending them to the model, which in turn complained it didn’t receive the proper input.\n\nWhy wasn’t the data processed? We did use the `Dataset.map()` method on the datasets to apply the tokenizer on each sample. But if you look closely at the code, you will see that we made a mistake when passing the training and evaluation sets to the `Trainer`. Instead of using `tokenized_datasets` here, we used `raw_datasets` 🤦. 
So let’s fix this!\n\n```\nfrom datasets import load_dataset\nimport evaluate\nfrom transformers import (\n AutoTokenizer,\n AutoModelForSequenceClassification,\n TrainingArguments,\n Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)\n\nargs = TrainingArguments(\n f\"distilbert-finetuned-mnli\",\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n num_train_epochs=3,\n weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n return metric.compute(predictions=predictions, references=labels)\n\n\ntrainer = Trainer(\n model,\n args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"validation_matched\"],\n compute_metrics=compute_metrics,\n)\ntrainer.train()```\n\nThis new code will now give a different error (progress!):\n\n```\n'ValueError: expected sequence of length 43 at dim 1 (got 37)'```\n\nLooking at the traceback, we can see the error happens in the data collation step:\n\n```\n~/git/transformers/src/transformers/data/data_collator.py in torch_default_data_collator(features)\n 105 batch[k] = torch.stack([f[k] for f in features])\n 106 else:\n--> 107 batch[k] = torch.tensor([f[k] for f in features])\n 108 \n 109 return batch```\n\nSo, we should move to that. Before we do, however, let’s finish inspecting our data, just to be 100% sure it’s correct.\n\nOne thing you should always do when debugging a training session is have a look at the decoded inputs of your model. We can’t make sense of the numbers that we feed it directly, so we should look at what those numbers represent. In computer vision, for example, that means looking at the decoded pictures of the pixels you pass, in speech it means listening to the decoded audio samples, and for our NLP example here it means using our tokenizer to decode the inputs:\n\n```\ntokenizer.decode(trainer.train_dataset[0][\"input_ids\"])```\n\n```\n'[CLS] conceptually cream skimming has two basic dimensions - product and geography. [SEP] product and geography are what make cream skimming work. [SEP]'```\n\nSo that seems correct. You should do this for all the keys in the inputs:\n\n```\ntrainer.train_dataset[0].keys()```\n\n```\ndict_keys(['attention_mask', 'hypothesis', 'idx', 'input_ids', 'label', 'premise'])```\n\nNote that the keys that don’t correspond to inputs accepted by the model will be automatically discarded, so here we will only keep `input_ids`, `attention_mask`, and `label` (which will be renamed `labels`). To double-check the model signature, you can print the class of your model, then go check its documentation:\n\n```\ntransformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification```\n\nSo in our case, we can check the parameters accepted on [this page](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertforsequenceclassification). The `Trainer` will also log the columns it’s discarding.\n\nWe have checked that the input IDs are correct by decoding them. 
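Incidentally, if you’d rather list the accepted arguments programmatically instead of looking them up in the documentation, Python’s built-in `inspect` module can do it (a small sketch of our own, not something the `Trainer` requires):\n\n```\nimport inspect\n\n# The forward signature shows exactly which columns the model can consume\nprint(list(inspect.signature(trainer.model.forward).parameters))```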
Next is the `attention_mask`:\n\n```\ntrainer.train_dataset[0][\"attention_mask\"]```\n\n```\n[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]```\n\nSince we didn’t apply padding in our preprocessing, this seems perfectly natural. To be sure there is no issue with that attention mask, let’s check it is the same length as our input IDs:\n\n```\nlen(trainer.train_dataset[0][\"attention_mask\"]) == len(\n    trainer.train_dataset[0][\"input_ids\"]\n)```\n\nThat’s good! Lastly, let’s check our label:\n\n```\ntrainer.train_dataset[0][\"label\"]```\n\nLike the input IDs, this is a number that doesn’t really make sense on its own. As we saw before, the map between integers and label names is stored inside the `names` attribute of the corresponding _feature_ of the dataset:\n\n```\ntrainer.train_dataset.features[\"label\"].names```\n\n```\n['entailment', 'neutral', 'contradiction']```\n\nSo `1` means `neutral`, which means the two sentences we saw above are not in contradiction, and the first one does not imply the second one. That seems correct!\n\nWe don’t have token type IDs here, since DistilBERT does not expect them; if you have some in your model, you should also make sure that they properly match where the first and second sentences are in the input.\n\n✏️ **Your turn!** Check that everything seems correct with the second element of the training dataset.\n\nWe are only doing the check on the training set here, but you should of course double-check the validation and test sets the same way.\n\nNow that we know our datasets look good, it’s time to check the next step of the training pipeline.\n\n### [](#from-datasets-to-dataloaders)From datasets to dataloaders\n\nThe next thing that can go wrong in the training pipeline is when the `Trainer` tries to form batches from the training or validation set. Once you are sure the `Trainer`’s datasets are correct, you can try to manually form a batch by executing the following (replace `train` with `eval` for the validation dataloader):\n\n```\nfor batch in trainer.get_train_dataloader():\n    break```\n\nThis code creates the training dataloader, then iterates through it, stopping at the first iteration. If the code executes without error, you have the first training batch that you can inspect, and if the code errors out, you know for sure the problem is in the dataloader, as is the case here:\n\n```\n~/git/transformers/src/transformers/data/data_collator.py in torch_default_data_collator(features)\n 105 batch[k] = torch.stack([f[k] for f in features])\n 106 else:\n--> 107 batch[k] = torch.tensor([f[k] for f in features])\n 108 \n 109 return batch\n\nValueError: expected sequence of length 45 at dim 1 (got 76)```\n\nInspecting the last frame of the traceback should be enough to give you a clue, but let’s do a bit more digging. Most of the problems during batch creation arise because of the collation of examples into a single batch, so the first thing to check when in doubt is what `collate_fn` your `DataLoader` is using:\n\n```\ndata_collator = trainer.get_train_dataloader().collate_fn\ndata_collator```\n\n```\n<function transformers.data.data_collator.default_data_collator(features: List[InputDataClass], return_tensors='pt') -> Dict[str, Any]>```\n\nSo this is the `default_data_collator`, but that’s not what we want in this case. We want to pad our examples to the longest sentence in the batch, which is done by the `DataCollatorWithPadding` collator. 
And this data collator is supposed to be used by default by the `Trainer`, so why is it not used here?\n\nThe answer is because we did not pass the `tokenizer` to the `Trainer`, so it couldn’t create the `DataCollatorWithPadding` we want. In practice, you should never hesitate to explicitly pass along the data collator you want to use, to make sure you avoid these kinds of errors. Let’s adapt our code to do exactly that:\n\n```\nfrom datasets import load_dataset\nimport evaluate\nfrom transformers import (\n AutoTokenizer,\n AutoModelForSequenceClassification,\n DataCollatorWithPadding,\n TrainingArguments,\n Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)\n\nargs = TrainingArguments(\n f\"distilbert-finetuned-mnli\",\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n num_train_epochs=3,\n weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n return metric.compute(predictions=predictions, references=labels)\n\n\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n\ntrainer = Trainer(\n model,\n args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"validation_matched\"],\n compute_metrics=compute_metrics,\n data_collator=data_collator,\n tokenizer=tokenizer,\n)\ntrainer.train()```\n\nThe good news? We don’t get the same error as before, which is definitely progress. The bad news? We get an infamous CUDA error instead:\n\n```\nRuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)````\n\nThis is bad because CUDA errors are extremely hard to debug in general. We will see in a minute how to solve this, but first let’s finish our analysis of batch creation.\n\nIf you are sure your data collator is the right one, you should try to apply it on a couple of samples of your dataset:\n\n```\ndata_collator = trainer.get_train_dataloader().collate_fn\nbatch = data_collator([trainer.train_dataset[i] for i in range(4)])```\n\nThis code will fail because the `train_dataset` contains string columns, which the `Trainer` usually removes. 
You can remove them manually, or if you want to replicate exactly what the `Trainer` is doing behind the scenes, you can call the private `Trainer._remove_unused_columns()` method that does that:\n\n```\ndata_collator = trainer.get_train_dataloader().collate_fn\nactual_train_set = trainer._remove_unused_columns(trainer.train_dataset)\nbatch = data_collator([actual_train_set[i] for i in range(4)])```\n\nYou should then be able to manually debug what happens inside the data collator if the error persists.\n\nNow that we’ve debugged the batch creation process, it’s time to pass one through the model!\n\n### [](#going-through-the-model)Going through the model\n\nYou should be able to get a batch by executing the following command:\n\n```\nfor batch in trainer.get_train_dataloader():\n break```\n\nIf you’re running this code in a notebook, you may get a CUDA error that’s similar to the one we saw earlier, in which case you need to restart your notebook and reexecute the last snippet without the `trainer.train()` line. That’s the second most annoying thing about CUDA errors: they irremediably break your kernel. The most annoying thing about them is the fact that they are hard to debug.\n\nWhy is that? It has to do with the way GPUs work. They are extremely efficient at executing a lot of operations in parallel, but the drawback is that when one of those instructions results in an error, you don’t know it instantly. It’s only when the program calls a synchronization of the multiple processes on the GPU that it will realize something went wrong, so the error is actually raised at a place that has nothing to do with what created it. For instance, if we look at our previous traceback, the error was raised during the backward pass, but we will see in a minute that it actually stems from something in the forward pass.\n\nSo how do we debug those errors? The answer is easy: we don’t. Unless your CUDA error is an out-of-memory error (which means there is not enough memory in your GPU), you should always go back to the CPU to debug it.\n\nTo do this in our case, we just have to put the model back on the CPU and call it on our batch — the batch returned by the `DataLoader` has not been moved to the GPU yet:\n\n```\noutputs = trainer.model.cpu()(**batch)```\n\n```\n~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)\n 2386 )\n 2387 if dim == 2:\n-> 2388 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\n 2389 elif dim == 4:\n 2390 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\n\nIndexError: Target 2 is out of bounds.```\n\nSo, the picture is getting clearer. Instead of having a CUDA error, we now have an `IndexError` in the loss computation (so nothing to do with the backward pass, as we said earlier). More precisely, we can see that it’s target 2 that creates the error, so this is a very good moment to check the number of labels of our model:\n\n```\ntrainer.model.config.num_labels```\n\nWith two labels, only 0s and 1s are allowed as targets, but according to the error message we got a 2. Getting a 2 is actually normal: if we remember the label names we extracted earlier, there were three, so we have indices 0, 1, and 2 in our dataset. The problem is that we didn’t tell that to our model, which should have been created with three labels. 
So let’s fix that!\n\n```\nfrom datasets import load_dataset\nimport evaluate\nfrom transformers import (\n AutoTokenizer,\n AutoModelForSequenceClassification,\n DataCollatorWithPadding,\n TrainingArguments,\n Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=3)\n\nargs = TrainingArguments(\n f\"distilbert-finetuned-mnli\",\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n num_train_epochs=3,\n weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n return metric.compute(predictions=predictions, references=labels)\n\n\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n\ntrainer = Trainer(\n model,\n args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"validation_matched\"],\n compute_metrics=compute_metrics,\n data_collator=data_collator,\n tokenizer=tokenizer,\n)```\n\nWe aren’t including the `trainer.train()` line yet, to take the time to check that everything looks good. If we request a batch and pass it to our model, it now works without error!\n\n```\nfor batch in trainer.get_train_dataloader():\n break\n\noutputs = trainer.model.cpu()(**batch)```\n\nThe next step is then to move back to the GPU and check that everything still works:\n\n```\nimport torch\n\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\nbatch = {k: v.to(device) for k, v in batch.items()}\n\noutputs = trainer.model.to(device)(**batch)```\n\nIf you still get an error, make sure you restart your notebook and only execute the last version of the script.\n\n### [](#performing-one-optimization-step)Performing one optimization step\n\nNow that we know that we can build batches that actually go through the model, we are ready for the next step of the training pipeline: computing the gradients and performing an optimization step.\n\nThe first part is just a matter of calling the `backward()` method on the loss:\n\n```\nloss = outputs.loss\nloss.backward()```\n\nIt’s pretty rare to get an error at this stage, but if you do get one, make sure to go back to the CPU to get a helpful error message.\n\nTo perform the optimization step, we just need to create the `optimizer` and call its `step()` method:\n\n```\ntrainer.create_optimizer()\ntrainer.optimizer.step()```\n\nAgain, if you’re using the default optimizer in the `Trainer`, you shouldn’t get an error at this stage, but if you have a custom optimizer, there might be some problems to debug here. Don’t forget to go back to the CPU if you get a weird CUDA error at this stage. Speaking of CUDA errors, earlier we mentioned a special case. Let’s have a look at that now.\n\n### [](#dealing-with-cuda-out-of-memory-errors)Dealing with CUDA out-of-memory errors\n\nWhenever you get an error message that starts with `RuntimeError: CUDA out of memory`, this indicates that you are out of GPU memory. This is not directly linked to your code, and it can happen with a script that runs perfectly fine. 
This error means that you tried to put too many things in the internal memory of your GPU, and that resulted in an error. Like with other CUDA errors, you will need to restart your kernel to be in a spot where you can run your training again.\n\nTo solve this issue, you just need to use less GPU space — something that is often easier said than done. First, make sure you don’t have two models on the GPU at the same time (unless that’s required for your problem, of course). Then, you should probably reduce your batch size, as it directly affects the sizes of all the intermediate outputs of the model and their gradients. If the problem persists, consider using a smaller version of your model.\n\nIn the next part of the course, we’ll look at more advanced techniques that can help you reduce your memory footprint and let you fine-tune the biggest models.\n\n### [](#evaluating-the-model)Evaluating the model\n\nNow that we’ve solved all the issues with our code, everything is perfect and the training should run smoothly, right? Not so fast! If you run the `trainer.train()` command, everything will look good at first, but after a while you will get the following:\n\n```\nTypeError: only size-1 arrays can be converted to Python scalars```\n\nYou will realize this error appears during the evaluation phase, so this is the last thing we will need to debug.\n\nYou can run the evaluation loop of the `Trainer` independently form the training like this:\n\n```\nTypeError: only size-1 arrays can be converted to Python scalars```\n\n💡 You should always make sure you can run `trainer.evaluate()` before launching `trainer.train()`, to avoid wasting lots of compute resources before hitting an error.\n\nBefore attempting to debug a problem in the evaluation loop, you should first make sure that you’ve had a look at the data, are able to form a batch properly, and can run your model on it. We’ve completed all of those steps, so the following code can be executed without error:\n\n```\nfor batch in trainer.get_eval_dataloader():\n break\n\nbatch = {k: v.to(device) for k, v in batch.items()}\n\nwith torch.no_grad():\n outputs = trainer.model(**batch)```\n\nThe error comes later, at the end of the evaluation phase, and if we look at the traceback we see this:\n\n```\n~/git/datasets/src/datasets/metric.py in add_batch(self, predictions, references)\n 431 \"\"\"\n 432 batch = {\"predictions\": predictions, \"references\": references}\n--> 433 batch = self.info.features.encode_batch(batch)\n 434 if self.writer is None:\n 435 self._init_writer()```\n\nThis tells us that the error originates in the `datasets/metric.py` module — so this is a problem with our `compute_metrics()` function. It takes a tuple with the logits and the labels as NumPy arrays, so let’s try to feed it that:\n\n```\npredictions = outputs.logits.cpu().numpy()\nlabels = batch[\"labels\"].cpu().numpy()\n\ncompute_metrics((predictions, labels))```\n\n```\nTypeError: only size-1 arrays can be converted to Python scalars```\n\nWe get the same error, so the problem definitely lies with that function. If we look back at its code, we see it’s just forwarding the `predictions` and the `labels` to `metric.compute()`. So is there a problem with that method? Not really. Let’s have a quick look at the shapes:\n\n```\npredictions.shape, labels.shape```\n\nOur predictions are still logits, not the actual predictions, which is why the metric is returning this (somewhat obscure) error. 
The fix is pretty easy; we just have to add an argmax in the `compute_metrics()` function:\n\n```\nimport numpy as np\n\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n predictions = np.argmax(predictions, axis=1)\n return metric.compute(predictions=predictions, references=labels)\n\n\ncompute_metrics((predictions, labels))```\n\nNow our error is fixed! This was the last one, so our script will now train a model properly.\n\nFor reference, here is the completely fixed script:\n\n```\nimport numpy as np\nfrom datasets import load_dataset\nimport evaluate\nfrom transformers import (\n AutoTokenizer,\n AutoModelForSequenceClassification,\n DataCollatorWithPadding,\n TrainingArguments,\n Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=3)\n\nargs = TrainingArguments(\n f\"distilbert-finetuned-mnli\",\n evaluation_strategy=\"epoch\",\n save_strategy=\"epoch\",\n learning_rate=2e-5,\n num_train_epochs=3,\n weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n predictions = np.argmax(predictions, axis=1)\n return metric.compute(predictions=predictions, references=labels)\n\n\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n\ntrainer = Trainer(\n model,\n args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"validation_matched\"],\n compute_metrics=compute_metrics,\n data_collator=data_collator,\n tokenizer=tokenizer,\n)\ntrainer.train()```\n\nIn this instance, there are no more problems, and our script will fine-tune a model that should give reasonable results. But what can we do when the training proceeds without any error, and the model trained does not perform well at all? That’s the hardest part of machine learning, and we’ll show you a few techniques that can help.\n\n💡 If you’re using a manual training loop, the same steps apply to debug your training pipeline, but it’s easier to separate them. Make sure you have not forgotten the `model.eval()` or `model.train()` at the right places, or the `zero_grad()` at each step, however!\n\n## [](#debugging-silent-errors-during-training)Debugging silent errors during training\n\nWhat can we do to debug a training that completes without error but doesn’t get good results? We’ll give you some pointers here, but be aware that this kind of debugging is the hardest part of machine learning, and there is no magical answer.\n\n### [](#check-your-data-again)Check your data (again!)\n\nYour model will only learn something if it’s actually possible to learn anything from your data. If there is a bug that corrupts the data or the labels are attributed randomly, it’s very likely you won’t get any model training on your dataset. 
So always start by double-checking your decoded inputs and labels, and ask yourself the following questions:\n\n- Is the decoded data understandable?\n- Do you agree with the labels?\n- Is there one label that’s more common than the others?\n- What should the loss/metric be if the model predicted a random answer/always the same answer?\n\n⚠️ If you are doing distributed training, print samples of your dataset in each process and triple-check that you get the same thing. One common bug is to have some source of randomness in the data creation that makes each process have a different version of the dataset.\n\nAfter looking at your data, go through a few of the model’s predictions and decode them too. If the model is always predicting the same thing, it might be because your dataset is biased toward one category (for classification problems); techniques like oversampling rare classes might help.\n\nIf the loss/metric you get on your initial model is very different from the loss/metric you would expect for random predictions, double-check the way your loss or metric is computed, as there is probably a bug there. If you are using several losses that you add at the end, make sure they are of the same scale.\n\nWhen you are sure your data is perfect, you can see if the model is capable of training on it with one simple test.\n\n### [](#overfit-your-model-on-one-batch)Overfit your model on one batch\n\nOverfitting is usually something we try to avoid when training, as it means the model is not learning to recognize the general features we want it to but is instead just memorizing the training samples. However, trying to train your model on one batch over and over again is a good test to check if the problem as you framed it can be solved by the model you are attempting to train. It will also help you see if your initial learning rate is too high.\n\nDoing this once you have defined your `Trainer` is really easy; just grab a batch of training data, then run a small manual training loop only using that batch for something like 20 steps:\n\n```\nfor batch in trainer.get_train_dataloader():\n break\n\nbatch = {k: v.to(device) for k, v in batch.items()}\ntrainer.create_optimizer()\n\nfor _ in range(20):\n outputs = trainer.model(**batch)\n loss = outputs.loss\n loss.backward()\n trainer.optimizer.step()\n trainer.optimizer.zero_grad()```\n\n💡 If your training data is unbalanced, make sure to build a batch of training data containing all the labels.\n\nThe resulting model should have close-to-perfect results on the same `batch`. Let’s compute the metric on the resulting predictions:\n\n```\nwith torch.no_grad():\n outputs = trainer.model(**batch)\npreds = outputs.logits\nlabels = batch[\"labels\"]\n\ncompute_metrics((preds.cpu().numpy(), labels.cpu().numpy()))```\n\n100% accuracy, now this is a nice example of overfitting (meaning that if you try your model on any other sentence, it will very likely give you a wrong answer)!\n\nIf you don’t manage to have your model obtain perfect results like this, it means there is something wrong with the way you framed the problem or your data, so you should fix that. 
Only when you manage to pass the overfitting test can you be sure that your model can actually learn something.\n\n⚠️ You will have to recreate your model and your `Trainer` after this test, as the model obtained probably won’t be able to recover and learn something useful on your full dataset.\n\n### [](#dont-tune-anything-until-you-have-a-first-baseline)Don't tune anything until you have a first baseline\n\nHyperparameter tuning is always emphasized as being the hardest part of machine learning, but it’s just the last step to help you gain a little bit on the metric. Most of the time, the default hyperparameters of the `Trainer` will work just fine to give you good results, so don’t launch into a time-consuming and costly hyperparameter search until you have something that beats the baseline you have on your dataset.\n\nOnce you have a good enough model, you can start tweaking a bit. Don’t try launching a thousand runs with different hyperparameters, but compare a couple of runs with different values for one hyperparameter to get an idea of which has the greatest impact.\n\nIf you are tweaking the model itself, keep it simple and don’t try anything you can’t reasonably justify. Always make sure you go back to the overfitting test to verify that your change hasn’t had any unintended consequences.\n\n### [](#ask-for-help)Ask for help\n\nHopefully you will have found some advice in this section that helped you solve your issue, but if that’s not the case, remember you can always ask the community on the [forums](https://discuss.huggingface.co/).\n\nHere are some additional resources that may prove helpful:\n\n- [“Reproducibility as a vehicle for engineering best practices”](https://docs.google.com/presentation/d/1yHLPvPhUs2KGI5ZWo0sU-PKU3GimAk3iTsI38Z-B5Gw/edit#slide=id.p) by Joel Grus\n- [“Checklist for debugging neural networks”](https://towardsdatascience.com/checklist-for-debugging-neural-networks-d8b2a9434f21) by Cecelia Shao\n- [“How to unit test machine learning code”](https://medium.com/@keeper6928/how-to-unit-test-machine-learning-code-57cf6fd81765) by Chase Roberts\n- [“A Recipe for Training Neural Networks”](http://karpathy.github.io/2019/04/25/recipe/) by Andrej Karpathy\n\nOf course, not every problem you encounter when training neural nets is your own fault! If you encounter something in the 🤗 Transformers or 🤗 Datasets library that does not seem right, you may have encountered a bug. You should definitely tell us all about it, and in the next section we’ll explain exactly how to do that.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tDebugging the training pipeline - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
You’ve written a beautiful script to train or fine-tune a model on a given task, dutifully following the advice from Chapter 7. But when you launch the command trainer.train(), something horrible happens: you get an error 😱! Or worse, everything seems to be fine and the training runs without error, but the resulting model is crappy. In this section, we will show you what you can do to debug these kinds of issues.

Debugging the training pipeline

The problem when you encounter an error in trainer.train() is that it could come from multiple sources, as the Trainer usually puts together lots of things. It converts datasets to dataloaders, so the problem could be something wrong in your dataset, or some issue when trying to batch elements of the datasets together. Then it takes a batch of data and feeds it to the model, so the problem could be in the model code. After that, it computes the gradients and performs the optimization step, so the problem could also be in your optimizer. And even if everything goes well for training, something could still go wrong during the evaluation if there is a problem with your metric.

The best way to debug an error that arises in trainer.train() is to manually go through this whole pipeline to see where things went awry. The error is then often very easy to solve.

To demonstrate this, we will use the following script that (tries to) fine-tune a DistilBERT model on the MNLI dataset:

from datasets import load_dataset\nimport evaluate\nfrom transformers import (\n    AutoTokenizer,\n    AutoModelForSequenceClassification,\n    TrainingArguments,\n    Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n    return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)\n\nargs = TrainingArguments(\n    f\"distilbert-finetuned-mnli\",\n    evaluation_strategy=\"epoch\",\n    save_strategy=\"epoch\",\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    return metric.compute(predictions=predictions, references=labels)\n\n\ntrainer = Trainer(\n    model,\n    args,\n    train_dataset=raw_datasets[\"train\"],\n    eval_dataset=raw_datasets[\"validation_matched\"],\n    compute_metrics=compute_metrics,\n)\ntrainer.train()

If you try to execute it, you will be met with a rather cryptic error:

'ValueError: You have to specify either input_ids or inputs_embeds'

Check your data

This goes without saying, but if your data is corrupted, the Trainer is not going to be able to form batches, let alone train your model. So first things first, you need to have a look at what is inside your training set.

To avoid countless hours spent trying to fix something that is not the source of the bug, we recommend you use trainer.train_dataset for your checks and nothing else. So let’s do that here:

trainer.train_dataset[0]
{'hypothesis': 'Product and geography are what make cream skimming work. ',\n 'idx': 0,\n 'label': 1,\n 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography.'}

Do you notice something wrong? This, in conjunction with the error message about input_ids missing, should make you realize those are texts, not numbers the model can make sense of. Here, the original error is very misleading because the Trainer automatically removes the columns that don’t match the model signature (that is, the arguments expected by the model). That means here, everything apart from the labels was discarded. There was thus no issue with creating batches and then sending them to the model, which in turn complained it didn’t receive the proper input.
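If you want to see that signature for yourself, here is a small sketch (using Python’s standard inspect module, not anything this chapter depends on) that lists the parameter names of the model’s forward() method, which is what the Trainer matches the dataset columns against:

```
import inspect

# Not part of the chapter's script: list the argument names accepted by the model's
# forward() method, which determines which dataset columns the Trainer will keep
print(list(inspect.signature(trainer.model.forward).parameters))
```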

Why wasn’t the data processed? We did use the Dataset.map() method on the datasets to apply the tokenizer on each sample. But if you look closely at the code, you will see that we made a mistake when passing the training and evaluation sets to the Trainer. Instead of using tokenized_datasets here, we used raw_datasets 🤦. So let’s fix this!

from datasets import load_dataset\nimport evaluate\nfrom transformers import (\n    AutoTokenizer,\n    AutoModelForSequenceClassification,\n    TrainingArguments,\n    Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n    return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)\n\nargs = TrainingArguments(\n    f\"distilbert-finetuned-mnli\",\n    evaluation_strategy=\"epoch\",\n    save_strategy=\"epoch\",\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    return metric.compute(predictions=predictions, references=labels)\n\n\ntrainer = Trainer(\n    model,\n    args,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"validation_matched\"],\n    compute_metrics=compute_metrics,\n)\ntrainer.train()

This new code will now give a different error (progress!):

'ValueError: expected sequence of length 43 at dim 1 (got 37)'

Looking at the traceback, we can see the error happens in the data collation step:

~/git/transformers/src/transformers/data/data_collator.py in torch_default_data_collator(features)\n    105                 batch[k] = torch.stack([f[k] for f in features])\n    106             else:\n--> 107                 batch[k] = torch.tensor([f[k] for f in features])\n    108 \n    109     return batch

So that’s the step we will need to dig into. Before we do, however, let’s finish inspecting our data, just to be 100% sure it’s correct.

One thing you should always do when debugging a training session is have a look at the decoded inputs of your model. We can’t make sense of the numbers that we feed it directly, so we should look at what those numbers represent. In computer vision, for example, that means looking at the decoded pictures of the pixels you pass, in speech it means listening to the decoded audio samples, and for our NLP example here it means using our tokenizer to decode the inputs:

tokenizer.decode(trainer.train_dataset[0][\"input_ids\"])
'[CLS] conceptually cream skimming has two basic dimensions - product and geography. [SEP] product and geography are what make cream skimming work. [SEP]'

So that seems correct. You should do this for all the keys in the inputs:

trainer.train_dataset[0].keys()
dict_keys(['attention_mask', 'hypothesis', 'idx', 'input_ids', 'label', 'premise'])

Note that the keys that don’t correspond to inputs accepted by the model will be automatically discarded, so here we will only keep input_ids, attention_mask, and label (which will be renamed labels). To double-check the model signature, you can print the class of your model, then go check its documentation:

type(trainer.model)
transformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification

So in our case, we can check the parameters accepted on this page. The Trainer will also log the columns it’s discarding.

We have checked that the input IDs are correct by decoding them. Next is the attention_mask:

trainer.train_dataset[0][\"attention_mask\"]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

Since we didn’t apply padding in our preprocessing, this seems perfectly natural. To be sure there is no issue with that attention mask, let’s check it is the same length as our input IDs:

len(trainer.train_dataset[0][\"attention_mask\"]) == len(\n    trainer.train_dataset[0][\"input_ids\"]\n)
True

That’s good! Lastly, let’s check our label:

trainer.train_dataset[0][\"label\"]
1

Like the input IDs, this is a number that doesn’t really make sense on its own. As we saw before, the map between integers and label names is stored inside the names attribute of the corresponding feature of the dataset:

trainer.train_dataset.features[\"label\"].names
['entailment', 'neutral', 'contradiction']

So 1 means neutral, which means the two sentences we saw above are not in contradiction, and the first one does not imply the second one. That seems correct!

We don’t have token type IDs here, since DistilBERT does not expect them; if you have some in your model, you should also make sure that they properly match where the first and second sentences are in the input.
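If your model does use token type IDs, here is a rough way you could eyeball that alignment. Since DistilBERT does not return them, this sketch assumes a bert-base-uncased tokenizer purely for illustration:

```
from transformers import AutoTokenizer

# DistilBERT does not produce token type IDs, so this sketch uses a BERT checkpoint
# just for the sake of the example
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoding = bert_tokenizer("First sentence.", "And a second one.")

# The type IDs should be 0 up to and including the first [SEP], then 1 afterwards
for token, type_id in zip(
    bert_tokenizer.convert_ids_to_tokens(encoding["input_ids"]), encoding["token_type_ids"]
):
    print(token, type_id)
```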

✏️ Your turn! Check that everything seems correct with the second element of the training dataset.

We are only doing the check on the training set here, but you should of course double-check the validation and test sets the same way.

Now that we know our datasets look good, it’s time to check the next step of the training pipeline.

From datasets to dataloaders

The next thing that can go wrong in the training pipeline is when the Trainer tries to form batches from the training or validation set. Once you are sure the Trainer’s datasets are correct, you can try to manually form a batch by executing the following (replace train with eval for the validation dataloader):

for batch in trainer.get_train_dataloader():\n    break

This code creates the training dataloader, then iterates through it, stopping at the first iteration. If the code executes without error, you have the first training batch that you can inspect, and if the code errors out, you know for sure the problem is in the dataloader, as is the case here:

~/git/transformers/src/transformers/data/data_collator.py in torch_default_data_collator(features)\n    105                 batch[k] = torch.stack([f[k] for f in features])\n    106             else:\n--> 107                 batch[k] = torch.tensor([f[k] for f in features])\n    108 \n    109     return batch\n\nValueError: expected sequence of length 45 at dim 1 (got 76)

Inspecting the last frame of the traceback should be enough to give you a clue, but let’s do a bit more digging. Most of the problems during batch creation arise because of the collation of examples into a single batch, so the first thing to check when in doubt is what collate_fn your DataLoader is using:

data_collator = trainer.get_train_dataloader().collate_fn\ndata_collator
<function transformers.data.data_collator.default_data_collator(features: List[InputDataClass], return_tensors='pt') -> Dict[str, Any]>

So this is the default_data_collator, but that’s not what we want in this case. We want to pad our examples to the longest sentence in the batch, which is done by the DataCollatorWithPadding collator. And this data collator is supposed to be used by default by the Trainer, so why is it not used here?

The answer is because we did not pass the tokenizer to the Trainer, so it couldn’t create the DataCollatorWithPadding we want. In practice, you should never hesitate to explicitly pass along the data collator you want to use, to make sure you avoid these kinds of errors. Let’s adapt our code to do exactly that:

from datasets import load_dataset\nimport evaluate\nfrom transformers import (\n    AutoTokenizer,\n    AutoModelForSequenceClassification,\n    DataCollatorWithPadding,\n    TrainingArguments,\n    Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n    return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)\n\nargs = TrainingArguments(\n    f\"distilbert-finetuned-mnli\",\n    evaluation_strategy=\"epoch\",\n    save_strategy=\"epoch\",\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    return metric.compute(predictions=predictions, references=labels)\n\n\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n\ntrainer = Trainer(\n    model,\n    args,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"validation_matched\"],\n    compute_metrics=compute_metrics,\n    data_collator=data_collator,\n    tokenizer=tokenizer,\n)\ntrainer.train()

The good news? We don’t get the same error as before, which is definitely progress. The bad news? We get an infamous CUDA error instead:

RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

This is bad because CUDA errors are extremely hard to debug in general. We will see in a minute how to solve this, but first let’s finish our analysis of batch creation.

If you are sure your data collator is the right one, you should try to apply it on a couple of samples of your dataset:

data_collator = trainer.get_train_dataloader().collate_fn\nbatch = data_collator([trainer.train_dataset[i] for i in range(4)])

This code will fail because the train_dataset contains string columns, which the Trainer usually removes. You can remove them manually, or if you want to replicate exactly what the Trainer is doing behind the scenes, you can call the private Trainer._remove_unused_columns() method that does that:

data_collator = trainer.get_train_dataloader().collate_fn\nactual_train_set = trainer._remove_unused_columns(trainer.train_dataset)\nbatch = data_collator([actual_train_set[i] for i in range(4)])

You should then be able to manually debug what happens inside the data collator if the error persists.
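If you would rather not call a private method, a rough equivalent (assuming the MNLI column names used in this chapter) is to drop the columns the model does not accept yourself:

```
# Hedged alternative to the private method: remove the raw text columns (plus idx),
# which is what the Trainer drops behind the scenes for this model.
# With the default collator this reproduces the same length error as above; once a
# padding collator is used instead, it returns a properly padded batch.
manual_train_set = trainer.train_dataset.remove_columns(["premise", "hypothesis", "idx"])
batch = data_collator([manual_train_set[i] for i in range(4)])
```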

Now that we’ve debugged the batch creation process, it’s time to pass one through the model!

Going through the model

You should be able to get a batch by executing the following command:

for batch in trainer.get_train_dataloader():\n    break

If you’re running this code in a notebook, you may get a CUDA error that’s similar to the one we saw earlier, in which case you need to restart your notebook and reexecute the last snippet without the trainer.train() line. That’s the second most annoying thing about CUDA errors: they irremediably break your kernel. The most annoying thing about them is the fact that they are hard to debug.

Why is that? It has to do with the way GPUs work. They are extremely efficient at executing a lot of operations in parallel, but the drawback is that when one of those instructions results in an error, you don’t know it instantly. It’s only when the program calls a synchronization of the multiple processes on the GPU that it will realize something went wrong, so the error is actually raised at a place that has nothing to do with what created it. For instance, if we look at our previous traceback, the error was raised during the backward pass, but we will see in a minute that it actually stems from something in the forward pass.
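As an aside that this chapter does not rely on, PyTorch can also be told to launch CUDA kernels synchronously, which makes the traceback point at the operation that actually failed. It slows everything down, so it is only worth enabling while debugging:

```
import os

# Must be set before any GPU work happens (ideally at the very top of the script);
# kernels then run synchronously, so the error is raised where it really occurs
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```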

So how do we debug those errors? The answer is easy: we don’t. Unless your CUDA error is an out-of-memory error (which means there is not enough memory in your GPU), you should always go back to the CPU to debug it.

To do this in our case, we just have to put the model back on the CPU and call it on our batch — the batch returned by the DataLoader has not been moved to the GPU yet:

outputs = trainer.model.cpu()(**batch)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)\n   2386         )\n   2387     if dim == 2:\n-> 2388         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\n   2389     elif dim == 4:\n   2390         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\n\nIndexError: Target 2 is out of bounds.

So, the picture is getting clearer. Instead of having a CUDA error, we now have an IndexError in the loss computation (so nothing to do with the backward pass, as we said earlier). More precisely, we can see that it’s target 2 that creates the error, so this is a very good moment to check the number of labels of our model:

trainer.model.config.num_labels
2

With two labels, only 0s and 1s are allowed as targets, but according to the error message we got a 2. Getting a 2 is actually normal: if we remember the label names we extracted earlier, there were three, so we have indices 0, 1, and 2 in our dataset. The problem is that we didn’t tell that to our model, which should have been created with three labels. So let’s fix that!

from datasets import load_dataset\nimport evaluate\nfrom transformers import (\n    AutoTokenizer,\n    AutoModelForSequenceClassification,\n    DataCollatorWithPadding,\n    TrainingArguments,\n    Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n    return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=3)\n\nargs = TrainingArguments(\n    f\"distilbert-finetuned-mnli\",\n    evaluation_strategy=\"epoch\",\n    save_strategy=\"epoch\",\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    return metric.compute(predictions=predictions, references=labels)\n\n\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n\ntrainer = Trainer(\n    model,\n    args,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"validation_matched\"],\n    compute_metrics=compute_metrics,\n    data_collator=data_collator,\n    tokenizer=tokenizer,\n)

We aren’t including the trainer.train() line yet, to take the time to check that everything looks good. If we request a batch and pass it to our model, it now works without error!

for batch in trainer.get_train_dataloader():\n    break\n\noutputs = trainer.model.cpu()(**batch)

The next step is then to move back to the GPU and check that everything still works:

import torch\n\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\nbatch = {k: v.to(device) for k, v in batch.items()}\n\noutputs = trainer.model.to(device)(**batch)

If you still get an error, make sure you restart your notebook and only execute the last version of the script.
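As a small optional refinement of the fix above (not something the chapter requires), you can derive the number of labels and human-readable label names from the dataset itself instead of hardcoding 3; the variable names reuse the ones from the script above:

```
# Derive the label mapping from the dataset so the model config carries readable names
label_names = raw_datasets["train"].features["label"].names
id2label = {i: name for i, name in enumerate(label_names)}
label2id = {name: i for i, name in enumerate(label_names)}

model = AutoModelForSequenceClassification.from_pretrained(
    model_checkpoint,
    num_labels=len(label_names),
    id2label=id2label,
    label2id=label2id,
)
```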

Performing one optimization step

Now that we know that we can build batches that actually go through the model, we are ready for the next step of the training pipeline: computing the gradients and performing an optimization step.

The first part is just a matter of calling the backward() method on the loss:

loss = outputs.loss\nloss.backward()

It’s pretty rare to get an error at this stage, but if you do get one, make sure to go back to the CPU to get a helpful error message.
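If you do suspect something went wrong in the backward pass, one quick optional check (a sketch, not taken from this chapter) is to scan the gradients for anything missing or non-finite:

```
import torch

# Optional sanity check after loss.backward(): parameters with missing or non-finite
# gradients usually point at a wiring or numerical problem
for name, param in trainer.model.named_parameters():
    if param.requires_grad and (param.grad is None or not torch.isfinite(param.grad).all()):
        print("Suspicious gradient for", name)
```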

To perform the optimization step, we just need to create the optimizer and call its step() method:

trainer.create_optimizer()\ntrainer.optimizer.step()

Again, if you’re using the default optimizer in the Trainer, you shouldn’t get an error at this stage, but if you have a custom optimizer, there might be some problems to debug here. Don’t forget to go back to the CPU if you get a weird CUDA error at this stage. Speaking of CUDA errors, earlier we mentioned a special case. Let’s have a look at that now.

Dealing with CUDA out-of-memory errors

Whenever you get an error message that starts with RuntimeError: CUDA out of memory, this indicates that you are out of GPU memory. This is not directly linked to your code, and it can happen with a script that runs perfectly fine. This error means that you tried to put too many things in the internal memory of your GPU, and that resulted in an error. Like with other CUDA errors, you will need to restart your kernel to be in a spot where you can run your training again.

To solve this issue, you just need to use less GPU space — something that is often easier said than done. First, make sure you don’t have two models on the GPU at the same time (unless that’s required for your problem, of course). Then, you should probably reduce your batch size, as it directly affects the sizes of all the intermediate outputs of the model and their gradients. If the problem persists, consider using a smaller version of your model.
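With the Trainer used in this chapter, that advice mostly translates into the TrainingArguments. The values below are only an illustration, and gradient accumulation is an extra trick (not mentioned above) to keep the effective batch size constant while lowering the per-device one:

```
# A sketch of shrinking GPU memory usage: smaller per-device batches, optionally
# compensated with gradient accumulation (4 steps of 4 samples ≈ an effective batch of 16)
args = TrainingArguments(
    "distilbert-finetuned-mnli",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
)
```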

In the next part of the course, we’ll look at more advanced techniques that can help you reduce your memory footprint and let you fine-tune the biggest models.

Evaluating the model

Now that we’ve solved all the issues with our code, everything is perfect and the training should run smoothly, right? Not so fast! If you run the trainer.train() command, everything will look good at first, but after a while you will get the following:

# This will take a long time and error out, so you shouldn't run this cell\ntrainer.train()
TypeError: only size-1 arrays can be converted to Python scalars

You will realize this error appears during the evaluation phase, so this is the last thing we will need to debug.

You can run the evaluation loop of the Trainer independently from the training like this:

trainer.evaluate()
TypeError: only size-1 arrays can be converted to Python scalars

💡 You should always make sure you can run trainer.evaluate() before launching trainer.train(), to avoid wasting lots of compute resources before hitting an error.
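One way to keep that pre-flight check cheap (a small sketch, reusing the tokenized_datasets from the script above; the slice size is arbitrary) is to evaluate on a subset of the validation set first:

```
# Run a fast evaluation pass on 100 examples before committing to a full training run
small_eval_set = tokenized_datasets["validation_matched"].select(range(100))
trainer.evaluate(eval_dataset=small_eval_set)
```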

Before attempting to debug a problem in the evaluation loop, you should first make sure that you’ve had a look at the data, are able to form a batch properly, and can run your model on it. We’ve completed all of those steps, so the following code can be executed without error:

for batch in trainer.get_eval_dataloader():\n    break\n\nbatch = {k: v.to(device) for k, v in batch.items()}\n\nwith torch.no_grad():\n    outputs = trainer.model(**batch)

The error comes later, at the end of the evaluation phase, and if we look at the traceback we see this:

~/git/datasets/src/datasets/metric.py in add_batch(self, predictions, references)\n    431         \"\"\"\n    432         batch = {\"predictions\": predictions, \"references\": references}\n--> 433         batch = self.info.features.encode_batch(batch)\n    434         if self.writer is None:\n    435             self._init_writer()

This tells us that the error originates in the datasets/metric.py module — so this is a problem with our compute_metrics() function. It takes a tuple with the logits and the labels as NumPy arrays, so let’s try to feed it that:

predictions = outputs.logits.cpu().numpy()\nlabels = batch[\"labels\"].cpu().numpy()\n\ncompute_metrics((predictions, labels))
TypeError: only size-1 arrays can be converted to Python scalars

We get the same error, so the problem definitely lies with that function. If we look back at its code, we see it’s just forwarding the predictions and the labels to metric.compute(). So is there a problem with that method? Not really. Let’s have a quick look at the shapes:

predictions.shape, labels.shape
((8, 3), (8,))

Our predictions are still logits, not the actual predictions, which is why the metric is returning this (somewhat obscure) error. The fix is pretty easy; we just have to add an argmax in the compute_metrics() function:

import numpy as np\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    predictions = np.argmax(predictions, axis=1)\n    return metric.compute(predictions=predictions, references=labels)\n\n\ncompute_metrics((predictions, labels))
{'accuracy': 0.625}

Now our error is fixed! This was the last one, so our script will now train a model properly.

For reference, here is the completely fixed script:

import numpy as np\nfrom datasets import load_dataset\nimport evaluate\nfrom transformers import (\n    AutoTokenizer,\n    AutoModelForSequenceClassification,\n    DataCollatorWithPadding,\n    TrainingArguments,\n    Trainer,\n)\n\nraw_datasets = load_dataset(\"glue\", \"mnli\")\n\nmodel_checkpoint = \"distilbert-base-uncased\"\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n\n\ndef preprocess_function(examples):\n    return tokenizer(examples[\"premise\"], examples[\"hypothesis\"], truncation=True)\n\n\ntokenized_datasets = raw_datasets.map(preprocess_function, batched=True)\nmodel = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=3)\n\nargs = TrainingArguments(\n    f\"distilbert-finetuned-mnli\",\n    evaluation_strategy=\"epoch\",\n    save_strategy=\"epoch\",\n    learning_rate=2e-5,\n    num_train_epochs=3,\n    weight_decay=0.01,\n)\n\nmetric = evaluate.load(\"glue\", \"mnli\")\n\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    predictions = np.argmax(predictions, axis=1)\n    return metric.compute(predictions=predictions, references=labels)\n\n\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\n\ntrainer = Trainer(\n    model,\n    args,\n    train_dataset=tokenized_datasets[\"train\"],\n    eval_dataset=tokenized_datasets[\"validation_matched\"],\n    compute_metrics=compute_metrics,\n    data_collator=data_collator,\n    tokenizer=tokenizer,\n)\ntrainer.train()

In this instance, there are no more problems, and our script will fine-tune a model that should give reasonable results. But what can we do when the training proceeds without any error, and the model trained does not perform well at all? That’s the hardest part of machine learning, and we’ll show you a few techniques that can help.

💡 If you’re using a manual training loop, the same steps apply to debug your training pipeline, but it’s easier to separate them. Make sure you have not forgotten the model.eval() or model.train() at the right places, or the zero_grad() at each step, however!

Debugging silent errors during training

What can we do to debug a training that completes without error but doesn’t get good results? We’ll give you some pointers here, but be aware that this kind of debugging is the hardest part of machine learning, and there is no magical answer.

Check your data (again!)

Your model will only learn something if it’s actually possible to learn anything from your data. If there is a bug that corrupts the data or the labels are attributed randomly, it’s very likely you won’t get any model training on your dataset. So always start by double-checking your decoded inputs and labels, and ask yourself the following questions:

  • Is the decoded data understandable?
  • Do you agree with the labels?
  • Is there one label that’s more common than the others?
  • What should the loss/metric be if the model predicted a random answer/always the same answer? (See the short worked example right after this list.)
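For the last question, here is a quick back-of-the-envelope example for this chapter’s setup, where MNLI has three roughly balanced classes; it is an estimate, not an exact bound:

```
import numpy as np

# A model predicting uniformly at random over 3 balanced classes should land around
# these values; if your untrained model is far off, suspect the loss or metric code
num_classes = 3
expected_random_loss = -np.log(1 / num_classes)  # ≈ 1.10 (cross-entropy of a uniform guess)
expected_random_accuracy = 1 / num_classes       # ≈ 0.33

print(expected_random_loss, expected_random_accuracy)
```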

⚠️ If you are doing distributed training, print samples of your dataset in each process and triple-check that you get the same thing. One common bug is to have some source of randomness in the data creation that makes each process have a different version of the dataset.

After looking at your data, go through a few of the model’s predictions and decode them too. If the model is always predicting the same thing, it might be because your dataset is biased toward one category (for classification problems); techniques like oversampling rare classes might help.

If the loss/metric you get on your initial model is very different from the loss/metric you would expect for random predictions, double-check the way your loss or metric is computed, as there is probably a bug there. If you are using several losses that you add at the end, make sure they are of the same scale.

When you are sure your data is perfect, you can see if the model is capable of training on it with one simple test.

Overfit your model on one batch

Overfitting is usually something we try to avoid when training, as it means the model is not learning to recognize the general features we want it to but is instead just memorizing the training samples. However, trying to train your model on one batch over and over again is a good test to check if the problem as you framed it can be solved by the model you are attempting to train. It will also help you see if your initial learning rate is too high.

Doing this once you have defined your Trainer is really easy; just grab a batch of training data, then run a small manual training loop only using that batch for something like 20 steps:

for batch in trainer.get_train_dataloader():\n    break\n\nbatch = {k: v.to(device) for k, v in batch.items()}\ntrainer.create_optimizer()\n\nfor _ in range(20):\n    outputs = trainer.model(**batch)\n    loss = outputs.loss\n    loss.backward()\n    trainer.optimizer.step()\n    trainer.optimizer.zero_grad()

💡 If your training data is unbalanced, make sure to build a batch of training data containing all the labels.
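A quick way to check that (assuming the batch and data collator built in the snippets above) is to look at which labels actually ended up in the batch you grabbed:

```
# The overfitting test is only meaningful if every class appears at least once
print(batch["labels"].unique())
```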

The resulting model should have close-to-perfect results on the same batch. Let’s compute the metric on the resulting predictions:

with torch.no_grad():\n    outputs = trainer.model(**batch)\npreds = outputs.logits\nlabels = batch[\"labels\"]\n\ncompute_metrics((preds.cpu().numpy(), labels.cpu().numpy()))
{'accuracy': 1.0}

100% accuracy, now this is a nice example of overfitting (meaning that if you try your model on any other sentence, it will very likely give you a wrong answer)!

If you don’t manage to have your model obtain perfect results like this, it means there is something wrong with the way you framed the problem or your data, so you should fix that. Only when you manage to pass the overfitting test can you be sure that your model can actually learn something.

⚠️ You will have to recreate your model and your Trainer after this test, as the model obtained probably won’t be able to recover and learn something useful on your full dataset.

Don't tune anything until you have a first baseline

Hyperparameter tuning is always emphasized as being the hardest part of machine learning, but it’s just the last step to help you gain a little bit on the metric. Most of the time, the default hyperparameters of the Trainer will work just fine to give you good results, so don’t launch into a time-consuming and costly hyperparameter search until you have something that beats the baseline you have on your dataset.

Once you have a good enough model, you can start tweaking a bit. Don’t try launching a thousand runs with different hyperparameters, but compare a couple of runs with different values for one hyperparameter to get an idea of which has the greatest impact.
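As a rough sketch of that advice using the fixed script from this chapter (expensive to run, since each iteration trains a full model), you could compare a few learning rates while keeping everything else fixed:

```
# Compare a handful of learning rates, one hyperparameter at a time
for lr in [1e-5, 2e-5, 5e-5]:
    model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=3)
    args = TrainingArguments(
        f"distilbert-finetuned-mnli-lr-{lr}",
        evaluation_strategy="epoch",
        save_strategy="epoch",
        learning_rate=lr,
        num_train_epochs=3,
        weight_decay=0.01,
    )
    trainer = Trainer(
        model,
        args,
        train_dataset=tokenized_datasets["train"],
        eval_dataset=tokenized_datasets["validation_matched"],
        compute_metrics=compute_metrics,
        data_collator=data_collator,
        tokenizer=tokenizer,
    )
    trainer.train()
    print(lr, trainer.evaluate())
```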

If you are tweaking the model itself, keep it simple and don’t try anything you can’t reasonably justify. Always make sure you go back to the overfitting test to verify that your change hasn’t had any unintended consequences.

Ask for help

Hopefully you will have found some advice in this section that helped you solve your issue, but if that’s not the case, remember you can always ask the community on the forums.

Here are some additional resources that may prove helpful:

  • “Reproducibility as a vehicle for engineering best practices” by Joel Grus: https://docs.google.com/presentation/d/1yHLPvPhUs2KGI5ZWo0sU-PKU3GimAk3iTsI38Z-B5Gw/edit#slide=id.p
  • “Checklist for debugging neural networks” by Cecelia Shao: https://towardsdatascience.com/checklist-for-debugging-neural-networks-d8b2a9434f21
  • “How to unit test machine learning code” by Chase Roberts: https://medium.com/@keeper6928/how-to-unit-test-machine-learning-code-57cf6fd81765
  • “A Recipe for Training Neural Networks” by Andrej Karpathy: http://karpathy.github.io/2019/04/25/recipe/

Of course, not every problem you encounter when training neural nets is your own fault! If you encounter something in the 🤗 Transformers or 🤗 Datasets library that does not seem right, you may have encountered a bug. You should definitely tell us all about it, and in the next section we’ll explain exactly how to do that.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:33.870Z"} {"title":"How to write a good issue - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter8/5?fw=pt","markdown":"## [](#how-to-write-a-good-issue)How to write a good issue\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-8-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter8/section5.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter8/section5.ipynb)\n\nWhen you encounter something that doesn’t seem right with one of the Hugging Face libraries, you should definitely let us know so we can fix it (the same goes for any open source library, for that matter). If you are not completely certain whether the bug lies in your own code or one of our libraries, the first place to check is the [forums](https://discuss.huggingface.co/). The community will help you figure this out, and the Hugging Face team also closely watches the discussions there.\n\nWhen you are sure you have a bug in your hand, the first step is to build a minimal reproducible example.\n\n## [](#creating-a-minimal-reproducible-example)Creating a minimal reproducible example\n\nIt’s very important to isolate the piece of code that produces the bug, as no one in the Hugging Face team is a magician (yet), and they can’t fix what they can’t see. A minimal reproducible example should, as the name indicates, be reproducible. This means that it should not rely on any external files or data you may have. 
Try to replace the data you are using with some dummy values that look like your real ones and still produce the same error.\n\n🚨 Many issues in the 🤗 Transformers repository are unsolved because the data used to reproduce them is not accessible.\n\nOnce you have something that is self-contained, you can try to reduce it into even less lines of code, building what we call a _minimal reproducible example_. While this requires a bit more work on your side, you will almost be guaranteed to get help and a fix if you provide a nice, short bug reproducer.\n\nIf you feel comfortable enough, go inspect the source code where your bug happens. You might find a solution to your problem (in which case you can even suggest a pull request to fix it), but more generally, this can help the maintainers better understand the source when they read your report.\n\n## [](#filling-out-the-issue-template)Filling out the issue template\n\nWhen you file your issue, you will notice there is a template to fill out. We will follow the one for [🤗 Transformers issues](https://github.com/huggingface/transformers/issues/new/choose) here, but the same kind of information will be required if you report an issue in another repository. Don’t leave the template blank: taking the time to fill it in will maximize your chances of getting an answer and solving your problem.\n\nIn general, when filing an issue, always stay courteous. This is an open source project, so you are using free software, and no one has any obligation to help you. You may include what you feel is justified criticism in your issue, but then the maintainers may very well take it badly and not be in a rush help you. Make sure you read the [code of conduct](https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md) of the project.\n\n### [](#including-your-environment-information)Including your environment information\n\n🤗 Transformers provides a utility to get all the information we need about your environment. Just type the following in your terminal:\n\nand you should get something like this:\n\n```\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n\n- `transformers` version: 4.12.0.dev0\n- Platform: Linux-5.10.61-1-MANJARO-x86_64-with-arch-Manjaro-Linux\n- Python version: 3.7.9\n- PyTorch version (GPU?): 1.8.1+cu111 (True)\n- Tensorflow version (GPU?): 2.5.0 (True)\n- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)\n- Jax version: 0.2.13\n- JaxLib version: 0.1.65\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: ```\n\nYou can also add a `!` at the beginning of the `transformers-cli env` command to execute it from a notebook cell, and then copy and paste the result at the beginning of your issue.\n\n### [](#tagging-people)Tagging people\n\nTagging people by typing an `@` followed by their GitHub handle will send them a notification so they will see your issue and might reply quicker. Use this with moderation, because the people you tag might not appreciate being notified if it’s something they have no direct link to. If you have looked at the source files related to your bug, you should tag the last person that made changes at the line you think is responsible for your problem (you can find this information by looking at said line on GitHub, selecting it, then clicking “View git blame”).\n\nOtherwise, the template offers suggestions of people to tag. 
In general, never tag more than three people!\n\n### [](#including-a-reproducible-example)Including a reproducible example\n\nIf you have managed to create a self-contained example that produces the bug, now is the time to include it! Type a line with three backticks followed by `python`, like this:\n\nthen paste in your minimal reproducible example and type a new line with three backticks. This will ensure your code is properly formatted.\n\nIf you didn’t manage to create a reproducible example, explain in clear steps how you got to your issue. Include a link to a Google Colab notebook where you got the error if you can. The more information you share, the better able the maintainers will be to reply to you.\n\nIn all cases, you should copy and paste the whole error message you are getting. If you’re working in Colab, remember that some of the frames may be automatically collapsed in the stack trace, so make sure you expand them before copying. Like with the code sample, put that error message between two lines with three backticks, so it’s properly formatted.\n\n### [](#describing-the-expected-behavior[[describing-the-expected-behavior]])Describing the expected behavior\\[\\[describing-the-expected-behavior\\]\\]\n\nExplain in a few lines what you expected to get, so that the maintainers get a full grasp of the problem. This part is generally pretty obvious, so it should fit in one sentence, but in some cases you may have a lot to say.\n\n## [](#and-then-what?[[and-then-what]])And then what?\\[\\[and-then-what\\]\\]\n\nOnce your issue is filed, make sure to quickly check everything looks okay. You can edit the issue if you made a mistake, or even change its title if you realize the problem is different from what you initially thought.\n\nThere is no point pinging people if you don’t get an answer. If no one helps you in a few days, it’s likely that no one could make sense of your problem. Don’t hesitate to go back to the reproducible example. Can you make it shorter and more to the point? If you don’t get an answer in a week, you can leave a message gently asking for help, especially if you’ve edited your issue to include more information on the problem.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tHow to write a good issue - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
\n\t\n\t
\n\t\n\t\n\t
\n\n\t

NLP Course documentation

How to write a good issue

1,182
\n\t\t
\n\t\t\t
\"Hugging\n\t\t
Join the Hugging Face community
\n\t\t

and get access to the augmented documentation experience\n\t\t

\n\t\t\n\t\t
\n\t\t\t

to get started

\n\t\t\t\t

## [](#how-to-write-a-good-issue)How to write a good issue

\"Ask \"Open \"Open

When you encounter something that doesn’t seem right with one of the Hugging Face libraries, you should definitely let us know so we can fix it (the same goes for any open source library, for that matter). If you are not completely certain whether the bug lies in your own code or one of our libraries, the first place to check is the forums. The community will help you figure this out, and the Hugging Face team also closely watches the discussions there.

When you are sure you have a bug on your hands, the first step is to build a minimal reproducible example.

## [](#creating-a-minimal-reproducible-example)Creating a minimal reproducible example

It’s very important to isolate the piece of code that produces the bug, as no one in the Hugging Face team is a magician (yet), and they can’t fix what they can’t see. A minimal reproducible example should, as the name indicates, be reproducible. This means that it should not rely on any external files or data you may have. Try to replace the data you are using with some dummy values that look like your real ones and still produce the same error.

🚨 Many issues in the 🤗 Transformers repository are unsolved because the data used to reproduce them is not accessible.

Once you have something that is self-contained, you can try to reduce it to even fewer lines of code, building what we call a _minimal reproducible example_. While this requires a bit more work on your side, you will almost be guaranteed to get help and a fix if you provide a nice, short bug reproducer.
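For instance, here is a minimal sketch of what such a self-contained reproducer could look like if the bug showed up while tokenizing a dataset. The checkpoint, the dummy sentences, and the `map()` call are all placeholders standing in for your real data and the call that actually fails:

```
from datasets import Dataset
from transformers import AutoTokenizer

# Dummy data with the same structure as the real (private) dataset
dummy_data = {"text": ["I love this!", "Not my cup of tea."], "label": [1, 0]}
dataset = Dataset.from_dict(dummy_data)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# The call that triggers the bug in your code would go here
tokenized = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)
print(tokenized)
```

Anyone can copy, paste, and run this without access to your files, which is exactly what makes it useful to the maintainers.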

If you feel comfortable enough, go inspect the source code where your bug happens. You might find a solution to your problem (in which case you can even suggest a pull request to fix it), but more generally, this can help the maintainers better understand the source when they read your report.

## [](#filling-out-the-issue-template)Filling out the issue template

When you file your issue, you will notice there is a template to fill out. We will follow the one for 🤗 Transformers issues here, but the same kind of information will be required if you report an issue in another repository. Don’t leave the template blank: taking the time to fill it in will maximize your chances of getting an answer and solving your problem.

In general, when filing an issue, always stay courteous. This is an open source project, so you are using free software, and no one has any obligation to help you. You may include what you feel is justified criticism in your issue, but then the maintainers may very well take it badly and not be in a rush to help you. Make sure you read the [code of conduct](https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md) of the project.

### [](#including-your-environment-information)Including your environment information

🤗 Transformers provides a utility to get all the information we need about your environment. Just type the following in your terminal:

```
transformers-cli env
```

and you should get something like this:

```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- `transformers` version: 4.12.0.dev0
- Platform: Linux-5.10.61-1-MANJARO-x86_64-with-arch-Manjaro-Linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.13
- JaxLib version: 0.1.65
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```

You can also add a `!` at the beginning of the `transformers-cli env` command to execute it from a notebook cell, and then copy and paste the result at the beginning of your issue.
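For example, running the following in a Colab or Jupyter cell prints the same report directly in the notebook:

```
!transformers-cli env
```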

### [](#tagging-people)Tagging people

Tagging people by typing an `@` followed by their GitHub handle will send them a notification so they will see your issue and might reply quicker. Use this in moderation, because the people you tag might not appreciate being notified if it’s something they have no direct link to. If you have looked at the source files related to your bug, you should tag the last person that made changes at the line you think is responsible for your problem (you can find this information by looking at said line on GitHub, selecting it, then clicking “View git blame”).

Otherwise, the template offers suggestions of people to tag. In general, never tag more than three people!

### [](#including-a-reproducible-example)Including a reproducible example

If you have managed to create a self-contained example that produces the bug, now is the time to include it! Type a line with three backticks followed by `python`, like this:

````
```python
````

then paste in your minimal reproducible example and type a new line with three backticks. This will ensure your code is properly formatted.

If you didn’t manage to create a reproducible example, explain in clear steps how you got to your issue. Include a link to a Google Colab notebook where you got the error if you can. The more information you share, the better able the maintainers will be to reply to you.

In all cases, you should copy and paste the whole error message you are getting. If you’re working in Colab, remember that some of the frames may be automatically collapsed in the stack trace, so make sure you expand them before copying. Like with the code sample, put that error message between two lines with three backticks, so it’s properly formatted.
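Putting these formatting tips together, the body of your issue might look something like the sketch below (in raw Markdown). The `pipeline` call is only a placeholder for your own reproducer, and the second block is where your real error message goes:

````
```python
from transformers import pipeline

# Minimal reproducible example goes here
classifier = pipeline("sentiment-analysis")
classifier("This input triggers the bug on my machine")
```

```
(paste the full error message and traceback here)
```
````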

### [](#describing-the-expected-behavior)Describing the expected behavior

Explain in a few lines what you expected to get, so that the maintainers get a full grasp of the problem. This part is generally pretty obvious, so it should fit in one sentence, but in some cases you may have a lot to say.

## [](#and-then-what)And then what?

Once your issue is filed, make sure to quickly check everything looks okay. You can edit the issue if you made a mistake, or even change its title if you realize the problem is different from what you initially thought.

There is no point in pinging people if you don’t get an answer right away. If no one helps you in a few days, it’s likely that no one could make sense of your problem. Don’t hesitate to go back to the reproducible example. Can you make it shorter and more to the point? If you don’t get an answer in a week, you can leave a message gently asking for help, especially if you’ve edited your issue to include more information on the problem.

## [](#part-2-completed)Part 2 completed!

\"Ask

Congratulations, you’ve made it through the second part of the course! We’re actively working on the third one, so subscribe to our [newsletter](https://huggingface.curated.co/) to make sure you don’t miss its release.

You should now be able to tackle a range of NLP tasks, and fine-tune or pretrain a model on them. Don’t forget to share your results with the community on the [Model Hub](https://huggingface.co/models).

We can’t wait to see what you will build with the knowledge that you’ve gained!

## [](#end-of-chapter-quiz)End-of-chapter quiz

\"Ask

Let’s test what you learned in this chapter!

1. In which order should you read a Python traceback?

2. What is a minimal reproducible example?

3. Suppose you try to run the following code, which throws an error:

```
from transformers import GPT3ForSequenceClassification

# ImportError: cannot import name 'GPT3ForSequenceClassification' from 'transformers' (/Users/lewtun/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/__init__.py)
# ---------------------------------------------------------------------------
# ImportError                               Traceback (most recent call last)
# /var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_30848/333858878.py in <module>
# ----> 1 from transformers import GPT3ForSequenceClassification

# ImportError: cannot import name 'GPT3ForSequenceClassification' from 'transformers' (/Users/lewtun/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/__init__.py)
```

Which of the following might be a good choice for the title of a forum topic to ask for help?

4. Suppose you’ve tried to run `trainer.train()` and are faced with a cryptic error that doesn’t tell you exactly where the error is coming from. Which of the following is the first place you should look for errors in your training pipeline?

5. What is the best way to debug a CUDA error?

6. What is the best way to get an issue on GitHub fixed?

7. Why is overfitting to one batch usually a good debugging technique?

8. Why is it a good idea to include details on your compute environment with `transformers-cli env` when creating a new issue in the 🤗 Transformers repo?

## [](#introduction-to-gradio)Introduction to Gradio

\"Ask

In this chapter we will be learning about how to build interactive demos for your machine learning models.

Why build a demo or a GUI for your machine learning model in the first place? Demos allow:

  • Machine learning developers to easily present their work to a wide audience including non-technical teams or customers
  • Researchers to more easily reproduce machine learning models and behavior
  • Quality testers or end users to more easily identify and debug failure points of models
  • Diverse users to discover algorithmic biases in models

We’ll be using the Gradio library to build demos for our models. Gradio allows you to build, customize, and share web-based demos for any machine learning model, entirely in Python.
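To give you a feel for how little code this takes, here is a minimal sketch of a Gradio demo; the `greet` function is just a stand-in for a real model:

```
import gradio as gr


def greet(name):
    # Stand-in for a real model prediction
    return f"Hello, {name}!"


gr.Interface(fn=greet, inputs="text", outputs="text").launch()
```

Calling `launch()` starts a local web server and renders the input and output components in your browser (or inline in a notebook).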

Here are some examples of machine learning demos built with Gradio:

  • A sketch recognition model that takes in a sketch and outputs labels of what it thinks is being drawn:
  • An extractive question answering model that takes in a context paragraph and a question and outputs a response and a probability score (we discussed this kind of model in Chapter 7):
  • A background removal model that takes in an image and outputs the image with the background removed:

This chapter is broken down into sections which include both concepts and applications. After you learn the concept in each section, you’ll apply it to build a particular kind of demo, ranging from image classification to speech recognition. By the time you finish this chapter, you’ll be able to build these demos (and many more!) in just a few lines of Python code.

👀 Check out Hugging Face Spaces to see many recent examples of machine learning demos built by the machine learning community!
## [](#understanding-the-interface-class)Understanding the Interface class

\"Ask \"Open \"Open

In this section, we will take a closer look at the `Interface` class, and understand the main parameters used to create one.

## [](#how-to-create-an-interface)How to create an Interface

You’ll notice that the `Interface` class has 3 required parameters:

`Interface(fn, inputs, outputs, ...)`

These parameters are:

  • `fn`: the prediction function that is wrapped by the Gradio interface. This function can take one or more parameters and return one or more values
  • `inputs`: the input component type(s). Gradio provides many pre-built components such as `"image"` or `"mic"`.
  • `outputs`: the output component type(s). Again, Gradio provides many pre-built components e.g. `"image"` or `"label"`.

For a complete list of components, [see the Gradio docs](https://gradio.app/docs). Each pre-built component can be customized by instantiating the class corresponding to the component.

For example, as we saw in the [previous section](/course/chapter9/2), instead of passing in `"textbox"` to the `inputs` parameter, you can pass in a `Textbox(lines=7, label="Prompt")` component to create a textbox with 7 lines and a label.
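As a quick illustration, here is a minimal sketch where only the input component is customized; the `complete_prompt` function is just a placeholder:

```
import gradio as gr


def complete_prompt(prompt):
    # Placeholder for a real text-generation model
    return prompt + " ..."


gr.Interface(
    fn=complete_prompt,
    inputs=gr.Textbox(lines=7, label="Prompt"),  # customized component instead of "textbox"
    outputs="text",
).launch()
```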

Let’s take a look at another example, this time with an Audio component.

## [](#a-simple-example-with-audio)A simple example with audio

As mentioned earlier, Gradio provides many different inputs and outputs. So let’s build an `Interface` that works with audio.

In this example, we’ll build an audio-to-audio function that takes an audio file and simply reverses it.

For the input, we will use the `Audio` component. When using the `Audio` component, you can specify whether you want the `source` of the audio to be a file that the user uploads or a microphone that the user records their voice with. In this case, let’s set it to a `"microphone"`. Just for fun, we’ll add a label to our `Audio` that says “Speak here…”.

In addition, we’d like to receive the audio as a numpy array so that we can easily “reverse” it. So we’ll set the `"type"` to be `"numpy"`, which passes the input data as a tuple of (`sample_rate`, `data`) into our function.

We will also use the `Audio` output component, which can automatically render a tuple with a sample rate and numpy array of data as a playable audio file. In this case, we do not need to do any customization, so we will use the string shortcut `"audio"`.

```
import numpy as np
import gradio as gr


def reverse_audio(audio):
    sr, data = audio
    reversed_audio = (sr, np.flipud(data))
    return reversed_audio


mic = gr.Audio(source="microphone", type="numpy", label="Speak here...")
gr.Interface(reverse_audio, mic, "audio").launch()
```

The code above will produce an interface like the one below (if your browser doesn’t ask you for microphone permissions, [open the demo in a separate tab](https://huggingface.co/spaces/course-demos/audio-reverse)).

You should now be able to record your voice and hear yourself speaking in reverse - spooky 👻!

## [](#handling-multiple-inputs-and-outputs)Handling multiple inputs and outputs

Let’s say we had a more complicated function, with multiple inputs and outputs. In the example below, we have a function that takes a dropdown index, a slider value, and a number, and returns an audio sample of a musical tone.

Take a look at how we pass a list of input and output components, and see if you can follow along with what’s happening.

The key here is that when you pass:

  • a list of input components, each component corresponds to a parameter in order.
  • a list of output components, each component corresponds to a returned value.

The code snippet below shows how three input components line up with the three arguments of the `generate_tone()` function:

```
import numpy as np
import gradio as gr

notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]


def generate_tone(note, octave, duration):
    sr = 48000
    a4_freq, tones_from_a4 = 440, 12 * (octave - 4) + (note - 9)
    frequency = a4_freq * 2 ** (tones_from_a4 / 12)
    duration = int(duration)
    audio = np.linspace(0, duration, duration * sr)
    audio = (20000 * np.sin(audio * (2 * np.pi * frequency))).astype(np.int16)
    return (sr, audio)


gr.Interface(
    generate_tone,
    [
        gr.Dropdown(notes, type="index"),
        gr.Slider(minimum=4, maximum=6, step=1),
        gr.Textbox(type="number", value=1, label="Duration in seconds"),
    ],
    "audio",
).launch()
```

### [](#the-launch-method)The `launch()` method

So far, we have used the `launch()` method to launch the interface, but we haven’t really discussed what it does.

By default, the `launch()` method will launch the demo in a web server that is running locally. If you are running your code in a Jupyter or Colab notebook, then Gradio will embed the demo GUI in the notebook so you can easily use it.

You can customize the behavior of `launch()` through different parameters:

  • `inline` - whether to display the interface inline on Python notebooks.
  • `inbrowser` - whether to automatically launch the interface in a new tab on the default browser.
  • `share` - whether to create a publicly shareable link from your computer for the interface. Kind of like a Google Drive link!
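For instance, tweaking these options might look like the following sketch (the `echo` function and the particular values are only illustrative):

```
import gradio as gr


def echo(text):
    return text


gr.Interface(fn=echo, inputs="text", outputs="text").launch(
    inline=False,    # don't embed the demo in the notebook output
    inbrowser=True,  # open it in a new browser tab instead
    share=False,     # keep the demo local (the default); no public link is created
)
```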

We’ll cover the `share` parameter in a lot more detail in the next section!

## [](#lets-apply-it)✏️ Let's apply it!

Let’s build an interface that allows you to demo a **speech-recognition** model. To make it interesting, we will accept _either_ a mic input or an uploaded file.

As usual, we’ll load our speech recognition model using the `pipeline()` function from 🤗 Transformers. If you need a quick refresher, you can go back to [that section in Chapter 1](/course/chapter1/3). Next, we’ll implement a `transcribe_audio()` function that processes the audio and returns the transcription. Finally, we’ll wrap this function in an `Interface` with the `Audio` components for the inputs and just text for the output. Altogether, the code for this application is the following:

```
from transformers import pipeline
import gradio as gr

model = pipeline("automatic-speech-recognition")


def transcribe_audio(mic=None, file=None):
    if mic is not None:
        audio = mic
    elif file is not None:
        audio = file
    else:
        return "You must either provide a mic recording or a file"
    transcription = model(audio)["text"]
    return transcription


gr.Interface(
    fn=transcribe_audio,
    inputs=[
        gr.Audio(source="microphone", type="filepath", optional=True),
        gr.Audio(source="upload", type="filepath", optional=True),
    ],
    outputs="text",
).launch()
```

If your browser doesn’t ask you for microphone permissions, open the demo in a separate tab.

That’s it! You can now use this interface to transcribe audio. Notice here that by passing in the `optional` parameter as `True`, we allow the user to either provide a microphone or an audio file (or neither, but that will return an error message).

Keep going to see how to share your interface with others!

## [](#sharing-demos-with-others)Sharing demos with others

\"Ask \"Open \"Open

Now that you’ve built a demo, you’ll probably want to share it with others. Gradio demos can be shared in two ways: using a **_temporary share link_** or **_permanent hosting on Spaces_**.

We’ll cover both of these approaches shortly. But before you share your demo, you may want to polish it up 💅.

### [](#polishing-your-gradio-demo)Polishing your Gradio demo:

\"Overview \"Overview

To add additional content to your demo, the `Interface` class supports some optional parameters:

  • title: you can give a title to your demo, which appears above the input and output components.
  • description: you can give a description (in text, Markdown, or HTML) for the interface, which appears above the input and output components and below the title.
  • article: you can also write an expanded article (in text, Markdown, or HTML) explaining the interface. If provided, it appears below the input and output components.
  • theme: don’t like the default colors? Set the theme to use one of default, huggingface, grass, peach. You can also add the dark- prefix, e.g. dark-peach for dark theme (or just dark for the default dark theme).
  • examples: to make your demo way easier to use, you can provide some example inputs for the function. These appear below the UI components and can be used to populate the interface. These should be provided as a nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component.
  • live: if you want to make your demo “live”, meaning that your model reruns every time the input changes, you can set live=True. This makes sense to use with quick models (we’ll see an example at the end of this section).

Using the options above, we end up with a more complete interface. Run the code below so you can chat with Rick and Morty:
title = \"Ask Rick a Question\"\ndescription = \"\"\"\nThe bot was trained to answer questions based on Rick and Morty dialogues. Ask Rick anything!\n<img src=\"https://huggingface.co/spaces/course-demos/Rick_and_Morty_QA/resolve/main/rick.png\" width=200px>\n\"\"\"\n\narticle = \"Check out [the original Rick and Morty Bot](https://huggingface.co/spaces/kingabzpro/Rick_and_Morty_Bot) that this demo is based off of.\"\n\ngr.Interface(\n    fn=predict,\n    inputs=\"textbox\",\n    outputs=\"text\",\n    title=title,\n    description=description,\n    article=article,\n    examples=[[\"What are you doing?\"], [\"Where should we time travel to?\"]],\n).launch()

Try out the interface below:

Sharing your demo with temporary links

Now that we have a working demo of our machine learning model, let's learn how to share a link to our interface. Interfaces can easily be shared publicly by setting `share=True` in the `launch()` method:
gr.Interface(classify_image, \"image\", \"label\").launch(share=True)

This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser for up to 72 hours. Because the processing happens on your device (as long as your device stays on!), you don’t have to worry about packaging any dependencies. If you’re working out of a Google Colab notebook, a share link is always automatically created. It usually looks something like this: XXXXX.gradio.app. Although the link is served through a Gradio URL, Gradio only acts as a proxy for your local server and does not store any data sent through the interfaces.

Keep in mind, however, that these links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. If you set share=False (the default), only a local link is created.
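If you do want to share a link more safely, one option is Gradio’s built-in authentication on `launch()`. A minimal sketch (classify_image is a stand-in for your real prediction function, and the credentials are obviously placeholders):

```
import gradio as gr


# Hypothetical prediction function, standing in for your real model
def classify_image(image):
    return {"cat": 0.9, "dog": 0.1}


# launch() also accepts an `auth` argument: a (username, password) tuple
# (or a list of tuples), so only people with the credentials can use the link
gr.Interface(classify_image, "image", "label").launch(
    share=True, auth=("demo-user", "change-this-password")
)
```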

Hosting your demo on Hugging Face Spaces

A share link that you can pass around to colleagues is cool, but how can you permanently host your demo and have it exist in its own “space” on the internet?

Hugging Face Spaces provides the infrastructure to permanently host your Gradio model on the internet, for free! Spaces allows you to create and push to a (public or private) repo,\nwhere your Gradio\ninterface code will exist in an app.py file. Read a step-by-step tutorial to get started, or watch an example video below.
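For reference, the app.py you push to a Space is just an ordinary Gradio script. A minimal sketch of what one might contain (the greet() function here is a placeholder, not the demo from this chapter):

```
# app.py: minimal sketch of what a Gradio Space might contain
import gradio as gr


def greet(name):
    return f"Hello {name}!"


demo = gr.Interface(fn=greet, inputs="text", outputs="text")

# On Spaces you don't need share=True; the Space itself serves the public URL
demo.launch()
```

Any extra Python dependencies would typically be listed in a requirements.txt file in the same repository.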

✏️ Let's apply it!

Using what we just learned in the sections so far, let’s create the sketch recognition demo we saw in section one of this chapter. Let’s add some customization to our interface and set share=True to create a public link we can pass around.

We can load the labels from class_names.txt and load the pre-trained pytorch model from pytorch_model.bin. Download these files by following the link and clicking download on the top left corner of the file preview. Let’s take a look at the code below to see how we use these files to load our model and create a predict() function:

from pathlib import Path\nimport torch\nimport gradio as gr\nfrom torch import nn\n\nLABELS = Path(\"class_names.txt\").read_text().splitlines()\n\nmodel = nn.Sequential(\n    nn.Conv2d(1, 32, 3, padding=\"same\"),\n    nn.ReLU(),\n    nn.MaxPool2d(2),\n    nn.Conv2d(32, 64, 3, padding=\"same\"),\n    nn.ReLU(),\n    nn.MaxPool2d(2),\n    nn.Conv2d(64, 128, 3, padding=\"same\"),\n    nn.ReLU(),\n    nn.MaxPool2d(2),\n    nn.Flatten(),\n    nn.Linear(1152, 256),\n    nn.ReLU(),\n    nn.Linear(256, len(LABELS)),\n)\nstate_dict = torch.load(\"pytorch_model.bin\", map_location=\"cpu\")\nmodel.load_state_dict(state_dict, strict=False)\nmodel.eval()\n\n\ndef predict(im):\n    x = torch.tensor(im, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.0\n    with torch.no_grad():\n        out = model(x)\n    probabilities = torch.nn.functional.softmax(out[0], dim=0)\n    values, indices = torch.topk(probabilities, 5)\n    return {LABELS[i]: v.item() for i, v in zip(indices, values)}
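As an aside, if you’d rather fetch class_names.txt and pytorch_model.bin programmatically instead of downloading them through the browser, the huggingface_hub library can pull them from the Space. A sketch, assuming the files are still hosted in the dawood/Sketch-Recognition Space:

```
from huggingface_hub import hf_hub_download

# Download both files from the Space repository into the local cache
labels_path = hf_hub_download(
    repo_id="dawood/Sketch-Recognition", filename="class_names.txt", repo_type="space"
)
weights_path = hf_hub_download(
    repo_id="dawood/Sketch-Recognition", filename="pytorch_model.bin", repo_type="space"
)
```

You could then pass labels_path and weights_path to Path() and torch.load() in the snippet above instead of the hard-coded filenames.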

Now that we have a predict() function, the next step is to define and launch our Gradio interface:

interface = gr.Interface(\n    predict,\n    inputs=\"sketchpad\",\n    outputs=\"label\",\n    theme=\"huggingface\",\n    title=\"Sketch Recognition\",\n    description=\"Who wants to play Pictionary? Draw a common object like a shovel or a laptop, and the algorithm will guess in real time!\",\n    article=\"<p style='text-align: center'>Sketch Recognition | Demo Model</p>\",\n    live=True,\n)\ninterface.launch(share=True)

Notice the live=True parameter in Interface, which means that the sketch demo makes\na prediction every time someone draws on the sketchpad (no submit button!).

Furthermore, we also set the share=True argument in the launch() method.\nThis will create a public link that you can\nsend to anyone! When you send this link, the user on the other side can try out the\nsketch recognition model. To reiterate, you could also host the model on Hugging Face Spaces,\nwhich is how we are able to embed the demo above.

Next up, we’ll cover other ways that Gradio can be used with the Hugging Face ecosystem!

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:38.118Z"} {"title":"Building your first demo - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter9/2?fw=pt","markdown":"## [](#building-your-first-demo)Building your first demo\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-9-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter9/section2.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter9/section2.ipynb)\n\nLet’s start by installing Gradio! Since it is a Python package, simply run:\n\n`$ pip install gradio`\n\nYou can run Gradio anywhere, be it from your favourite Python IDE, to Jupyter notebooks or even in Google Colab 🤯! So install Gradio wherever you run Python!\n\nLet’s get started with a simple “Hello World” example to get familiar with the Gradio syntax:\n\n```\nimport gradio as gr\n\n\ndef greet(name):\n return \"Hello \" + name\n\n\ndemo = gr.Interface(fn=greet, inputs=\"text\", outputs=\"text\")\n\ndemo.launch()```\n\nLet’s walk through the code above:\n\n- First, we define a function called `greet()`. In this case, it is a simple function that adds “Hello” before your name, but it can be _any_ Python function in general. For example, in machine learning applications, this function would _call a model to make a prediction_ on an input and return the output.\n- Then, we create a Gradio `Interface` with three arguments, `fn`, `inputs`, and `outputs`. These arguments define the prediction function, as well as the _type_ of input and output components we would like. 
In our case, both components are simple text boxes.\n- We then call the `launch()` method on the `Interface` that we created.\n\nIf you run this code, the interface below will appear automatically within a Jupyter/Colab notebook, or pop in a browser on **[http://localhost:7860](http://localhost:7860/)** if running from a script.\n\nTry using this GUI right now with your own name or some other input!\n\nYou’ll notice that in this GUI, Gradio automatically inferred the name of the input parameter (`name`) and applied it as a label on top of the textbox. What if you’d like to change that? Or if you’d like to customize the textbox in some other way? In that case, you can instantiate a class object representing the input component.\n\nTake a look at the example below:\n\n```\nimport gradio as gr\n\n\ndef greet(name):\n return \"Hello \" + name\n\n\n\ntextbox = gr.Textbox(label=\"Type your name here:\", placeholder=\"John Doe\", lines=2)\n\ngr.Interface(fn=greet, inputs=textbox, outputs=\"text\").launch()```\n\nHere, we’ve created an input textbox with a label, a placeholder, and a set number of lines. You could do the same for the output textbox, but we’ll leave that for now.\n\nWe’ve seen that with just a few lines of code, Gradio lets you create a simple interface around any function with any kind of inputs or outputs. In this section, we’ve started with a simple textbox, but in the next sections, we’ll cover other kinds of inputs and outputs. Let’s now take a look at including some NLP in a Gradio application.\n\n## [](#including-model-predictions)🤖 Including model predictions\n\nLet’s now build a simple interface that allows you to demo a **text-generation** model like GPT-2.\n\nWe’ll load our model using the `pipeline()` function from 🤗 Transformers. If you need a quick refresher, you can go back to [that section in Chapter 1](/course/chapter1/3#text-generation).\n\nFirst, we define a prediction function that takes in a text prompt and returns the text completion:\n\n```\nfrom transformers import pipeline\n\nmodel = pipeline(\"text-generation\")\n\n\ndef predict(prompt):\n completion = model(prompt)[0][\"generated_text\"]\n return completion```\n\nThis function completes prompts that you provide, and you can run it with your own input prompts to see how it works. Here is an example (you might get a different completion):\n\n```\npredict(\"My favorite programming language is\")```\n\n```\n>> My favorite programming language is Haskell. I really enjoyed the Haskell language, but it doesn't have all the features that can be applied to any other language. For example, all it does is compile to a byte array.```\n\nNow that we have a function for generating predictions, we can create and launch an `Interface` in the same way we did earlier:\n\n```\nimport gradio as gr\n\ngr.Interface(fn=predict, inputs=\"text\", outputs=\"text\").launch()```\n\nThat’s it! You can now use this interface to generate text using the GPT-2 model as shown below 🤯.\n\nKeep reading to see how to build other kinds of demos with Gradio!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tBuilding your first demo - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

Let’s start by installing Gradio! Since it is a Python package, simply run:

$ pip install gradio

You can run Gradio anywhere, whether from your favourite Python IDE, in a Jupyter notebook, or even in Google Colab 🤯! So install Gradio wherever you run Python!
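Since the Gradio API evolves quickly, it can help to confirm which version you have installed; a tiny sketch:

```
import gradio as gr

# Print the installed Gradio version
print(gr.__version__)
```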

Let’s get started with a simple “Hello World” example to get familiar with the Gradio syntax:

import gradio as gr\n\n\ndef greet(name):\n    return \"Hello \" + name\n\n\ndemo = gr.Interface(fn=greet, inputs=\"text\", outputs=\"text\")\n\ndemo.launch()

Let’s walk through the code above:

  • First, we define a function called greet(). In this case, it is a simple function that adds “Hello” before your name, but it can be any Python function in general. For example, in machine learning applications, this function would call a model to make a prediction on an input and return the output.
  • Then, we create a Gradio Interface with three arguments, fn, inputs, and outputs. These arguments define the prediction function, as well as the type of input and output components we would like. In our case, both components are simple text boxes.
  • We then call the launch() method on the Interface that we created.

If you run this code, the interface below will appear automatically within a Jupyter/Colab notebook, or pop in a browser on http://localhost:7860 if running from a script.
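If port 7860 is already in use, or you want the demo reachable from other machines on your network, launch() lets you choose the address and port. A small sketch (the port number is arbitrary):

```
import gradio as gr


def greet(name):
    return "Hello " + name


# server_name="0.0.0.0" listens on all network interfaces;
# server_port picks a port other than the default 7860
gr.Interface(fn=greet, inputs="text", outputs="text").launch(
    server_name="0.0.0.0", server_port=7861
)
```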

Try using this GUI right now with your own name or some other input!

You’ll notice that in this GUI, Gradio automatically inferred the name of the input parameter (name)\nand applied it as a label on top of the textbox. What if you’d like to change that?\nOr if you’d like to customize the textbox in some other way? In that case, you can\ninstantiate a class object representing the input component.

Take a look at the example below:

import gradio as gr\n\n\ndef greet(name):\n    return \"Hello \" + name\n\n\n# We instantiate the Textbox class\ntextbox = gr.Textbox(label=\"Type your name here:\", placeholder=\"John Doe\", lines=2)\n\ngr.Interface(fn=greet, inputs=textbox, outputs=\"text\").launch()

Here, we’ve created an input textbox with a label, a placeholder, and a set number of lines.\nYou could do the same for the output textbox, but we’ll leave that for now.
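For completeness, here is a hedged sketch of what customizing the output textbox as well might look like, using the same Textbox class (the label text is just an example):

```
import gradio as gr


def greet(name):
    return "Hello " + name


input_box = gr.Textbox(label="Type your name here:", placeholder="John Doe", lines=2)
# The output component can be customized in exactly the same way as the input
output_box = gr.Textbox(label="Greeting:", lines=2)

gr.Interface(fn=greet, inputs=input_box, outputs=output_box).launch()
```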

We’ve seen that with just a few lines of code, Gradio lets you create a simple interface around any function\nwith any kind of inputs or outputs. In this section, we’ve started with a\nsimple textbox, but in the next sections, we’ll cover other kinds of inputs and outputs. Let’s now take a look at including some NLP in a Gradio application.

🤖 Including model predictions

Let’s now build a simple interface that allows you to demo a text-generation model like GPT-2.

We’ll load our model using the pipeline() function from 🤗 Transformers.\nIf you need a quick refresher, you can go back to that section in Chapter 1.

First, we define a prediction function that takes in a text prompt and returns the text completion:

from transformers import pipeline\n\nmodel = pipeline(\"text-generation\")\n\n\ndef predict(prompt):\n    completion = model(prompt)[0][\"generated_text\"]\n    return completion

This function completes prompts that you provide, and you can run it with your own input prompts to see how it works. Here is an example (you might get a different completion):

predict(\"My favorite programming language is\")
>> My favorite programming language is Haskell. I really enjoyed the Haskell language, but it doesn't have all the features that can be applied to any other language. For example, all it does is compile to a byte array.
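The text-generation pipeline also forwards generation keyword arguments, so you can control how much text comes back. A sketch (the value of max_new_tokens is just an example):

```
from transformers import pipeline

model = pipeline("text-generation")


def predict(prompt):
    # max_new_tokens caps how much text is generated beyond the prompt
    completion = model(prompt, max_new_tokens=30)[0]["generated_text"]
    return completion


predict("My favorite programming language is")
```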

Now that we have a function for generating predictions, we can create and launch an Interface in the same way we did earlier:

import gradio as gr\n\ngr.Interface(fn=predict, inputs=\"text\", outputs=\"text\").launch()

That’s it! You can now use this interface to generate text using the GPT-2 model as shown below 🤯.

Keep reading to see how to build other kinds of demos with Gradio!

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:38.176Z"} {"title":"Integrations with the Hugging Face Hub - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter9/5?fw=pt","markdown":"## [](#integrations-with-the-hugging-face-hub)Integrations with the Hugging Face Hub\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-9-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter9/section5.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter9/section5.ipynb)\n\nTo make your life even easier, Gradio integrates directly with Hugging Face Hub and Hugging Face Spaces. You can load demos from the Hub and Spaces with only _one line of code_.\n\n### [](#loading-models-from-the-hugging-face-hub)Loading models from the Hugging Face Hub\n\nTo start with, choose one of the thousands of models Hugging Face offers through the Hub, as described in \\[Chapter 4\\](/course/chapter4/2).\n\nUsing the special `Interface.load()` method, you pass `\"model/\"` (or, equivalently, `\"huggingface/\"`) followed by the model name. For example, here is the code to build a demo for [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B), a large language model, add a couple of example inputs:\n\n```\nimport gradio as gr\n\ntitle = \"GPT-J-6B\"\ndescription = \"Gradio Demo for GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. 'GPT-J' refers to the class of model, while '6B' represents the number of trainable parameters. To use it, simply add your text, or click one of the examples to load them. Read more at the links below.\"\narticle = \"

GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model

\"\n\ngr.Interface.load(\n \"huggingface/EleutherAI/gpt-j-6B\",\n inputs=gr.Textbox(lines=5, label=\"Input Text\"),\n title=title,\n description=description,\n article=article,\n).launch()```\n\nThe code above will produce the interface below:\n\nLoading a model in this way uses Hugging Face’s [Inference API](https://huggingface.co/inference-api), instead of loading the model in memory. This is ideal for huge models like GPT-J or T0pp which require lots of RAM.\n\n### [](#loading-from-hugging-face-spaces)Loading from Hugging Face Spaces\n\nTo load any Space from the Hugging Face Hub and recreate it locally, you can pass \\`spaces/\\` to the \\`Interface\\`, followed by the name of the Space.\n\nRemember the demo from section 1 that removes the background of an image? Let’s load it from Hugging Face Spaces:\n\n```\ngr.Interface.load(\"spaces/abidlabs/remove-bg\").launch()```\n\nOne of the cool things about loading demos from the Hub or Spaces is that you customize them by overriding any of the parameters. Here, we add a title and get it to work with a webcam instead:\n\n```\ngr.Interface.load(\n \"spaces/abidlabs/remove-bg\", inputs=\"webcam\", title=\"Remove your webcam background!\"\n).launch()```\n\nNow that we’ve explored a few ways to integrate Gradio with the Hugging Face Hub, let’s take a look at some advanced features of the `Interface` class. That’s the topic of the next section!","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tIntegrations with the Hugging Face Hub - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

To make your life even easier, Gradio integrates directly with Hugging Face Hub and Hugging Face Spaces.\nYou can load demos from the Hub and Spaces with only one line of code.

Loading models from the Hugging Face Hub

\n\nTo start with, choose one of the thousands of models Hugging Face offers through the Hub, as described in [Chapter 4](/course/chapter4/2).\n

Using the special Interface.load() method, you pass \"model/\" (or, equivalently, \"huggingface/\") followed by the model name. For example, here is the code to build a demo for GPT-J, a large language model, and add a couple of example inputs:

import gradio as gr\n\ntitle = \"GPT-J-6B\"\ndescription = \"Gradio Demo for GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. 'GPT-J' refers to the class of model, while '6B' represents the number of trainable parameters. To use it, simply add your text, or click one of the examples to load them. Read more at the links below.\"\narticle = \"<p style='text-align: center'><a href='https://github.com/kingoflolz/mesh-transformer-jax' target='_blank'>GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model</a></p>\"\n\ngr.Interface.load(\n    \"huggingface/EleutherAI/gpt-j-6B\",\n    inputs=gr.Textbox(lines=5, label=\"Input Text\"),\n    title=title,\n    description=description,\n    article=article,\n).launch()

The code above will produce the interface below:

Loading a model in this way uses Hugging Face’s Inference API,\ninstead of loading the model in memory. This is ideal for huge models like GPT-J or T0pp which\nrequire lots of RAM.
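The same one-liner works for smaller models too; a sketch using the equivalent "model/" prefix (distilgpt2 is just an example of a lightweight model you could swap in):

```
import gradio as gr

# "model/" and "huggingface/" prefixes are interchangeable ways to load from the Hub
gr.Interface.load("model/distilgpt2").launch()
```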

Loading from Hugging Face Spaces

\n\nTo load any Space from the Hugging Face Hub and recreate it locally, you can pass `spaces/` to the `Interface`, followed by the name of the Space.\n

Remember the demo from section 1 that removes the background of an image? Let’s load it from Hugging Face Spaces:

gr.Interface.load(\"spaces/abidlabs/remove-bg\").launch()

One of the cool things about loading demos from the Hub or Spaces is that you can customize them by overriding any of the parameters. Here, we add a title and get it to work with a webcam instead:

gr.Interface.load(\n    \"spaces/abidlabs/remove-bg\", inputs=\"webcam\", title=\"Remove your webcam background!\"\n).launch()

Now that we’ve explored a few ways to integrate Gradio with the Hugging Face Hub, let’s take a look at some advanced features of the Interface class. That’s the topic of the next section!

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:38.256Z"} {"title":"Gradio, check! - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter9/8?fw=pt","markdown":"## [](#gradio-check)Gradio, check!\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-9-questions)\n\nThis wraps up the chapter on building cool ML demos with Gradio - we hope you enjoyed it! To recap, in this chapter we learned:\n\n- How to create Gradio demos with the high-level `Interface` API, and how to configure different input and output modalities.\n- Different ways to share Gradio demos, through temporary links and hosting on [Hugging Face Spaces](https://huggingface.co/spaces).\n- How to integrate Gradio demos with models and Spaces on the Hugging Face Hub.\n- Advanced features like storing state in a demo or providing authentication.\n- How to have full control of the data flow and layout of your demo with Gradio Blocks.\n\nIf you’d like to test your understanding of the concepts covered in this chapter, check out the quiz in the next section!\n\n## [](#where-to-next)Where to next?\n\nIf you want to learn more about Gradio you can\n\n- Take a look at [Demos](https://github.com/gradio-app/gradio/tree/main/demo) in the repo, there are quite a lot of examples there.\n- See the [Guides](https://gradio.app/guides/) page, where you can find guides about cool and advanced features.\n- Check the [Docs](https://gradio.app/docs/) page to learn the details.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tGradio, check! - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

This wraps up the chapter on building cool ML demos with Gradio - we hope you enjoyed it! To recap, in this chapter we learned:

  • How to create Gradio demos with the high-level Interface API, and how to configure different input and output modalities.
  • Different ways to share Gradio demos, through temporary links and hosting on Hugging Face Spaces.
  • How to integrate Gradio demos with models and Spaces on the Hugging Face Hub.
  • Advanced features like storing state in a demo or providing authentication.
  • How to have full control of the data flow and layout of your demo with Gradio Blocks.

If you’d like to test your understanding of the concepts covered in this chapter, check out the quiz in the next section!

Where to next?

If you want to learn more about Gradio, you can:

  • Take a look at Demos in the repo; there are quite a lot of examples there.
  • See the Guides page, where you can find guides about cool and advanced features.
  • Check the Docs page to learn the details.
\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:39.247Z"} {"title":"End-of-chapter quiz - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter9/9?fw=pt","markdown":"## [](#end-of-chapter-quiz)End-of-chapter quiz\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-9-questions)\n\nLet’s test what you learned in this chapter!\n\n### [](#1.-what-can-you-use-gradio-to-do?)1\\. What can you use Gradio to do?\n\n### [](#2.-gradio-only-works-with-pytorch-models)2\\. Gradio ONLY works with PyTorch models\n\n### [](#3.-where-can-you-launch-a-gradio-demo-from?)3\\. Where can you launch a Gradio demo from?\n\n### [](#4.-gradio-is-designed-primarily-for-nlp-models)4\\. Gradio is designed primarily for NLP models\n\n### [](#5.-which-of-the-following-features-are-supported-by-gradio?)5\\. Which of the following features are supported by Gradio?\n\n### [](#6.-which-of-the-following-are-valid-ways-of-loading-a-hugging-face-model-from-hub-or-spaces?)6\\. Which of the following are valid ways of loading a Hugging Face model from Hub or Spaces?\n\n### [](#7.-select-all-the-steps-necessary-for-adding-state-to-your-gradio-interface)7\\. Select all the steps necessary for adding state to your Gradio interface\n\n### [](#8.-which-of-the-following-are-components-included-in-the-gradio-library?)8\\. Which of the following are components included in the Gradio library?\n\n### [](#9.-what-does-gradio-blocks-allow-you-to-do?)9\\. What does Gradio `Blocks` allow you to do?\n\n### 10\\. You can share a public link to a `Blocks` demo and host a `Blocks` demo on Hugging Face spaces.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tEnd-of-chapter quiz - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

Let’s test what you learned in this chapter!

1. What can you use Gradio to do?

2. Gradio ONLY works with PyTorch models

3. Where can you launch a Gradio demo from?

4. Gradio is designed primarily for NLP models

5. Which of the following features are supported by Gradio?

6. Which of the following are valid ways of loading a Hugging Face model from Hub or Spaces?

7. Select all the steps necessary for adding state to your Gradio interface

8. Which of the following are components included in the Gradio library?

9. What does Gradio Blocks allow you to do?

10. You can share a public link to a Blocks demo and host a Blocks demo on Hugging Face Spaces.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:39.397Z"} {"title":"Advanced Interface features - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter9/6?fw=pt","markdown":"## [](#advanced-interface-features)Advanced Interface features\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-9-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter9/section6.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter9/section6.ipynb)\n\nNow that we can build and share a basic interface, let’s explore some more advanced features such as state, and interpretation.\n\n### [](#using-state-to-persist-data)Using state to persist data\n\nGradio supports _session state_, where data persists across multiple submits within a page load. Session state is useful for building demos of, for example, chatbots where you want to persist data as the user interacts with the model. Note that session state does not share data between different users of your model.\n\nTo store data in a session state, you need to do three things:\n\n1. Pass in an _extra parameter_ into your function, which represents the state of the interface.\n2. At the end of the function, return the updated value of the state as an _extra return value_.\n3. 
Add the ‘state’ input and ‘state’ output components when creating your `Interface`.\n\nSee the chatbot example below:\n\n```\nimport random\n\nimport gradio as gr\n\n\ndef chat(message, history):\n history = history or []\n if message.startswith(\"How many\"):\n response = random.randint(1, 10)\n elif message.startswith(\"How\"):\n response = random.choice([\"Great\", \"Good\", \"Okay\", \"Bad\"])\n elif message.startswith(\"Where\"):\n response = random.choice([\"Here\", \"There\", \"Somewhere\"])\n else:\n response = \"I don't know\"\n history.append((message, response))\n return history, history\n\n\niface = gr.Interface(\n chat,\n [\"text\", \"state\"],\n [\"chatbot\", \"state\"],\n allow_screenshot=False,\n allow_flagging=\"never\",\n)\niface.launch()```\n\nNotice how the state of the output component persists across submits. Note: you can pass in a default value to the state parameter, which is used as the initial value of the state.\n\n### [](#using-interpretation-to-understand-predictions)Using interpretation to understand predictions\n\nMost machine learning models are black boxes and the internal logic of the function is hidden from the end user. To encourage transparency, we’ve made it very easy to add interpretation to your model by simply setting the interpretation keyword in the Interface class to default. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below which shows an image classifier that also includes interpretation:\n\n```\nimport requests\nimport tensorflow as tf\n\nimport gradio as gr\n\ninception_net = tf.keras.applications.MobileNetV2() \n\n\nresponse = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\n\ndef classify_image(inp):\n inp = inp.reshape((-1, 224, 224, 3))\n inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)\n prediction = inception_net.predict(inp).flatten()\n return {labels[i]: float(prediction[i]) for i in range(1000)}\n\n\nimage = gr.Image(shape=(224, 224))\nlabel = gr.Label(num_top_classes=3)\n\ntitle = \"Gradio Image Classifiction + Interpretation Example\"\ngr.Interface(\n fn=classify_image, inputs=image, outputs=label, interpretation=\"default\", title=title\n).launch()```\n\nTest the interpretation function by submitting an input then clicking Interpret under the output component.\n\nBesides the default interpretation method Gradio provides, you can also specify `shap` for the `interpretation` parameter and set the `num_shap` parameter. This uses Shapley-based interpretation, which you can read more about [here](https://christophm.github.io/interpretable-ml-book/shap.html). Lastly, you can also pass in your own interpretation function into the `interpretation` parameter. See an example in Gradio’s getting started page [here](https://gradio.app/getting_started/).\n\nThis wraps up our deep dive into the `Interface` class of Gradio. As we’ve seen, this class makes it simple to create machine learning demos in a few lines of Python code. However, sometimes you’ll want to customise your demo by changing the layout or chaining multiple prediction functions together. Wouldn’t it be nice if we could somehow split the `Interface` into customizable “blocks”? Fortunately, there is! That’s the topic of the final section.","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tAdvanced Interface features - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t

Now that we can build and share a basic interface, let’s explore some more advanced features such as state and interpretation.

Using state to persist data

Gradio supports session state, where data persists across multiple submits within a\npage load. Session state is useful for building demos of, for example, chatbots where you want to\npersist data as the user interacts with the model. Note that session state does not share data between different users of your model.

To store data in a session state, you need to do three things:

  1. Pass in an extra parameter into your function, which represents the state of the interface.
  2. At the end of the function, return the updated value of the state as an extra return value.
  3. Add the ‘state’ input and ‘state’ output components when creating your Interface.

See the chatbot example below:

import random\n\nimport gradio as gr\n\n\ndef chat(message, history):\n    history = history or []\n    if message.startswith(\"How many\"):\n        response = random.randint(1, 10)\n    elif message.startswith(\"How\"):\n        response = random.choice([\"Great\", \"Good\", \"Okay\", \"Bad\"])\n    elif message.startswith(\"Where\"):\n        response = random.choice([\"Here\", \"There\", \"Somewhere\"])\n    else:\n        response = \"I don't know\"\n    history.append((message, response))\n    return history, history\n\n\niface = gr.Interface(\n    chat,\n    [\"text\", \"state\"],\n    [\"chatbot\", \"state\"],\n    allow_screenshot=False,\n    allow_flagging=\"never\",\n)\niface.launch()

Notice how the state of the output component persists across submits.\nNote: you can pass in a default value to the state parameter,\nwhich is used as the initial value of the state.

Using interpretation to understand predictions

Most machine learning models are black boxes and the internal logic of the function is hidden from the end user. To encourage transparency, we’ve made it very easy to add interpretation to your model by simply setting the interpretation keyword in the Interface class to default. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below which shows an image classifier that also includes interpretation:

import requests\nimport tensorflow as tf\n\nimport gradio as gr\n\ninception_net = tf.keras.applications.MobileNetV2()  # load the model\n\n# Download human-readable labels for ImageNet.\nresponse = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\n\ndef classify_image(inp):\n    inp = inp.reshape((-1, 224, 224, 3))\n    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)\n    prediction = inception_net.predict(inp).flatten()\n    return {labels[i]: float(prediction[i]) for i in range(1000)}\n\n\nimage = gr.Image(shape=(224, 224))\nlabel = gr.Label(num_top_classes=3)\n\ntitle = \"Gradio Image Classification + Interpretation Example\"\ngr.Interface(\n    fn=classify_image, inputs=image, outputs=label, interpretation=\"default\", title=title\n).launch()

Test the interpretation function by submitting an input then clicking Interpret under the output component.

Besides the default interpretation method Gradio provides, you can also specify shap for the interpretation parameter and set the num_shap parameter. This uses Shapley-based interpretation, which you can read more about here.\nLastly, you can also pass in your own interpretation function into the interpretation parameter. See an example in Gradio’s getting started page here.
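As a quick sketch of what that might look like, continuing from the image-classifier example above (it reuses classify_image, image, label, and title; the value of num_shap is arbitrary):

```
import gradio as gr

# Switch to Shapley-based interpretation instead of the "default" method
gr.Interface(
    fn=classify_image,
    inputs=image,
    outputs=label,
    interpretation="shap",
    num_shap=5,  # example value: more samples is more accurate but slower
    title=title,
).launch()
```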

This wraps up our deep dive into the Interface class of Gradio. As we’ve seen, this class makes it simple to create machine learning demos in a few lines of Python code. However, sometimes you’ll want to customise your demo by changing the layout or chaining multiple prediction functions together. Wouldn’t it be nice if we could somehow split the Interface into customizable “blocks”? Fortunately, we can! That’s the topic of the final section.

\n\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\n\n","crawlDate":"2023-06-27T20:00:39.882Z"} {"title":"Introduction to Gradio Blocks - Hugging Face NLP Course","url":"https://huggingface.co/learn/nlp-course/chapter9/7?fw=pt","markdown":"## [](#introduction-to-gradio-blocks)Introduction to Gradio Blocks\n\n[![Ask a Question](https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4=)](https://discuss.huggingface.co/t/chapter-9-questions) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter9/section7.ipynb) [![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter9/section7.ipynb)\n\nIn the previous sections we have explored and created demos using the `Interface` class. In this section we will introduce our **newly developed** low-level API called `gradio.Blocks`.\n\nNow, what’s the difference between `Interface` and `Blocks`?\n\n- ⚡ `Interface`: a high-level API that allows you to create a full machine learning demo simply by providing a list of inputs and outputs.\n \n- 🧱 `Blocks`: a low-level API that allows you to have full control over the data flows and layout of your application. You can build very complex, multi-step applications using `Blocks` (as in “building blocks”).\n \n\n### [](#why-blocks-)Why Blocks 🧱?\n\nAs we saw in the previous sections, the `Interface` class allows you to easily create full-fledged machine learning demos with just a few lines of code. The `Interface` API is extremely easy to use but lacks the flexibility that the `Blocks` API provides. For example, you might want to:\n\n- Group together related demos as multiple tabs in one web application\n- Change the layout of your demo, e.g. 
to specify where the inputs and outputs are located\n- Have multi-step interfaces, in which the output of one model becomes the input to the next model, or have more flexible data flows in general\n- Change a component’s properties (for example, the choices in a dropdown) or its visibility based on user input\n\nWe will explore all of these concepts below.\n\n### [](#creating-a-simple-demo-using-blocks)Creating a simple demo using Blocks\n\nAfter you have installed Gradio, run the code below as a Python script, a Jupyter notebook, or a Colab notebook.\n\n```\nimport gradio as gr\n\n\ndef flip_text(x):\n return x[::-1]\n\n\ndemo = gr.Blocks()\n\nwith demo:\n gr.Markdown(\n \"\"\"\n # Flip Text!\n Start typing below to see the output.\n \"\"\"\n )\n input = gr.Textbox(placeholder=\"Flip this text\")\n output = gr.Textbox()\n\n input.change(fn=flip_text, inputs=input, outputs=output)\n\ndemo.launch()```\n\nThis simple example above introduces 4 concepts that underlie Blocks:\n\n1. Blocks allow you to build web applications that combine markdown, HTML, buttons, and interactive components simply by instantiating objects in Python inside of a `with gradio.Blocks` context.\n \n 🙋If you're not familiar with the \\`with\\` statement in Python, we recommend checking out the excellent \\[tutorial\\](https://realpython.com/python-with-statement/) from Real Python. Come back here after reading that 🤗\n \n The order in which you instantiate components matters as each element gets rendered into the web app in the order it was created. (More complex layouts are discussed below)\n2. You can define regular Python functions anywhere in your code and run them with user input using `Blocks`. In our example, we have a simple function that “flips” the input text, but you can write any Python function, from a simple calculation to processing the predictions from a machine learning model.\n \n3. You can assign events to any `Blocks` component. This will run your function when the component is clicked, changed, etc. When you assign an event, you pass in three parameters: `fn`: the function that should be called, `inputs`: the (list) of input component(s), and `outputs`: the (list) of output components that should be called.\n \n In the example above, we run the `flip_text()` function when the value in the `Textbox` named input `input` changes. The event reads the value in `input`, passes it as the name parameter to `flip_text()`, which then returns a value that gets assigned to our second `Textbox` named `output`.\n \n To see a list of events that each component supports, see the Gradio [documentation](https://www.gradio.app/docs/).\n \n4. Blocks automatically figures out whether a component should be interactive (accept user input) or not, based on the event triggers you define. In our example, the first textbox is interactive, since its value is used by the `flip_text()` function. The second textbox is not interactive, since its value is never used as an input. In some cases, you might want to override this, which you can do by passing a boolean to the `interactive` parameter of the component (e.g. `gr.Textbox(placeholder=\"Flip this text\", interactive=True)`).\n \n\n### [](#customizing-the-layout-of-your-demo)Customizing the layout of your demo\n\nHow can we use `Blocks` to customize the layout of our demo? By default, `Blocks` renders the components that you create vertically in one column. 
You can change that by creating additional columns `with gradio.Column():` or rows `with gradio.Row():` and creating components within those contexts.\n\nHere’s what you should keep in mind: any components created under a `Column` (this is also the default) will be laid out vertically. Any component created under a `Row` will be laid out horizontally, similar to the [flexbox model in web development](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox).\n\nFinally, you can also create tabs for your demo by using the `with gradio.Tabs()` context manager. Within this context, you can create multiple tabs by specifying `with gradio.TabItem(name_of_tab):` children. Any component created inside of a `with gradio.TabItem(name_of_tab):` context appears in that tab.\n\nNow let’s add a `flip_image()` function to our demo and add a new tab that flips images. Below is an example with 2 tabs and also uses a Row:\n\n```\nimport numpy as np\nimport gradio as gr\n\ndemo = gr.Blocks()\n\n\ndef flip_text(x):\n return x[::-1]\n\n\ndef flip_image(x):\n return np.fliplr(x)\n\n\nwith demo:\n gr.Markdown(\"Flip text or image files using this demo.\")\n with gr.Tabs():\n with gr.TabItem(\"Flip Text\"):\n with gr.Row():\n text_input = gr.Textbox()\n text_output = gr.Textbox()\n text_button = gr.Button(\"Flip\")\n with gr.TabItem(\"Flip Image\"):\n with gr.Row():\n image_input = gr.Image()\n image_output = gr.Image()\n image_button = gr.Button(\"Flip\")\n\n text_button.click(flip_text, inputs=text_input, outputs=text_output)\n image_button.click(flip_image, inputs=image_input, outputs=image_output)\n\ndemo.launch()```\n\nYou’ll notice that in this example, we’ve also created a `Button` component in each tab, and we’ve assigned a click event to each button, which is what actually runs the function.\n\n### [](#exploring-events-and-state)Exploring events and state\n\nJust as you can control the layout, `Blocks` gives you fine-grained control over what events trigger function calls. Each component and many layouts have specific events that they support.\n\nFor example, the `Textbox` component has 2 events: `change()` (when the value inside of the textbox changes), and `submit()` (when a user presses the enter key while focused on the textbox). More complex components can have even more events: for example, the `Audio` component also has separate events for when the audio file is played, cleared, paused, etc. See the documentation for the events each component supports.\n\nYou can attach event trigger to none, one, or more of these events. You create an event trigger by calling the name of the event on the component instance as a function — e.g. `textbox.change(...)` or `btn.click(...)`. The function takes in three parameters, as discussed above:\n\n- `fn`: the function to run\n- `inputs`: a (list of) component(s) whose values should supplied as the input parameters to the function. Each component’s value gets mapped to the corresponding function parameter, in order. This parameter can be None if the function does not take any parameters.\n- `outputs`: a (list of) component(s) whose values should be updated based on the values returned by the function. Each return value sets the corresponding component’s value, in order. 
This parameter can be None if the function does not return anything.\n\nYou can even make the input and output component be the same component, as we do in this example that uses a GPT model to do text completion:\n\n```\nimport gradio as gr\n\napi = gr.Interface.load(\"huggingface/EleutherAI/gpt-j-6B\")\n\n\ndef complete_with_gpt(text):\n \n return text[:-50] + api(text[-50:])\n\n\nwith gr.Blocks() as demo:\n textbox = gr.Textbox(placeholder=\"Type here and press enter...\", lines=4)\n btn = gr.Button(\"Generate\")\n\n btn.click(complete_with_gpt, textbox, textbox)\n\ndemo.launch()```\n\n### [](#creating-multi-step-demos)Creating multi-step demos\n\nIn some cases, you might want a _multi-step demo_, in which you reuse the output of one function as the input to the next. This is really easy to do with `Blocks`, as you can use a component for the input of one event trigger but the output of another. Take a look at the text component in the example below, its value is the result of a speech-to-text model, but also gets passed into a sentiment analysis model:\n\n```\nfrom transformers import pipeline\n\nimport gradio as gr\n\nasr = pipeline(\"automatic-speech-recognition\", \"facebook/wav2vec2-base-960h\")\nclassifier = pipeline(\"text-classification\")\n\n\ndef speech_to_text(speech):\n text = asr(speech)[\"text\"]\n return text\n\n\ndef text_to_sentiment(text):\n return classifier(text)[0][\"label\"]\n\n\ndemo = gr.Blocks()\n\nwith demo:\n audio_file = gr.Audio(type=\"filepath\")\n text = gr.Textbox()\n label = gr.Label()\n\n b1 = gr.Button(\"Recognize Speech\")\n b2 = gr.Button(\"Classify Sentiment\")\n\n b1.click(speech_to_text, inputs=audio_file, outputs=text)\n b2.click(text_to_sentiment, inputs=text, outputs=label)\n\ndemo.launch()```\n\n### [](#updating-component-properties)Updating Component Properties\n\nSo far, we have seen how to create events to update the value of another component. But what happens if you want to change other properties of a component, like the visibility of a textbox or the choices in a radio button group? You can do this by returning a component class’s `update()` method instead of a regular return value from your function.\n\nThis is most easily illustrated with an example:\n\n```\nimport gradio as gr\n\n\ndef change_textbox(choice):\n if choice == \"short\":\n return gr.Textbox.update(lines=2, visible=True)\n elif choice == \"long\":\n return gr.Textbox.update(lines=8, visible=True)\n else:\n return gr.Textbox.update(visible=False)\n\n\nwith gr.Blocks() as block:\n radio = gr.Radio(\n [\"short\", \"long\", \"none\"], label=\"What kind of essay would you like to write?\"\n )\n text = gr.Textbox(lines=2, interactive=True)\n\n radio.change(fn=change_textbox, inputs=radio, outputs=text)\n block.launch()```\n\nWe just explored all the core concepts of `Blocks`! Just like with `Interfaces`, you can create cool demos that can be shared by using `share=True` in the `launch()` method or deployed on [Hugging Face Spaces](https://huggingface.co/spaces).","html":"\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t\n\n\t\t\n\t\t\n\t\t\n\t\t\n\n\t\t \n\n\t\tIntroduction to Gradio Blocks - Hugging Face NLP Course\n\n\t\t\n\t\n\t\n\t\t
## [](#live-sessions-and-workshops)Live sessions and workshops

For the release of parts 1 and 2 of the course, we organized several live coding sessions and workshops. You can find links to the recordings of these sessions and workshops below.

## [](#live-coding-sessions)Live coding sessions

For the first session, Sylvain goes through Chapter 1 of the course with you, explaining it step by step:

In the second session, it is Lewis' turn to present Chapter 2:

Because Chapter 2 is so cool, Sylvain has also given a walkthrough of it!

For Chapter 3, Lewis returns to guide you through the code:

Finally, Omar concludes the live sessions related to the first part of the course by tackling Chapter 4:

## [](#workshops)Workshops

In the first workshop, Merve welcomes Lewis to discuss section 7 of Chapter 7 about [question answering](https://huggingface.co/course/chapter7/7?fw=pt).

For the second workshop, Merve hosts Leandro to talk about Chapter 7, section 6 on [training a causal language model from scratch](https://huggingface.co/course/chapter7/6?fw=pt), with an application to [CodeParrot](https://huggingface.co/codeparrot).
## [](#gradio-blocks-party)Gradio Blocks Party

Along with the release of the Gradio chapter of the course, Hugging Face hosted a community event on building cool machine learning demos using the new Gradio Blocks feature.

You can find all the demos that the community created under the [`Gradio-Blocks`](https://huggingface.co/Gradio-Blocks) organisation on the Hub. Here are a few examples from the winners:

**Natural language to SQL**
## [](#part-2-release-event)Part 2 Release Event

For the release of part 2 of the course, we organized a live event with two days of talks before a fine-tuning sprint. If you missed it, you can catch up with the talks, which are all listed below!

## [](#day-1-a-high-level-view-of-transformers-and-how-to-train-them)Day 1: A high-level view of Transformers and how to train them

**Thomas Wolf:** _Transfer Learning and the birth of the Transformers library_

![A visual summary of Thom's talk](https://i.imgur.com/9eq8oUi.png)

Thomas Wolf is co-founder and Chief Science Officer of Hugging Face. The tools created by Thomas Wolf and the Hugging Face team are used across more than 5,000 research organisations including Facebook Artificial Intelligence Research, Google Research, DeepMind, Amazon Research, Apple, and the Allen Institute for Artificial Intelligence, as well as most university departments. Thomas Wolf is the initiator and senior chair of the largest research collaboration that has ever existed in Artificial Intelligence: ["BigScience"](https://bigscience.huggingface.co/), as well as a set of widely used [libraries and tools](https://github.com/huggingface/). Thomas Wolf is also a prolific educator, a thought leader in the field of Artificial Intelligence and Natural Language Processing, and a regular invited speaker at conferences all around the world ([https://thomwolf.io](https://thomwolf.io/)).

**Jay Alammar:** _A gentle visual intro to Transformers models_

![A visual summary of Jay's talk](https://i.imgur.com/rOZAuE9.png)

Through his popular ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts from the basic (ending up in the NumPy and Pandas docs) to the cutting-edge (Transformers, BERT, GPT-3).

**Margaret Mitchell:** _On Values in ML Development_

![A visual summary of Margaret's talk](https://i.imgur.com/NuIsnY3.png)

Margaret Mitchell is a researcher working on Ethical AI, currently focused on the ins and outs of ethics-informed AI development in tech. She has published over 50 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google's Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics Google-internally. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation, and was a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master's in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005 to 2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.

**Matthew Watson and Chen Qian:** _NLP workflows with Keras_

![A visual summary of Matt and Chen's talk](https://i.imgur.com/1vD2az8.png)

Matthew Watson is a machine learning engineer on the Keras team, with a focus on high-level modeling APIs. He studied Computer Graphics as an undergraduate and completed a Master's at Stanford University. An almost English major who turned towards computer science, he is passionate about working across disciplines and making NLP accessible to a wider audience.

Chen Qian is a software engineer on the Keras team, with a focus on high-level modeling APIs. Chen got a Master's degree in Electrical Engineering from Stanford University, and he is especially interested in simplifying code implementations of ML tasks and large-scale ML.

**Mark Saroufim:** _How to Train a Model with PyTorch_

![A visual summary of Mark's talk](https://i.imgur.com/TPmlkm8.png)

Mark Saroufim is a Partner Engineer at PyTorch working on OSS production tools including TorchServe and PyTorch Enterprise. In his past lives, Mark was an Applied Scientist and Product Manager at Graphcore, [yuri.ai](http://yuri.ai/), Microsoft, and NASA's JPL. His primary passion is to make programming more fun.

**Jakob Uszkoreit:** _It Ain't Broke So ~Don't Fix~ Let's Break It_

![A visual summary of Jakob's talk](https://i.imgur.com/5dWQeNB.png)

Jakob Uszkoreit is the co-founder of Inceptive. Inceptive designs RNA molecules for vaccines and therapeutics using large-scale deep learning in a tight loop with high-throughput experiments, with the goal of making RNA-based medicines more accessible, more effective, and more broadly applicable. Previously, Jakob worked at Google for more than a decade, leading research and development teams in Google Brain, Research, and Search working on deep learning fundamentals, computer vision, language understanding, and machine translation.

## [](#day-2-the-tools-to-use)Day 2: The tools to use

**Lewis Tunstall:** _Simple Training with the 🤗 Transformers Trainer_

Lewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of the O'Reilly book [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098136789/). You can follow him on Twitter (@\_lewtun) for NLP tips and tricks!

**Matthew Carrigan:** _New TensorFlow Features for 🤗 Transformers and 🤗 Datasets_

Matt is responsible for TensorFlow maintenance at Transformers, and will eventually lead a coup against the incumbent PyTorch faction, which will likely be coordinated via his Twitter account @carrigmat.

**Lysandre Debut:** _The Hugging Face Hub as a means to collaborate on and share Machine Learning projects_

![A visual summary of Lysandre's talk](https://i.imgur.com/TarIPCz.png)

Lysandre is a Machine Learning Engineer at Hugging Face where he is involved in many open source projects. His aim is to make Machine Learning accessible to everyone by developing powerful tools with a very simple API.

**Lucile Saulnier:** _Get your own tokenizer with 🤗 Transformers & 🤗 Tokenizers_

Lucile is a machine learning engineer at Hugging Face, developing and supporting the use of open source tools. She is also actively involved in many research projects in the field of Natural Language Processing such as collaborative training and BigScience.

**Sylvain Gugger:** _Supercharge your PyTorch training loop with 🤗 Accelerate_

Sylvain is a Research Engineer at Hugging Face and one of the core maintainers of 🤗 Transformers and the developer behind 🤗 Accelerate. He likes making model training more accessible.

**Merve Noyan:** _Showcase your model demos with 🤗 Spaces_

Merve is a developer advocate at Hugging Face, working on developing tools and building content around them to democratize machine learning for everyone.

**Abubakar Abid:** _Building Machine Learning Applications Fast_

![A visual summary of Abubakar's talk](https://i.imgur.com/qWIFeiF.png)

Abubakar Abid is the CEO of [Gradio](https://www.gradio.app). He received his Bachelor of Science in Electrical Engineering and Computer Science from MIT in 2015, and his PhD in Applied Machine Learning from Stanford in 2021. In his role as the CEO of Gradio, Abubakar works on making machine learning models easier to demo, debug, and deploy.

**Mathieu Desvé:** _AWS ML Vision: Making Machine Learning Accessible to all Customers_

![A visual summary of Mathieu's talk](https://i.imgur.com/oLdZTKy.png)

Technology enthusiast, maker in my free time. I like challenges and solving the problems of clients and users, and I work with talented people to learn every day. Since 2004, I have worked in multiple positions, switching between frontend, backend, infrastructure, operations, and management. I try to solve common technical and managerial issues in an agile manner.

**Philipp Schmid:** _Managed Training with Amazon SageMaker and 🤗 Transformers_

Philipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.