Columns:
URL (string, 30 to 87 chars)
Headline (string, 11 to 143 chars)
Authors (string, 5 to 190 chars)
Publication Date (string, 11 to 18 chars)
Article Text (string, 140 to 47.6k chars)
https://huggingface.co/blog/ethical-charter-multimodal
Putting ethical principles at the core of the research lifecycle
Lucile Saulnier, Siddharth Karamcheti, Hugo Laurençon, Leo Tronchon, Thomas Wang, Victor Sanh, Amanpreet Singh, Giada Pistilli, Sasha Luccioni, Yacine Jernite, Margaret Mitchell, Douwe Kiela
May 19, 2022
Ethical charter - Multimodal project. Purpose of the ethical charter: It has been well documented that machine learning research and applications can potentially lead to "data privacy issues, algorithmic biases, automation risks and malicious uses" (NeurIPS 2021 ethics guidelines). The purpose of this short document is...
https://huggingface.co/blog/deep-rl-q-part1
An Introduction to Q-Learning Part 1
Thomas Simonini
May 18, 2022
Unit 2, part 1 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new, updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction. This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here....
https://huggingface.co/blog/sasha-luccioni-interview
Machine Learning Experts - Sasha Luccioni
Britney Muller
May 17, 2022
🤗 Welcome to Machine Learning Experts - Sasha Luccioni 🚀 If you're interested in learning how ML Experts, like Sasha, can help accelerate your ML roadmap, visit: hf.co/support. Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller, and today's guest is Sasha Luccioni. Sasha is a Research Scient...
https://huggingface.co/blog/fellowship
Announcing the Hugging Face Fellowship Program
Merve Noyan, Omar Espejel
May 17, 2022
The Fellowship is a network of exceptional people from different backgrounds who contribute to the Machine Learning open-source ecosystem 🚀. The goal of the program is to empower key contributors to enable them to scale their impact while inspiring others to contribute as well. How the Fellowship works 🙌🏻 This is H...
https://huggingface.co/blog/gradio-blocks
Gradio 3.0 is Out!
Abubakar Abid
May 16, 2022
Machine learning demos are an increasingly vital part of releasing a model. Demos allow anyone — not just ML engineers — to try out a model in the browser, give feedback on predictions, and build trust in the model if it performs well. More than 600,000 ML demos have been built with the Gradio library since its first v...
https://huggingface.co/blog/ml-director-insights-2
Director of Machine Learning Insights [Part 2: SaaS Edition]
Britney Muller
May 13, 2022
If you or your team are interested in building ML solutions faster, visit hf.co/support today! 👋 Welcome to Part 2 of our Director of Machine Learning Insights series. Check out Part 1 here. Directors of Machine Learning have a unique seat at the AI table, spanning the perspective of various roles and responsibilities. ...
https://huggingface.co/blog/ambassadors
Student Ambassador Program’s call for applications is open!
Violette Lepercq
May 13, 2022
Student Ambassador Program’s call for applications is open!
https://huggingface.co/blog/optimum-inference
Accelerated Inference with Optimum and Transformers Pipelines
Philipp Schmid
May 10, 2022
Inference has landed in Optimum with support for Hugging Face Transformers pipelines, including text-generation using ONNX Runtime. The adoption of BERT and Transformers continues to grow. Transformer-based models are now achieving state-of-the-art performance not only in Natural Language Processing but also in Compute...
https://huggingface.co/blog/series-c
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
Hugging Face
May 9, 2022
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
https://huggingface.co/blog/fastai
Welcome fastai to the Hugging Face Hub
Omar Espejel
May 6, 2022
Making neural nets uncool again... and sharing them. Few have done as much as the fast.ai ecosystem to make Deep Learning accessible. Our mission at Hugging Face is to democratize good Machine Learning. Let's make exclusivity in access to Machine Learning, including pre-trained models, a thing of the past, and let's push ...
https://huggingface.co/blog/deep-rl-intro
An Introduction to Deep Reinforcement Learning
Thomas Simonini, Omar Sanseviero
May 4, 2022
Chapter 1 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new, updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction. This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here. ⚠️ A ...
https://huggingface.co/blog/pytorch-fsdp
Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel
Sourab Mangrulkar, Sylvain Gugger
May 2, 2022
In this post we will look at how we can leverage the Accelerate library for training large models, which enables users to leverage the latest features of PyTorch FullyShardedDataParallel (FSDP). Motivation 🤗: With the ever-increasing scale, size, and parameter counts of Machine Learning (ML) models, ML practitioners are finding ...
https://huggingface.co/blog/opinion-classification-with-kili
Opinion Classification with Kili and HuggingFace AutoTrain
Alper
April 28, 2022
Introduction: Understanding your users' needs is crucial in any user-related business. But it also requires a lot of hard work and analysis, which is quite expensive. Why not leverage Machine Learning, then, with much less coding by using AutoML? In this article, we will leverage HuggingFace AutoTrain and Kili to build a...
https://huggingface.co/blog/ml-director-insights
Director of Machine Learning Insights [Part 1]
Britney Muller
April 27, 2022
Few seats at the Machine Learning table span technical skill, problem solving, and business acumen like Directors of Machine Learning. Directors of Machine Learning and/or Data Science are often expected to design ML systems, have deep knowledge of mathematics, familiarity with ML frameworks, rich data architecture ...
https://huggingface.co/blog/getting-started-habana
Getting Started with Transformers on Habana Gaudi
Julien Simon
April 26, 2022
A couple of weeks ago, we had the pleasure of announcing that Habana Labs and Hugging Face would partner to accelerate Transformer model training. Habana Gaudi accelerators deliver up to 40% better price performance for training machine learning models compared to the latest GPU-based Amazon EC2 instances. We are super...
https://huggingface.co/blog/education
Introducing Hugging Face for Education 🤗
Violette Lepercq
April 25, 2022
Given that machine learning will make up the overwhelming majority of software development and that non-technical people will be exposed to AI systems more and more, one of the main challenges of AI is adapting and enhancing employee skills. It is also becoming necessary to support teaching staff in proactively taking ...
https://huggingface.co/blog/supercharge-customer-service-with-machine-learning
Supercharged Customer Service with Machine Learning
Patrick von Platen
April 25, 2022
In this blog post, we will simulate a real-world customer service use case and use machine learning tools from the Hugging Face ecosystem to address it. We strongly recommend using this notebook as a template/example to solve your real-world use case. Defining Task, Dataset & Model: Before jumping into the actual codin...
https://huggingface.co/blog/carbon-emissions-on-the-hub
CO2 Emissions and the 🤗 Hub: Leading the Charge
Sasha Luccioni, Zachary Mueller, Nate Raw
April 22, 2022
What are CO2 emissions and why are they important? Climate change is one of the greatest challenges that we are facing, and reducing emissions of greenhouse gases such as carbon dioxide (CO2) is an important part of tackling this problem. Training and deploying machine learning models will emit CO2 due to the energy usag...
https://huggingface.co/blog/lewis-tunstall-interview
Machine Learning Experts - Lewis Tunstall
Britney Muller
April 13, 2022
🤗 Welcome to Machine Learning Experts - Lewis Tunstall. Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller, and today's guest is Lewis Tunstall. Lewis is a Machine Learning Engineer at Hugging Face, where he works on applying Transformers to automate business processes and solve MLOps challeng...
https://huggingface.co/blog/habana
Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training
Susan Lansing
April 12, 2022
Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training
https://huggingface.co/blog/transformers-design-philosophy
Don't Repeat Yourself*
Patrick von Platen
April 5, 2022
🤗 Transformers Design Philosophy. "Don't repeat yourself", or DRY, is a well-known principle of software development. The principle originates from "The Pragmatic Programmer", one of the most widely read books on code design. The principle's simple message makes obvious sense: don't rewrite logic that already exists somewhere...
https://huggingface.co/blog/decision-transformers
Introducing Decision Transformers on Hugging Face 🤗
Edward Beeching, Thomas Simonini
March 28, 2022
At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. Recently, we have integrated Deep RL frameworks such as Stable-Baselines3. And today we are happy to announce that we integrated the Decision Transformer, an Offline Reinforcement Learning method, into the...
https://huggingface.co/blog/meg-mitchell-interview
Machine Learning Experts - Margaret Mitchell
Britney Muller
March 23, 2022
Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is none other than Margaret Mitchell (Meg for short). Meg founded & co-led Google’s Ethical AI Group, is a pioneer in the field of Machine Learning, has published over 50 papers, and is a leading researcher in Ethical AI.Y...
https://huggingface.co/blog/ai-residency
Announcing the 🤗 AI Research Residency Program 🎉 🎉 🎉
Douwe Kiela
March 22, 2022
The 🤗 Research Residency Program is a 9-month opportunity to launch or advance your career in machine learning research 🚀. The goal of the residency is to help you grow into an impactful AI researcher. Residents will work alongside Researchers from our Science Team. Together, you will pick a research problem and then...
https://huggingface.co/blog/fine-tune-segformer
Fine-Tune a Semantic Segmentation Model with a Custom Dataset
Tobias Cornille, Niels Rogge
March 17, 2022
This guide shows how you can fine-tune SegFormer, a state-of-the-art semantic segmentation model. Our goal is to build a model for a pizza delivery robot, so it can see where to drive and recognize obstacles 🍕🤖. We'll first label a set of sidewalk images on Segments.ai. Then we'll fine-tune a pre-trained SegFormer mo...
https://huggingface.co/blog/bert-inferentia-sagemaker
Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia
Philipp Schmid
March 16, 2022
notebook: sagemaker/18_inferentia_inference. The adoption of BERT and Transformers continues to grow. Transformer-based models are now achieving state-of-the-art performance not only in Natural Language Processing but also in Computer Vision, Speech, and Time-Series. 💬 🖼 🎤 ⏳ Companies are now slowly moving from the ex...
https://huggingface.co/blog/image-search-datasets
Image search with 🤗 datasets
Daniel van Strien
March 16, 2022
🤗 datasets is a library that makes it easy to access and share datasets. It also makes it easy to process data efficiently, including working with data which doesn't fit into memory. When datasets was first launched, it was associated mostly with text data. However, recently, datasets has added increased support for ...
https://huggingface.co/blog/constrained-beam-search
Guiding Text Generation with Constrained Beam Search in 🤗 Transformers
Chan Woo Kim
March 11, 2022
Introduction: This blog post assumes that the reader is familiar with text generation methods using the different variants of beam search, as explained in the blog post "How to generate text: using different decoding methods for language generation with Transformers". Unlike ordinary beam search, constrained beam search a...
https://huggingface.co/blog/bert-101
BERT 101 🤗 State Of The Art NLP Model Explained
Britney Muller
March 2, 2022
What is BERT? BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing. It was developed in 2018 by researchers at Google AI Language and serves as a Swiss Army knife solution to 11+ of the most common language tasks, such as sentiment anal...
https://huggingface.co/blog/fine-tune-vit
Fine-Tune ViT for Image Classification with 🤗 Transformers
Nate Raw
February 11, 2022
Just as transformer-based models have revolutionized NLP, we're now seeing an explosion of papers applying them to all sorts of other domains. One of the most revolutionary of these was the Vision Transformer (ViT), which was introduced in June 2021 by a team of researchers at Google Brain. This paper explored how you ...
https://huggingface.co/blog/sentiment-analysis-python
Getting Started with Sentiment Analysis using Python
Federico Pascual
February 2, 2022
Sentiment analysis is the automated process of tagging data according to its sentiment, such as positive, negative, or neutral. Sentiment analysis allows companies to analyze data at scale, detect insights, and automate processes. In the past, sentiment analysis used to be limited to researchers, machine learning engin...
https://huggingface.co/blog/asr-chunking
Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers
Nicolas Patry
February 1, 2022
Wav2Vec2 is a popular pre-trained model for speech recognition. Released in September 2020 by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, e.g. G. Ng et al., 2021; Chen et al., 2021; Hsu et al., 2021; and Babu et al., 2021. On the Hugging Face Hub, Wav2Vec2's ...
https://huggingface.co/blog/searching-the-hub
Supercharged Searching on the Hugging Face Hub
Zachary Mueller
January 25, 2022
The huggingface_hub library is a lightweight interface that provides a programmatic approach to exploring the hosting endpoints Hugging Face provides: models, datasets, and Spaces. Up until now, searching on the Hub through this interface was tricky to pull off, and there were many aspects of it a user had to "just know...
https://huggingface.co/blog/sb3
Welcome Stable-baselines3 to the Hugging Face Hub 🤗
Thomas Simonini
January 21, 2022
At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. That's why we're happy to announce that we integrated Stable-Baselines3 into the Hugging Face Hub. Stable-Baselines3 is one of the most popular PyTorch Deep Reinforcement Learning libraries, and it makes it easy t...
https://huggingface.co/blog/infinity-cpu-performance
Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs
Philipp Schmid, Jeff Boudier, Morgan Funtowicz
January 13, 2022
Inference Endpoints to easily deploy models on dedicated infrastructure managed by Hugging Face. Our open-source optimization libraries, 🤗 Optimum Intel and 🤗 Optimum ONNX Runtime, to get the highest efficiency out of training and running models for inference. Hugging Face Expert Acceleration Program, a commercial serv...
https://huggingface.co/blog/wav2vec2-with-ngram
Boosting Wav2Vec2 with n-grams in 🤗 Transformers
Patrick von Platen
January 12, 2022
Wav2Vec2 is a popular pre-trained model for speech recognition. Released in September 2020 by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, e.g. G. Ng et al., 2021; Chen et al., 2021; Hsu et al., 2021; and Babu et al., 2021. On the Hugging Face Hub, Wav2Vec2's ...
https://huggingface.co/blog/gptj-sagemaker
Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker
Philipp Schmid
January 11, 2022
Almost 6 months ago to the day, EleutherAI released GPT-J 6B, an open-source alternative to OpenAI's GPT-3. GPT-J 6B is the 6-billion-parameter successor to EleutherAI's GPT-Neo family, a family of transformer-based language models based on the GPT architecture for text generation. EleutherAI's primary goal is to train a ...
https://huggingface.co/blog/autonlp-prodigy
Active Learning with AutoNLP and Prodigy
Abhishek Thakur
December 23, 2021
Active learning in the context of Machine Learning is a process in which you iteratively add labeled data, retrain a model and serve it to the end user. It is an endless process and requires human interaction for labeling/creating the data. In this article, we will discuss how to use AutoNLP and Prodigy to build an act...
https://huggingface.co/blog/gradio-joins-hf
Gradio is joining Hugging Face!
Abubakar Abid
December 21, 2021
Gradio is joining Hugging Face!
https://huggingface.co/blog/perceiver
Perceiver IO: a scalable, fully-attentional model that works on any modality
Niels Rogge
December 15, 2021
We've added Perceiver IO to Transformers, the first Transformer-based neural network that works on all kinds of modalities (text, images, audio, video, point clouds, ...) and combinations thereof. Take a look at the following Spaces to view some examples: predicting optical flow between images, classifying images. We also p...
https://huggingface.co/blog/codeparrot
Training CodeParrot 🦜 from Scratch
Christo
December 8, 2021
In this blog post we'll take a look at what it takes to build the technology behind GitHub Copilot, an application that provides suggestions to programmers as they code. In this step-by-step guide, we'll learn how to train a large GPT-2 model called CodeParrot 🦜 entirely from scratch. CodeParrot can auto-complete you...
https://huggingface.co/blog/snowball-fight
Introducing Snowball Fight ☃️, our First ML-Agents Environment
Thomas Simonini
December 2, 2021
We're excited to share our first custom Deep Reinforcement Learning environment: Snowball Fight 1vs1 🎉. Snowball Fight is a game made with Unity ML-Agents, where you shoot snowballs against a Deep Reinforcement Learning agent. The game is hosted on Hugging Face Spaces. 👉 You can play it online here. In this post, we'll ...
https://huggingface.co/blog/graphcore-getting-started
Getting Started with Hugging Face Transformers for IPUs with Optimum
Tim Santos, Julien Simon
November 30, 2021
Transformer models have proven to be extremely efficient on a wide range of machine learning tasks, such as natural language processing, audio processing, and computer vision. However, the prediction speed of these large models can make them impractical for latency-sensitive use cases like conversational applications o...
https://huggingface.co/blog/data-measurements-tool
Introducing the 🤗 Data Measurements Tool: an Interactive Tool for Looking at Datasets
Sasha Luccioni, Yacine Jernite, Margaret Mitchell
November 29, 2021
tl;dr: We made a tool you can use online to build, measure, and compare datasets. Click to access the 🤗 Data Measurements Tool here. As developers of a fast-growing unified repository for Machine Learning datasets (Lhoest et al. 2021), the 🤗 Hugging Face team has been working on supporting good practices for dataset do...
https://huggingface.co/blog/accelerating-pytorch
Accelerating PyTorch distributed fine-tuning with Intel technologies
Julien Simon
November 19, 2021
For all their amazing performance, state-of-the-art deep learning models often take a long time to train. In order to speed up training jobs, engineering teams rely on distributed training, a divide-and-conquer technique where clustered servers each keep a copy of the model, train it on a subset of the training set, an...
https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers
Patrick von Platen
November 15, 2021
New (11/2021): This blog post has been updated to feature XLSR's successor, called XLS-R. Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after, the superior performance of Wav2Vec2 was demonstrated on one of the m...
https://huggingface.co/blog/bert-cpu-scaling-part-2
Scaling up BERT-like model Inference on modern CPU - Part 2
Ella Charlaix, Jeff Boudier, Morgan Funtowicz, Michael Benayoun
November 4, 2021
Introduction: Using Intel Software to Optimize AI Efficiency on CPU. As we detailed in our previous blog post, Intel Xeon CPUs provide a set of features especially designed for AI workloads, such as AVX-512 or VNNI (Vector Neural Network Instructions), for efficient inference using integer-quantized neural networks for infer...
https://huggingface.co/blog/course-launch-event
Course Launch Community Event
Sylvain Gugger
October 26, 2021
We are excited to share that after a lot of work from the Hugging Face team, part 2 of the Hugging Face Course will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task then upload the result to the Model Hub. Part 2 will focus on all the...
https://huggingface.co/blog/large-language-models
Large Language Models: A New Moore's Law?
Julien Simon
October 26, 2021
A few days ago, Microsoft and NVIDIA introduced Megatron-Turing NLG 530B, a Transformer-based model hailed as "the world's largest and most powerful generative language model." This is an impressive show of Machine Learning engineering, no doubt about it. Yet, should we be excited about this mega-model trend? I, for one...
https://huggingface.co/blog/1b-sentence-embeddings
Train a Sentence Embedding Model with 1 Billion Training Pairs
Antoine SIMOULIN
October 25, 2021
Sentence embedding is a method that maps sentences to vectors of real numbers. Ideally, these vectors would capture the semantics of a sentence and be highly generic. Such representations could then be used for many downstream applications, such as clustering, text mining, or question answering. We developed state-of-the-...
https://huggingface.co/blog/the-age-of-ml-as-code
The Age of Machine Learning As Code Has Arrived
Julien Simon
October 20, 2021
The 2021 edition of the State of AI Report came out last week. So did the Kaggle State of Machine Learning and Data Science Survey. There's much to be learned and discussed in these reports, and a couple of takeaways caught my attention. "AI is increasingly being applied to mission critical infrastructure like national ...
https://huggingface.co/blog/fine-tune-clip-rsicd
Fine tuning CLIP with Remote Sensing (Satellite) images and captions
Arto, Dev Vidhani, Goutham, Mayank Bhaskar, Sujit Pal
October 13, 2021
In July this year, Hugging Face organized a Flax/JAX Community Week and invited the community to submit projects to train Hugging Face transformers models in the areas of Natural Language Processing (NLP) and Computer Vision (CV). Participants used Ten...
https://huggingface.co/blog/streamlit-spaces
Hosting your Models and Datasets on Hugging Face Spaces using Streamlit
Merve Noyan
October 5, 2021
Showcase your Datasets and Models using Streamlit on Hugging Face Spaces. Streamlit allows you to visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you through hosting models and datasets and serving your Streamlit applications in Hugging Face Spaces. Building dem...
https://huggingface.co/blog/gradio-spaces
Showcase Your Projects in Spaces using Gradio
Merve Noyan
October 5, 2021
It's so easy to demonstrate a Machine Learning project thanks to Gradio. In this blog post, we'll walk you through: the recent Gradio integration that helps you demo models from the Hub seamlessly with a few lines of code leveraging the Inference API, and how to use Hugging Face Spaces to host demos of your own models. Hugging ...
https://huggingface.co/blog/summer-at-huggingface
Summer At Hugging Face 😎
Hugging Face
September 24, 2021
Summer is now officially over and these last few months have been quite busy at Hugging Face. From new features in the Hub to research and Open Source development, our team has been working hard to empower the community through open and collaborative technology. In this blog post you'll catch up on everything that happ...
https://huggingface.co/blog/graphcore
Hugging Face and Graphcore partner for IPU-optimized Transformers
Sally Doherty
September 14, 2021
Graphcore and Hugging Face are two companies with a common goal – to make it easier for innovators to harness the power of machine intelligence. Hugging Face’s Hardware Partner Program will allow developers using Graphcore systems to deploy state-of-the-art Transformer models, optimised for our Intelligence Processing ...
https://huggingface.co/blog/hardware-partners-program
Introducing 🤗 Optimum: The Optimization Toolkit for Transformers at Scale
Morgan Funtowicz, Ella Charlaix, Michael Benayoun, Jeff Boudier
September 14, 2021
This post is the first step of a journey for Hugging Face to democratize state-of-the-art Machine Learning production performance. To get there, we will work hand in hand with our Hardware Partners, as we have with Intel below. Join us in this journey, and follow Optimum, our new open-source library! Why 🤗 Optimum? 🤯 Scali...
https://huggingface.co/blog/collaborative-training
Deep Learning over the Internet: Training Language Models Collaboratively
Max Ryabinin, Lucile Saulnier
July 15, 2021
Modern language models often require a significant amount of compute for pretraining, making it impossible to obtain them without access to tens or hundreds of GPUs or TPUs. Though in theory it might be possible to combine the resources of multiple individuals, in practice, such distributed training methods have previ...
https://huggingface.co/blog/spacy
Welcome spaCy to the Hugging Face Hub
Omar Sanseviero, Ines Montani
July 13, 2021
spaCy is a popular library for advanced Natural Language Processing used widely across industry. spaCy makes it easy to use and train pipelines for tasks like named entity recognition, text classification, part of speech tagging and more, and lets you build powerful applications to process and analyze large volumes of ...
https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker
Deploy Hugging Face models easily with Amazon SageMaker 🏎
No authors found
July 8, 2021
Earlier this year we announced a strategic collaboration with Amazon to make it easier for companies to use Hugging Face in Amazon SageMaker, and ship cutting-edge Machine Learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to train Hugging Face Transformer models in Amazon SageMake...
https://huggingface.co/blog/sentence-transformers-in-the-hub
Sentence Transformers in the Hugging Face Hub
Omar Sanseviero, Nils Reimers
June 28, 2021
Over the past few weeks, we've built collaborations with many open-source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers. Sentence Transformers is a framework for sentence, paragraph, and image embeddings. This allows you to derive semantically meaningful embeddin...
https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api
Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API
Philipp Schmid
June 3, 2021
In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model. The latest developments in NLP show that you can overcome this limitation by providing a few examples at inference time with a large language model - a technique known as Few-Shot Learning. In ...
https://huggingface.co/blog/gradio
Using & Mixing Hugging Face Models with Gradio 2.0
Abubakar Abid
May 25, 2021
Using & Mixing Hugging Face Models with Gradio 2.0
https://huggingface.co/blog/bert-cpu-scaling-part-1
Scaling up BERT-like model Inference on modern CPU - Part 1
Morgan Funtowicz
April 20, 2021
1. Context and Motivations: Back in October 2019, my colleague Lysandre Debut published a comprehensive (at the time) inference performance benchmarking blog (1). Since then, 🤗 transformers (2) welcomed a tremendous number of new architectures, and thousands of new models were added to the 🤗 hub (3), which now counts more t...
https://huggingface.co/blog/accelerate-library
Introducing 🤗 Accelerate
Sylvain Gugger
April 16, 2021
🤗 Accelerate: Run your raw PyTorch training scripts on any kind of device. Most high-level libraries above PyTorch provide support for distributed training and mixed precision, but the abstractions they introduce require a user to learn a new API if they want to customize the underlying training loop. 🤗 Accelerate was cr...
https://huggingface.co/blog/sagemaker-distributed-training-seq2seq
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker
Philipp Schmid
April 8, 2021
In case you missed it: on March 25th we announced a collaboration with Amazon SageMaker to make it easier to create State-of-the-Art Machine Learning models, and ship cutting-edge NLP features faster. Together with the SageMaker team, we built 🤗 Transformers optimized Deep Learning Containers to accelerate training of...
https://huggingface.co/blog/big-bird
Understanding BigBird's Block Sparse Attention
Vasudev Gupta
March 31, 2021
Introduction: Transformer-based models have proven to be very useful for many NLP tasks. However, a major limitation of transformer-based models is their O(n^2) time & memory complexity (where n is the sequence length). Hence, it's computationally very expensive to apply transformer-based models on long sequences n...
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
The Partnership: Amazon SageMaker and Hugging Face
No authors found
March 23, 2021
Look at these smiles! Today, we announce a strategic partnership between Hugging Face and Amazon to make it easier for companies to leverage state-of-the-art Machine Learning models and ship cutting-edge NLP features faster. Through this partnership, Hugging Face is leveraging Amazon Web Services as its Preferred Cloud ...
https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds
My Journey to a serverless transformers pipeline on Google Cloud
Dominici
March 18, 2021
This article will discuss my journey to deploy the transformers sentiment-analysis pipeline on Google Cloud. We will start with a quick introduction to transformers and then move to the technical part of the implementation. Finally, we'll summarize this implementation and review what we have achieved. The Goal: I wanted t...
https://huggingface.co/blog/fine-tune-wav2vec2-english
Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers
Patrick von Platen
March 12, 2021
Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Using a novel contrastive pretraining objective, Wav2Vec2 learns powerful speech representations from more than 50,000 hours of unlabeled speech. Similar to BERT's maske...
https://huggingface.co/blog/long-range-transformers
Hugging Face Reads, Feb. 2021 - Long-range Transformers
Victor Sanh
March 9, 2021
Co-written by Teven Le Scao, Patrick Von Platen, Suraj Patil, Yacine Jernite and Victor Sanh. Each month, we will choose a topic to focus on, reading a set of four papers recently published on the subject. We will then write a short blog post summarizing their findings and the common trends between them, and questions w...
https://huggingface.co/blog/simple-considerations
🚧 Simple considerations for simple people building fancy neural networks
Victor Sanh
February 25, 2021
Photo by Henry & Co. on Unsplash. As machine learning continues to penetrate all aspects of the industry, neural networks have never been so hyped. For instance, models like GPT-3 have been all over social media in the past few weeks and continue to make headlines outside of tech news outlets with fear-mongering titles. An...
https://huggingface.co/blog/ray-rag
Retrieval Augmented Generation with Huggingface Transformers and Ray
Ray Project (Anyscale)
February 10, 2021
Hugging Face Transformers recently added the Retrieval Augmented Generation (RAG) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state-of-the-art results on knowledge-intensive tasks. In this blog post, we introduce the integration of Ray, a library ...
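The retrieve-then-generate idea behind RAG can be sketched with a toy lexical retriever and a stub generator (real RAG uses a dense DPR retriever and a BART-style seq2seq generator; everything below is illustrative):

```python
def retrieve(query, docs, k=1):
    """Score documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(query, context):
    """Stand-in for the seq2seq generator: echo the retrieved evidence."""
    return f"Q: {query} | evidence: {context}"

docs = [
    "Paris is the capital of France",
    "The transformer architecture uses self-attention",
]
best = retrieve("capital of France", docs)[0]
print(generate("capital of France", best))
```

The point of the architecture is that the generator conditions on retrieved evidence rather than relying solely on its parameters.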
https://huggingface.co/blog/pytorch-xla
Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training
Daniel JinYoung Sohn, Lysandre
February 9, 2021
Training Your Favorite Transformers on Cloud TPUs using PyTorch / XLA. The PyTorch-TPU project originated as a collaborative effort between the Facebook PyTorch and Google TPU teams and officially launched at the 2019 PyTorch Developer Conference. Since then, we’ve worked with the Hugging Face team to bring first-cl...
https://huggingface.co/blog/tf-serving
Faster TensorFlow models in Hugging Face Transformers
Julien Plu
January 26, 2021
In the last few months, the Hugging Face team has been working hard on improving Transformers’ TensorFlow models to make them more robust and faster. The recent improvements are mainly focused on two aspects. Computational performance: BERT, RoBERTa, ELECTRA and MPNet have been improved in order to have a much faster co...
https://huggingface.co/blog/zero-deepspeed-fairscale
Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
Stas Bekman
January 19, 2021
A guest blog post by Hugging Face fellow Stas Bekman. As recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train or even just load some of those huge models onto their hardware. While there is an ongoing effort to distill som...
https://huggingface.co/blog/accelerated-inference
How we sped up transformer inference 100x for 🤗 API customers
No authors found
January 18, 2021
🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich playground, easily accessible whichever framework you are working in...
https://huggingface.co/blog/ray-tune
Hyperparameter Search with Transformers and Ray Tune
Ray Project (Anyscale)
November 2, 2020
With cutting-edge research implementations and thousands of trained models easily accessible, the Hugging Face transformers library has become critical to the success and growth of natural language processing today. For any machine learning model to achieve good performance, users often need to implement some form of param...
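The parameter search the teaser refers to can be sketched without Ray as a plain random-search loop; Ray Tune layers distributed trials, schedulers such as ASHA and PBT, and early stopping on top of this pattern. The search space and objective below are made up:

```python
import random

# The objective is a stand-in for a training run's validation score:
# it peaks at lr = 3e-5 and batch_size = 32.
def objective(lr, batch_size):
    return -(lr - 3e-5) ** 2 * 1e8 - 0.001 * abs(batch_size - 32)

random.seed(0)
space = {
    "lr": lambda: 10 ** random.uniform(-6, -4),           # log-uniform learning rate
    "batch_size": lambda: random.choice([8, 16, 32, 64]),
}

# Sample 20 configurations and keep the one with the best objective.
trials = [{name: sample() for name, sample in space.items()} for _ in range(20)]
best = max(trials, key=lambda cfg: objective(**cfg))
print(best)
```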
https://huggingface.co/blog/pytorch_block_sparse
Block Sparse Matrices for Smaller and Faster Language Models
François Lagunas
September 10, 2020
Saving space and time, one zero at a time. In previous blog posts we introduced sparse matrices and what they can do to improve neural networks. The basic assumption is that fully dense layers are often overkill and can be pruned without a significant loss in precision. In some cases sparse linear layers can even improve ...
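The storage scheme this kind of library builds on can be sketched as a dict of dense blocks: only nonzero blocks are kept, so memory scales with the number of surviving blocks rather than with the full n × n layer. This is a toy illustration, unrelated to the library's actual CUDA kernels:

```python
B = 2  # block size

def block_sparse_matvec(blocks, n, x):
    """Multiply an n x n block sparse matrix by vector x.

    `blocks` maps (block_row, block_col) -> a dense B x B block;
    absent blocks are implicitly zero and cost nothing.
    """
    y = [0.0] * n
    for (br, bc), block in blocks.items():
        for i in range(B):
            for j in range(B):
                y[br * B + i] += block[i][j] * x[bc * B + j]
    return y

# 4x4 matrix with two nonzero 2x2 blocks on the diagonal (50% density here;
# pruned layers in practice keep far fewer blocks).
blocks = {(0, 0): [[1, 2], [3, 4]],
          (1, 1): [[5, 6], [7, 8]]}
print(block_sparse_matvec(blocks, 4, [1, 1, 1, 1]))  # [3.0, 7.0, 11.0, 15.0]
```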
https://huggingface.co/blog/how-to-generate
How to generate text: using different decoding methods for language generation with Transformers
Patrick von Platen
March 1, 2020
Note: edited in July 2023 with up-to-date references and examples. Introduction: In recent years, there has been increasing interest in open-ended language generation thanks to the rise of large transformer-based language models trained on millions of webpages, including OpenAI's ChatGPT and Meta's LLaMA. The results on co...
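The decoding methods the post compares can be illustrated on a toy next-token distribution. With transformers the same choice is made via `model.generate(do_sample=..., top_k=...)`; the distribution below is made up:

```python
import random

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}

def greedy(dist):
    """Greedy search: always pick the most probable token (deterministic)."""
    return max(dist, key=dist.get)

def top_k_sample(dist, k=2):
    """Top-k sampling: sample from the k most probable tokens, re-weighted."""
    top = sorted(dist, key=dist.get, reverse=True)[:k]
    return random.choices(top, weights=[dist[t] for t in top])[0]

print(greedy(probs))        # 'the'
print(top_k_sample(probs))  # 'the' or 'a', chosen at random
```

Greedy search is repeatable but tends toward repetitive text; sampling trades determinism for diversity, which is the tension the post explores.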
https://huggingface.co/blog/how-to-train
How to train a new language model from scratch using Transformers and Tokenizers
Julien Chaumond
February 14, 2020
Over the past few months, we made several improvements to our transformers and tokenizers libraries, with the goal of making it easier than ever to train a new language model from scratch.In this post we’ll demo how to train a “small” model (84 M parameters = 6 layers, 768 hidden size, 12 attention heads) – that’s the ...
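The quoted 84M figure can be sanity-checked with a back-of-the-envelope weight count, assuming a ~52k byte-level BPE vocabulary and RoBERTa-style 514 positions as in the post's setup, and ignoring biases and LayerNorm parameters (which account for the small gap to 84M):

```python
hidden, layers, vocab, max_pos, ffn = 768, 6, 52_000, 514, 3072

embeddings = vocab * hidden + max_pos * hidden        # token + position tables
per_layer = 4 * hidden * hidden + 2 * hidden * ffn    # Q,K,V,O projections + FFN
total = embeddings + layers * per_layer

print(f"{total / 1e6:.1f}M parameters")  # 82.8M parameters
```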