id | status | inserted_at | updated_at | _server_id | title | authors | filename | content | content_class.responses | content_class.responses.users | content_class.responses.status | content_class.suggestion | content_class.suggestion.agent | content_class.suggestion.score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
aa6cd850-deb8-434a-8e48-3b9b83f59850 | completed | 2025-01-16T03:08:37.719373 | 2025-01-16T13:36:03.943863 | 04931499-a195-4dbe-8e88-3615fb461334 | Data is better together: Enabling communities to collectively build better datasets together using Argilla and Hugging Face Spaces | davanstrien, dvilasuero | community-datasets.md | Recently, Argilla and Hugging Face [launched](https://huggingface.co/posts/dvilasuero/680660181190026) `Data is Better Together`, an experiment to collectively build a preference dataset of prompt rankings. In a few days, we had:<br>- 350 community contributors labeling data<br>- Over 11,000 prompt ratings<br>See the [progre... | [["llm", "data", "community", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["data", "community", "tools", "llm"] | null | null |
3d7d7a2d-491b-449f-ba3b-510a45e1ead4 | completed | 2025-01-16T03:08:37.719391 | 2025-01-19T19:00:17.290954 | fdfa8e88-1b3f-43c9-905a-510602a63ee3 | A Security Review of Gradio 5 | abidlabs, pngwn | gradio-5-security.md | **We audited Gradio 5 so that your machine learning apps are safe!**<br>In the past few years, [Gradio](https://github.com/gradio-app/gradio/) (>6 million monthly Pypi installs) has become the default way to build machine learning web applications in Python. In just a few lines of code, you can create a user interface fo... | [["mlops", "implementation", "security", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["security", "tools", "implementation", "mlops"] | null | null |
dc3ec0f4-c053-491d-8c35-0938492e1238 | completed | 2025-01-16T03:08:37.719401 | 2025-01-19T17:14:34.129868 | 078c94d6-25c8-47bc-9402-90bbea13d14d | Showcase Your Projects in Spaces using Gradio | merve | gradio-spaces.md | It's so easy to demonstrate a Machine Learning project thanks to [Gradio](https://gradio.app/).<br>In this blog post, we'll walk you through:<br>- the recent Gradio integration that helps you demo models from the Hub seamlessly with few lines of code leveraging the [Inference API](https://huggingface.co/inference-api).<br>- h... | [["mlops", "implementation", "tools", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["mlops", "implementation", "tools", "integration"] | null | null |
aa30786c-27c9-4929-9e95-5c2516aed772 | completed | 2025-01-16T03:08:37.719411 | 2025-01-19T18:49:32.224478 | 80f1fa1e-c44c-432b-96e3-e313679d4c1a | Introducing smolagents: simple agents that write actions in code. | m-ric, merve, thomwolf | smolagents.md | Today we are launching [`smolagents`](https://github.com/huggingface/smolagents), a very simple library that unlocks agentic capabilities for language models. Here’s a glimpse:<br>```python<br>from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel<br>agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiMod... | [["llm", "implementation", "tools", "text_generation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "implementation", "tools", "text_generation"] | null | null |
df2462d0-e003-4f15-ac32-7363e169e427 | completed | 2025-01-16T03:08:37.719420 | 2025-01-16T03:17:50.594906 | 07dece9f-a414-48df-8173-23243786b9cd | MTEB: Massive Text Embedding Benchmark | Muennighoff | mteb.md | MTEB is a massive benchmark for measuring the performance of text embedding models on diverse embedding tasks.<br>The 🥇 [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) provides a holistic view of the best text embedding models out there on a variety of tasks.<br>The 📝 [paper](https://arxiv.org/abs/2210.073... | [["data", "research", "benchmarks", "tools", "text_classification"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["benchmarks", "research", "tools", "data"] | null | null |
f01bfc90-3615-45c6-a448-debd0ddd13d1 | completed | 2025-01-16T03:08:37.719429 | 2025-01-16T03:19:26.902694 | 510bfb44-c7a6-4eea-9b34-c0a929d2d0e7 | Porting fairseq wmt19 translation system to transformers | stas | porting-fsmt.md | ##### A guest blog post by Stas Bekman<br>This article is an attempt to document how [fairseq wmt19 translation system](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) was ported to [`transformers`](https://github.com/huggingface/transformers/).<br>I was looking for some interesting project to work on and [... | [["transformers", "research", "implementation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "translation", "implementation", "research"] | null | null |
a31d084d-090e-4d29-a190-2c087869171a | completed | 2025-01-16T03:08:37.719439 | 2025-01-19T18:47:44.828763 | 0e7993a0-8558-44d2-af5f-b858e6aff2cd | Introducing the Open Ko-LLM Leaderboard: Leading the Korean LLM Evaluation Ecosystem | Chanjun, hunkim, clefourrier | leaderboard-upstage.md | In the fast-evolving landscape of Large Language Models (LLMs), building an “ecosystem” has never been more important. This trend is evident in several major developments like Hugging Face's democratizing NLP and Upstage building a Generative AI ecosystem.<br>Inspired by these industry milestones, in September of 2023, a... | [["llm", "research", "benchmarks", "community"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "benchmarks", "community", "research"] | null | null |
512bb096-2538-4be8-8ebd-8866cd1bc14c | completed | 2025-01-16T03:08:37.719448 | 2025-01-19T19:13:54.373112 | db443612-33f7-4ad6-8684-01c4413a97a0 | Deploying 🤗 ViT on Kubernetes with TF Serving | chansung, sayakpaul | deploy-tfserving-kubernetes.md | In the [<u>previous post</u>](https://huggingface.co/blog/tf-serving-vision), we showed how<br>to deploy a [<u>Vision Transformer (ViT)</u>](https://huggingface.co/docs/transformers/main/en/model_doc/vit)<br>model from 🤗 Transformers locally with TensorFlow Serving. We covered<br>topics like embedding preprocessing and postpro... | [["computer_vision", "transformers", "mlops", "tutorial", "deployment"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "transformers", "mlops", "deployment"] | null | null |
c5f128b3-f370-4984-89cd-132b753a94b3 | completed | 2025-01-16T03:08:37.719457 | 2025-01-16T03:17:15.373299 | 4caf7254-0df2-4acd-8ff2-b335e3c7d9bd | AMD + 🤗: Large Language Models Out-of-the-Box Acceleration with AMD GPU | fxmarty, IlyasMoutawwakil, mohitsha, echarlaix, seungrokj, mfuntowicz | huggingface-and-optimum-amd.md | Earlier this year, [AMD and Hugging Face announced a partnership](https://huggingface.co/blog/huggingface-and-amd) to accelerate AI models during the AMD's AI Day event. We have been hard at work to bring this vision to reality, and make it easy for the Hugging Face community to run the latest AI models on AMD hardwar... | [["llm", "implementation", "optimization", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "optimization", "implementation", "integration"] | null | null |
5fbe5aae-7a41-4b61-9506-ae7e8bdb9836 | completed | 2025-01-16T03:08:37.719467 | 2025-01-16T03:13:57.062828 | 3a503229-03f0-4c5f-abd9-9f62f7613473 | Fine-Tune a Semantic Segmentation Model with a Custom Dataset | tobiasc, nielsr | fine-tune-segformer.md | <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script><br><a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb"><br><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="... | [["computer_vision", "transformers", "tutorial", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "transformers", "fine_tuning", "tutorial"] | null | null |
87f38fed-f820-4344-bd87-a019413f8662 | completed | 2025-01-16T03:08:37.719476 | 2025-01-19T18:52:58.126948 | 4cac3387-3005-45bd-a1fb-d605ab09f600 | Accelerating Document AI | rajistics, nielsr, florentgbelidji, nbroad | document-ai.md | Enterprises are full of documents containing knowledge that isn't accessible by digital workflows. These documents can vary from letters, invoices, forms, reports, to receipts. With the improvements in text, vision, and multimodal AI, it's now possible to unlock that information. This post shows you how your teams can ... | [["computer_vision", "implementation", "multi_modal"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "multi_modal", "implementation", "tutorial"] | null | null |
7129deb4-9c64-4b1e-a27b-71a789ce3cd4 | completed | 2025-01-16T03:08:37.719485 | 2025-01-19T18:59:13.437678 | 36285803-8548-4393-a819-fc9b45ce933f | Overview of natively supported quantization schemes in 🤗 Transformers | ybelkada, marcsun13, IlyasMoutawwakil, clefourrier, fxmarty | overview-quantization-transformers.md | We aim to give a clear overview of the pros and cons of each quantization scheme supported in transformers to help you decide which one you should go for.<br>Currently, quantizing models are used for two main purposes:<br>- Running inference of a large model on a smaller device<br>- Fine-tune adapters on top of quantized mode... | [["transformers", "implementation", "optimization", "quantization"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "quantization", "optimization", "implementation"] | null | null |
05615c67-233e-4acf-92c4-5a3564376aad | completed | 2025-01-16T03:08:37.719494 | 2025-01-16T13:34:39.854827 | 8607bfc3-dbe2-46e0-9570-b0e8ff2fff70 | How to train your model dynamically using adversarial data | chrisjay | mnist-adversarial.md | ##### What you will learn here<br>- 💡the basic idea of dynamic adversarial data collection and why it is important.<br>- ⚒ how to collect adversarial data dynamically and train your model on them - using an MNIST handwritten digit recognition task as an example.<br>## Dynamic adversarial data collection (DADC)<br>Static benchm... | [["data", "research", "benchmarks", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["data", "research", "benchmarks", "tutorial"] | null | null |
7a3744a5-a39a-448d-8507-2cd0993c514c | completed | 2025-01-16T03:08:37.719504 | 2025-01-19T19:15:04.653536 | 219ed138-a525-4b47-a5cb-445983ff4c8b | Benchmarking Language Model Performance on 5th Gen Xeon at GCP | MatrixYao, kding1, IlyasMoutawwakil | intel-gcp-c4.md | **TL;DR**: We benchmark 2 representative agentic AI workload components, text embedding and text generation, on two Google Cloud Compute Engine Xeon-based CPU instances, namely N2 and C4. The results consistently shows that C4 has 10x to 24x higher throughput over N2 in text embedding and 2.3x to 3.6x higher throughput... | [["llm", "benchmarks", "tutorial", "optimization", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "benchmarks", "efficient_computing", "optimization"] | null | null |
94f7ae57-3f85-49ab-8018-5d255c2fce7d | completed | 2025-01-16T03:08:37.719513 | 2025-01-19T18:58:06.322018 | d489ba82-5619-48e0-8cd4-38d90790fa06 | StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation | yuxiang630, cassanof, ganler, YifengDing, StringChaos, harmdevries, lvwerra, arjunguha, lingming | sc2-instruct.md | <div class="flex items-center justify-center"><br><img src="https://huggingface.co/datasets/bigcode/starcoder2-instruct-assets/resolve/main/banner.png" alt="StarCoder2-Instruct"><br></div><br>*Instruction tuning* is an approach of fine-tuning that gives large language models (LLMs) the capability to follow natural and human-wr... | [["llm", "research", "text_generation", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "fine_tuning", "research", "text_generation"] | null | null |
c27eb3e0-5c31-428f-8da1-d0985c40d1a7 | completed | 2025-01-16T03:08:37.719522 | 2025-01-19T18:48:19.245166 | e1e72397-a792-4aaf-9b8a-dff460aeab9c | SetFit: Efficient Few-Shot Learning Without Prompts | Unso, lewtun, luketheduke, danielkorat, orenpereg, moshew | setfit.md | <p align="center"><br><img src="assets/103_setfit/setfit_curves.png" width=500><br></p><br><p align="center"><br><em>SetFit is significantly more sample efficient and robust to noise than standard fine-tuning.</em><br></p><br>Few-shot learning with pretrained language models has emerged as a promising solution to every data sci... | [["transformers", "research", "text_classification", "fine_tuning", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "text_classification", "research", "efficient_computing"] | null | null |
da10f8a8-1972-412e-a46f-19d41eeb20ef | completed | 2025-01-16T03:08:37.719532 | 2025-01-16T15:16:51.433096 | e7f3ad6b-67de-4237-ae8a-f44a8615b3d7 | Red-Teaming Large Language Models | nazneen, natolambert, lewtun | red-teaming.md | *Warning: This article is about red-teaming and as such contains examples of model generation that may be offensive or upsetting.*<br>Large language models (LLMs) trained on an enormous amount of text data are very good at generating realistic text. However, these models often exhibit undesirable behaviors like revealing... | [["llm", "research", "security", "text_generation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "security", "research", "text_generation"] | null | null |
78ff8ca7-1c0e-4736-a37a-1820c100bc6e | completed | 2025-01-16T03:08:37.719542 | 2025-01-19T19:08:46.765032 | 06314b14-c078-481f-abe3-50149c62ea63 | Launching the Artificial Analysis Text to Image Leaderboard & Arena | mhillsmith, georgewritescode | leaderboard-artificial-analysis2.md | In two short years since the advent of diffusion-based image generators, AI image models have achieved near-photographic quality. How do these models compare? Are the open-source alternatives on par with their proprietary counterparts?<br>The Artificial Analysis Text to Image Leaderboard aims to answer these questions w... | [["computer_vision", "benchmarks", "tools", "image_generation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "benchmarks", "image_generation", "tools"] | null | null |
805c7d2f-cfb0-4429-9c99-e3daf6c9c143 | completed | 2025-01-16T03:08:37.719551 | 2025-01-16T03:23:57.949851 | f2a64cac-aa6c-48ac-b1e5-f40a02b89434 | SmolVLM - small yet mighty Vision Language Model | andito, merve, mfarre, eliebak, pcuenq | smolvlm.md | This blog post introduces SmolVLM, a 2B VLM, SOTA for its memory footprint. SmolVLM is small, fast, memory-efficient, and fully open-source. All model checkpoints, VLM datasets, training recipes and tools are released under the Apache 2.0 license.<br><img src="https://huggingface.co/datasets/huggingface/documentation-ima... | [["computer_vision", "research", "tools", "multi_modal", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["multi_modal", "efficient_computing", "research", "tools"] | null | null |
d57d1b89-d9ab-4e18-b36c-6a457434241c | completed | 2025-01-16T03:08:37.719560 | 2025-01-16T15:09:56.319907 | 93659e94-a293-4d04-a91d-86d4bc63df47 | Gradio-Lite: Serverless Gradio Running Entirely in Your Browser | abidlabs, whitphx, aliabd | gradio-lite.md | Gradio is a popular Python library for creating interactive machine learning apps. Traditionally, Gradio applications have relied on server-side infrastructure to run, which can be a hurdle for developers who need to host their applications.<br>Enter Gradio-lite (`@gradio/lite`): a library that leverages [Pyodide](https... | [["implementation", "deployment", "tools", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["tools", "implementation", "efficient_computing", "deployment"] | null | null |
fccc1a19-b7e3-4420-b09d-a9f39cddcbb7 | completed | 2025-01-16T03:08:37.719569 | 2025-01-16T15:08:50.461041 | 50798689-45b8-44f9-9e31-b02f1b507a48 | Argilla 2.4: Easily Build Fine-Tuning and Evaluation Datasets on the Hub — No Code Required | nataliaElv, burtenshaw, dvilasuero | argilla-ui-hub.md | We are incredibly excited to share the most impactful feature since Argilla joined Hugging Face: you can prepare your AI datasets without any code, getting started from any Hub dataset! Using Argilla’s UI, you can easily import a dataset from the Hugging Face Hub, define questions, and start collecting human feedback.<br>... | [["data", "community", "tools", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["data", "tools", "community", "fine_tuning"] | null | null |
e93965d0-900d-4e53-998a-6a087433bc7a | completed | 2025-01-16T03:08:37.719578 | 2025-01-19T17:14:41.563412 | bbf48bba-9478-4a7f-8146-344ded22628e | Introducing Agents.js: Give tools to your LLMs using JavaScript | nsarrazin | agents-js.md | We have recently been working on Agents.js at [huggingface.js](https://github.com/huggingface/huggingface.js/blob/main/packages/agents/README.md). It's a new library for giving tool access to LLMs from JavaScript in either the browser or the server. It ships with a few multi-modal tools out of the box and can easily be... | [["llm", "implementation", "tools", "multi_modal"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "implementation", "tools", "multi_modal"] | null | null |
5548482b-f6fd-41f2-9f28-965b1e227158 | completed | 2025-01-16T03:08:37.719587 | 2025-01-16T03:22:45.284464 | 1f83b555-b07f-4a8b-87ae-fa6fd2e5fb80 | Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny | harishsegmind, Warlord-K, Gothos | sd_distillation.md | <p align="center"><br><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/distill_sd/Picture1.png" width=500><br></p><br>In recent times, the AI community has witnessed a remarkable surge in the development of larger and more performant language models, such as Falcon 40B, LLaMa-2... | [["implementation", "optimization", "image_generation", "quantization"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["image_generation", "optimization", "quantization", "implementation"] | null | null |
c204bd44-ab46-47b0-836d-e9fba9b482af | completed | 2025-01-16T03:08:37.719597 | 2025-01-16T15:15:25.912586 | 03a15a39-750e-424d-b306-b9a8bde1db16 | Deploy models on AWS Inferentia2 from Hugging Face | jeffboudier, philschmid | inferentia-inference-endpoints.md | <br>[AWS Inferentia2](https://aws.amazon.com/machine-learning/inferentia/) is the latest AWS machine learning chip available through the [Amazon EC2 Inf2 instances](https://aws.amazon.com/ec2/instance-types/inf2/) on Amazon Web Services. Designed fro... | [["mlops", "optimization", "deployment", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["mlops", "deployment", "optimization", "integration"] | null | null |
4cc9c50a-feca-45df-806f-a3502c1077e6 | completed | 2025-01-16T03:08:37.719606 | 2025-01-19T19:00:22.785977 | bc194ec3-774e-430f-9fc4-f399ca1d417c | Training Stable Diffusion with Dreambooth using Diffusers | valhalla, pcuenq, 9of9 | dreambooth.md | [Dreambooth](https://dreambooth.github.io/) is a technique to teach new concepts to [Stable Diffusion](https://huggingface.co/blog/stable_diffusion) using a specialized form of fine-tuning. Some people have been using it with a few of their photos to place themselves in fantastic situations, while others are using it t... | [["implementation", "tutorial", "image_generation", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["image_generation", "fine_tuning", "implementation", "tutorial"] | null | null |
0418d658-0c56-4f81-8541-9f155c22b193 | completed | 2025-01-16T03:08:37.719616 | 2025-01-16T03:10:18.709750 | 05ac64b3-d626-4546-acd7-0f1edd2d49a3 | Speech Synthesis, Recognition, and More With SpeechT5 | Matthijs | speecht5.md | We’re happy to announce that SpeechT5 is now available in 🤗 Transformers, an open-source library that offers easy-to-use implementations of state-of-the-art machine learning models.<br>SpeechT5 was originally described in the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](htt... | [["audio", "transformers", "research", "implementation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["audio", "transformers", "research", "implementation"] | null | null |
31b6a75e-2674-4110-b902-c4c69f425c60 | completed | 2025-01-16T03:08:37.719625 | 2025-01-19T17:20:31.382606 | f66ec821-960a-409b-9387-57f653411964 | Practical 3D Asset Generation: A Step-by-Step Guide | dylanebert | 3d-assets.md | ## Introduction<br>Generative AI has become an instrumental part of artistic workflows for game development. However, as detailed in my [earlier post](https://huggingface.co/blog/ml-for-games-3), text-to-3D lags behind 2D in terms of practical applicability. This is beginning to change. Today, we'll be revisiting practic... | [["implementation", "tutorial", "tools", "image_generation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["image_generation", "implementation", "tutorial", "tools"] | null | null |
54c0d208-00fa-4434-94a2-519b2b0545ae | completed | 2025-01-16T03:08:37.719634 | 2025-01-19T19:11:49.470478 | da7c112c-24c9-4b08-b14b-32192618a700 | Introducing the Red-Teaming Resistance Leaderboard | steve-sli, richard2, leonardtang, clefourrier | leaderboard-haizelab.md | **Content warning**: since this blog post is about a red-teaming leaderboard (testing elicitation of harmful behavior in LLMs), some users might find the content of the related datasets or examples unsettling.<br>LLM research is moving fast. Indeed, some might say too fast.<br>While researchers in the field continue to rap... | [["llm", "research", "benchmarks", "security"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "security", "research", "benchmarks"] | null | null |
b02b2f16-4663-4c40-a341-fec41c1dc04b | completed | 2025-01-16T03:08:37.719643 | 2025-01-16T03:21:16.866374 | 547e0a54-15ff-4736-8295-aa8058d4cda6 | Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval | aamirshakir, tomaarsen, SeanLee97 | embedding-quantization.md | We introduce the concept of embedding quantization and showcase their impact on retrieval speed, memory usage, disk space, and cost. We'll discuss how embeddings can be quantized in theory and in practice, after which we introduce a [demo](https://huggingface.co/spaces/sentence-transformers/quantized-retrieval) showing... | [["research", "implementation", "quantization", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["quantization", "efficient_computing", "implementation", "research"] | null | null |
268f06da-8199-41e3-aeae-864d6169aac0 | completed | 2025-01-16T03:08:37.719652 | 2025-01-16T15:10:45.535365 | b6df100f-d842-4178-a049-f85726a7a09a | Training a language model with 🤗 Transformers using TensorFlow and TPUs | rocketknight1, sayakpaul | tf_tpu.md | ## Introduction<br>TPU training is a useful skill to have: TPU pods are high-performance and extremely scalable, making it easy to train models at any scale from a few tens of millions of parameters up to truly enormous sizes: Google’s PaLM model (over 500 billion parameters!) was trained entirely on TPU pods.<br>We’ve pr... | [["llm", "transformers", "implementation", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "transformers", "tutorial", "implementation"] | null | null |
366a9b36-03ab-414b-94ca-65cb86844fda | completed | 2025-01-16T03:08:37.719661 | 2025-01-19T19:05:01.757137 | 8da25a72-68ac-4fd7-8924-525c92d612fc | Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face | mhillsmith, georgewritescode, clefourrier | leaderboard-artificial-analysis.md | Building applications with LLMs requires considering more than just quality: for many use-cases, speed and price are equally or more important.<br>For consumer applications and chat experiences, speed and responsiveness are critical to user engagement. Users expect near-instant responses, and delays can directly lead to... | [["llm", "mlops", "benchmarks", "optimization"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "benchmarks", "mlops", "optimization"] | null | null |
beb22f90-e6ee-4e43-b3fc-51fea1bc02c4 | completed | 2025-01-16T03:08:37.719670 | 2025-01-16T13:39:18.060326 | 53dd658e-e900-4606-b0ec-46b2236f0221 | Introducing the Open Chain of Thought Leaderboard | ggbetz, scacean, clefourrier, yakazimir | leaderboard-cot.md | [Chain-of-thought prompting](https://huggingface.co/docs/transformers/main/en/tasks/prompting#chain-of-thought) is emerging as a powerful and effective design pattern for LLM-based apps and agents. The basic idea of chain-of-thought prompting is to let a model generate a step-by-step solution (“reasoning trace”) before... | [["llm", "research", "benchmarks", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "research", "benchmarks", "tools"] | null | null |
4a3c9424-92c0-4fdf-81ab-cf6c45276a63 | completed | 2025-01-16T03:08:37.719680 | 2025-01-19T18:49:22.821104 | b83fa2b6-1e54-4a0e-8aaa-bbfd1a8b9051 | Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Langage Model | HugoLaurencon, davanstrien, stas, Leyo, SaulLu, TimeRobber, skaramcheti, aps, giadap, yjernite, VictorSanh | idefics.md | We are excited to release IDEFICS (**I**mage-aware **D**ecoder **E**nhanced à la **F**lamingo with **I**nterleaved **C**ross-attention**S**), an open-access visual language model. IDEFICS is based on [Flamingo](https://huggingface.co/papers/2204.14198), a state-of-the-art visual language model initially developed by De... | [["llm", "computer_vision", "research", "multi_modal"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "computer_vision", "multi_modal", "research"] | null | null |
57da9fc4-424b-45ce-9863-1cd27ef41352 | completed | 2025-01-16T03:08:37.719711 | 2025-01-19T18:49:18.240162 | 16b23ee8-0408-40e2-afca-bcb42b134319 | Ethics and Society Newsletter #4: Bias in Text-to-Image Models | sasha, giadap, nazneen, allendorf, irenesolaiman, natolambert, meg | ethics-soc-4.md | **TL;DR: We need better ways of evaluating bias in text-to-image models**<br>## Introduction<br>[Text-to-image (TTI) generation](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) is all the rage these days, and thousands of TTI models are being uploaded to the Hugging Face Hub. Each modality is pote... | [["computer_vision", "research", "image_generation"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "image_generation", "research"] | null | null |
a9f35425-045b-4795-bfca-938d5170c0bd | completed | 2025-01-16T03:08:37.719722 | 2025-01-19T19:09:57.942858 | d831d758-9abd-46b4-8717-96682941a443 | Accelerating Stable Diffusion Inference on Intel CPUs | juliensimon, echarlaix | stable-diffusion-inference-intel.md | Recently, we introduced the latest generation of [Intel Xeon](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html) CPUs (code name Sapphire Rapids), its new hardware features for deep learning acceleration, and how to use them to accelerate [distributed fine-tuning](https://huggingfac... | [["implementation", "tutorial", "optimization", "image_generation", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["image_generation", "optimization", "efficient_computing", "implementation"] | null | null |
a113ed63-5176-46e5-bb43-055a636cc7c1 | completed | 2025-01-16T03:08:37.719731 | 2025-01-19T19:15:22.710078 | 9127bc7a-9b45-4be5-b3e8-946df3bec30e | CodeGemma - an official Google release for code LLMs | pcuenq, osanseviero, reach-vb, philschmid, mishig, loubnabnl | codegemma.md | CodeGemma is a family of open-access versions of Gemma specialized in code, and we’re excited to collaborate with Google on its release to make it as accessible as possible.🤗<br>CodeGemma comes in three flavors:<br>- A 2B base model specialized in infilling and open-ended generation.<br>- A 7B base model trained with both co... | [["llm", "transformers", "tools", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "transformers", "integration", "tools"] | null | null |
0e1c1b91-4841-4571-994b-36ee6bba7ced | completed | 2025-01-16T03:08:37.719740 | 2025-01-19T18:59:09.103198 | 47d6ac62-a95e-4fb8-b977-09a194b72dba | Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too | mattupson | mantis-case-study.md | Hugging Face recently launched [Inference Endpoints](https://huggingface.co/inference-endpoints); which as they put it: solves transformers in production. Inference Endpoints is a managed service that allows you to:<br>- Deploy (almost) any model on Hugging Face Hub<br>- To any cloud (AWS, and Azure, GCP on the way)<br>- On a ... | [["transformers", "mlops", "deployment", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["mlops", "deployment", "transformers", "tools"] | null | null |
045218a1-c110-4bfb-a86a-983519b34eb8 | completed | 2025-01-16T03:08:37.719749 | 2025-01-16T13:46:10.821027 | fc129fbd-b8f3-4636-be06-eacf9a891f3e | Build AI on premise with Dell Enterprise Hub | jeffboudier, philschmid, balaatdell, ianr007 | dell-enterprise-hub.md | <br>Today we announce the Dell Enterprise Hub, a new experience on Hugging Face to easily train and deploy open models on-premise using Dell platforms.<br>Try it out at [dell.huggingface.co](https://dell.huggingface.co)<br>## En... | [["llm", "mlops", "security", "deployment"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "mlops", "deployment", "security"] | null | null |
cfb8516f-dd1b-43e9-849a-1f60ebf373ec | completed | 2025-01-16T03:08:37.719758 | 2025-01-19T17:07:10.184122 | 77cf619f-d73f-47de-90cb-93a60580fbcf | Scaling AI-based Data Processing with Hugging Face + Dask | scj13, jrbourbeau, lhoestq, davanstrien | dask-scaling.md | The Hugging Face platform has many datasets and pre-trained models that make using and training state-of-the-art machine learning models increasingly accessible. However, it can be hard to scale AI tasks because AI datasets are often large (100s GBs to TBs) and using Hugging Face transformers for model inference can so... | [["transformers", "data", "implementation", "tutorial", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "data", "efficient_computing", "implementation"] | null | null |
b717ff39-c514-45ce-b471-9a8c557fd95f | completed | 2025-01-16T03:08:37.719767 | 2025-01-16T03:13:37.800269 | c9309f38-6675-4c8a-a538-a2fe6f3d51dd | Introducing NPC-Playground, a 3D playground to interact with LLM-powered NPCs | Trist4x, aduermael, gdevillele, caillef, ThomasSimonini | npc-gigax-cubzh.md | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/181_npc-gigax-cubzh/thumbnail.png" alt="Thumbnail"/>
*AI-powered NPCs* (Non-Playable Characters) are **one of the most important breakthroughs** brought about by the use of LLMs in games.
LLMs, or Large Language Models, make ... | [
["llm", "implementation", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "implementation", "tools", "text_generation"] | null | null |
ed381179-4efe-42d6-8ad5-593a69e72370 | completed | 2025-01-16T03:08:37.719777 | 2025-01-19T17:15:02.824930 | 80d4d81a-1d72-4a02-b4ad-c8d68630011e | Optimum+ONNX Runtime - Easier, Faster training for your Hugging Face models | Jingya, kshama-msft, askhade, weicwang, zhijiang | optimum-onnxruntime-training.md | ## Introduction
Transformer based models in language, vision and speech are getting larger to support complex multi-modal use cases for the end customer. Increasing model sizes directly impact the resources needed to train these models and scale them as the size increases. Hugging Face and Microsoft’s ONNX Runtime tea... | [
["llm", "optimization", "fine_tuning", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "optimization", "fine_tuning", "integration"] | null | null |
408684fa-e8f3-47e5-a10a-f56768ba9067 | completed | 2025-01-16T03:08:37.719786 | 2025-01-19T19:16:02.276528 | 6225a541-bb2c-4422-bf5d-d724a32ea0c1 | Google releases Gemma 2 2B, ShieldGemma and Gemma Scope | Xenova, pcuenq, reach-vb, joaogante | gemma-july-update.md | One month after the release of [Gemma 2](https://huggingface.co/blog/gemma2), Google has expanded their set of Gemma models to include the following new additions:
- [Gemma 2 2B](https://huggingface.co/collections/google/gemma-2-2b-release-66a20f3796a2ff2a7c76f98f) - The 2.6B parameter version of Gemma 2, making it a g... | [
["llm", "research", "security", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "security", "tools", "research"] | null | null |
5b4e0e4c-985b-4d9f-b0c4-d6b5ec9bece9 | completed | 2025-01-16T03:08:37.719795 | 2025-01-19T17:17:00.082953 | ea4ddaed-6e1f-47b1-b58d-9d6d122bff96 | Benchmarking Text Generation Inference | derek-thomas | tgi-benchmarking.md | In this blog we will be exploring [Text Generation Inference’s](https://github.com/huggingface/text-generation-inference) (TGI) little brother, the [TGI Benchmarking tool](https://github.com/huggingface/text-generation-inference/blob/main/benchmark/README.md). It will help us understand how to profile TGI beyond simple... | [
["llm", "mlops", "benchmarks", "optimization"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "benchmarks", "mlops", "optimization"] | null | null |
c685a051-6ca6-4fa6-b538-eca4b5d5b73e | completed | 2025-01-16T03:08:37.719804 | 2025-01-16T14:19:58.473671 | 9a11edf9-361d-4cb9-bccd-7c98a734a34d | Make your llama generation time fly with AWS Inferentia2 | dacorvo | inferentia-llama2.md | # Make your llama generation time fly with AWS Inferentia2
In a [previous post on the Hugging Face blog](https://huggingface.co/blog/accelerate-transformers-with-inferentia2), we introduced [AWS Inferentia2](https://aws.amazon.com/ec2/instance-types/inf2/), the second-generation AWS Inferentia accelerator, and explain... | [
["llm", "optimization", "deployment", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "deployment", "optimization", "efficient_computing"] | null | null |
50e2a17a-0260-4c77-9e27-f4ae2b06f5d7 | completed | 2025-01-16T03:08:37.719814 | 2025-01-19T17:15:22.233550 | f7549041-fb5e-4061-b6b7-9b023abbd482 | Finetune Stable Diffusion Models with DDPO via TRL | metric-space, sayakpaul, kashif, lvwerra | trl-ddpo.md | ## Introduction
Diffusion models (e.g., DALL-E 2, Stable Diffusion) are a class of generative models that are widely successful at generating images most notably of the photorealistic kind. However, the images generated by these models may not always be on par with human preference or human intention. Thus arises the ... | [
["research", "implementation", "image_generation", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["image_generation", "fine_tuning", "research", "implementation"] | null | null |
96b67765-5b3f-4955-98fb-703ac27b1ce3 | completed | 2025-01-16T03:08:37.719823 | 2025-01-19T19:08:29.285964 | 907c665c-b90a-4627-952c-0b0837146a06 | Hugging Face on PyTorch / XLA TPUs | jysohn23, lysandre | pytorch-xla.md | <a href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/13_pytorch_xla.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Training Your Favorite Transformers on Cloud TPUs using PyTorch / XLA
The PyTorch-TPU project o... | [
["transformers", "implementation", "tutorial", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "implementation", "tutorial", "integration"] | null | null |
d70d19ce-0dad-4518-ac50-7c894266f9e9 | completed | 2025-01-16T03:08:37.719832 | 2025-01-16T13:37:55.543909 | c772cdf9-a89b-4096-bfa2-5a7817e50cba | Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel | smangrul, sgugger | pytorch-fsdp.md | In this post we will look at how we can leverage **[Accelerate](https://github.com/huggingface/accelerate)** Library for training large models which enables users to leverage the latest features of **[PyTorch FullyShardedDataParallel (FSDP)](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)... | [
["llm", "implementation", "optimization", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "implementation", "optimization", "efficient_computing"] | null | null |
82289155-77b5-4235-b92f-a11b0ee237b7 | completed | 2025-01-16T03:08:37.719842 | 2025-01-16T14:19:28.554033 | f43dd8bc-0376-427f-bfe9-46505d6fc78f | How good are LLMs at fixing their mistakes? A chatbot arena experiment with Keras and TPUs | martin-gorner | keras-chatbot-arena.md | ## A chatbot arena experiment with Keras and TPUs
**<center>👉 You can play with the Keras chatbot arena<br/>while you read. [Click here](https://huggingface.co/spaces/huggingface/keras-chatbot-arena) to open it in a new tab. 👈</center>**
**Table of contents**<br/>
[1. Introduction](#1-introduction... | [
["llm", "implementation", "benchmarks", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "implementation", "benchmarks", "efficient_computing"] | null | null |
c91e24ff-cbef-47e8-a4e2-61ebba7bd171 | completed | 2025-01-16T03:08:37.719851 | 2025-01-19T18:56:59.753137 | 2e368783-aae6-4baa-a682-cd42a875c68c | Introducing Spaces Dev Mode for a seamless developer experience | pagezyhf | spaces-dev-mode.md | Hugging Face Spaces makes it easy for you to create and deploy AI-powered demos in minutes. Over 500,000 Spaces have been created by the Hugging Face community and it keeps growing! As part of [Hugging Face Spaces](https://huggingface.co/spaces), we recently released support for “Dev Mode”, to make your experience of b... | [
["mlops", "implementation", "deployment", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["mlops", "implementation", "tools", "deployment"] | null | null |
859c9291-97a6-4400-a43b-4b551385da23 | completed | 2025-01-16T03:08:37.719860 | 2025-01-18T14:47:03.597530 | 7fb5cbd4-7c48-4c82-b7b9-077e238fc8ad | From GPT2 to Stable Diffusion: Hugging Face arrives to the Elixir community | josevalim | elixir-bumblebee.md | The [Elixir](https://elixir-lang.org/) community is glad to announce the arrival of several Neural Networks models, from GPT2 to Stable Diffusion, to Elixir. This is possible thanks to the [just announced Bumblebee library](https://news.livebook.dev/announcing-bumblebee-gpt2-stable-diffusion-and-more-in-elixir-3Op73O),... | [
["llm", "transformers", "implementation", "community"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "transformers", "implementation", "community"] | null | null |
a5d93531-06aa-4d5c-9fde-076515470344 | completed | 2025-01-16T03:08:37.719869 | 2025-01-19T18:58:28.194397 | a0688431-f092-4cd5-8b08-49798e3029e8 | A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake | juliensimon, echarlaix, ofirzaf, imargulis, guybd, moshew | phi2-intel-meteor-lake.md | <p align="center">
<img src="assets/phi2-intel-meteor-lake/02.jpg" alt="David vs. Goliath revisited" width="512"><br>
</p>
Because of their impressive abilities, large language models (LLMs) require significant computing power, which is seldom available on personal computers. Consequently, we have no choice but to de... | [
["llm", "optimization", "deployment", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "efficient_computing", "deployment", "optimization"] | null | null |
fa57aee1-8acf-4a22-ab35-901131e2b830 | completed | 2025-01-16T03:08:37.719878 | 2025-01-19T18:56:56.574936 | 851b0479-4271-436d-a8a9-f47cc88abe91 | Hugging Face's TensorFlow Philosophy | rocketknight1 | tensorflow-philosophy.md | ### Introduction
Despite increasing competition from PyTorch and JAX, TensorFlow remains [the most-used deep learning framework](https://twitter.com/fchollet/status/1478404084881190912?lang=en). It also differs from those other two libraries in some very important ways. In particular, it’s quite tightly integrated wi... | [
["data", "implementation", "optimization", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["implementation", "tools", "data", "optimization"] | null | null |
fe83cd26-71b4-4f4a-8ef1-3080f24d1aea | completed | 2025-01-16T03:08:37.719887 | 2025-01-19T18:57:25.640568 | ec92917a-2984-4080-881e-5f474ec8ecb0 | Making sense of this mess | stevhliu | transformers-docs-redesign.md | <div class="flex justify-center">
<img class="rounded-sm" src="https://huggingface.co/datasets/stevhliu/personal-blog/resolve/main/transformers-docs.png"/>
</div>
<p class="text-xs">The main version of the Transformers documentation today compared to version 4.10.0 from nearly 3 years ago.</p>
As transformer models ... | [
["transformers", "implementation", "optimization", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "tools", "implementation", "optimization"] | null | null |
59657228-b051-4175-b8b7-1c6f9f3ec8e8 | completed | 2025-01-16T03:08:37.719896 | 2025-01-19T18:55:21.529148 | 724b46c3-7a86-4e08-a9e7-b8c1669e00ed | Deep Learning over the Internet: Training Language Models Collaboratively | mryab, SaulLu | collaborative-training.md | <small>
With the additional help of Quentin Lhoest and Sylvain Lesage.
</small>
Modern language models often require a significant amount of compute for pretraining, making it impossible to obtain them without access to tens and hundreds of GPUs or TPUs. Though in theory it might be possible to combine the resources o... | [
["llm", "research", "community", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "community", "efficient_computing", "research"] | null | null |
f1eff70a-ab75-4cb7-8e70-6684ad7b5e26 | completed | 2025-01-16T03:08:37.719905 | 2025-01-16T13:39:14.440246 | 9a8c5c83-b21e-4d66-b82e-18a82de20a84 | Llama 3.1 - 405B, 70B & 8B with multilinguality and long context | philschmid, osanseviero, alvarobartt, lvwerra, dvilasuero, reach-vb, marcsun13, pcuenq | llama31.md | Llama 3.1 is out! Today we welcome the next iteration of the Llama family to Hugging Face. We are excited to collaborate with Meta to ensure the best integration in the Hugging Face ecosystem. Eight open-weight models (3 base models and 5 fine-tuned ones) are available on the Hub.
Llama 3.1 comes in three sizes: 8B fo... | [
["llm", "fine_tuning", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "security", "fine_tuning", "integration"] | null | null |
38fbbca0-18f7-4010-9672-1a4379abac89 | completed | 2025-01-16T03:08:37.719914 | 2025-01-19T19:07:07.325150 | 0ac21c60-22a6-47e4-8ca4-e7ca20bf5e2b | 🪆 Introduction to Matryoshka Embedding Models | tomaarsen, xenova, osanseviero | matryoshka.md | In this blogpost, we will introduce you to the concept of Matryoshka Embeddings and explain why they are useful. We will discuss how these models are theoretically trained and how you can train them using Sentence Transformers.
Additionally, we will provide practical guidance on how to use Matryoshka Embedding models ... | [
["transformers", "research", "implementation", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "implementation", "research", "tutorial"] | null | null |
db7483cf-83e2-41ec-bd7b-572c78e7a020 | completed | 2025-01-16T03:08:37.719923 | 2025-01-16T03:13:42.452213 | c4f85476-a54a-407c-8d29-fc8b1d606d40 | How 🤗 Accelerate runs very large models thanks to PyTorch | sgugger | accelerate-large-models.md | ## Load and run large models
Meta AI and BigScience recently open-sourced very large language models which won't fit into memory (RAM or GPU) of most consumer hardware. At Hugging Face, part of our mission is to make even those large models accessible, so we developed tools to allow you to run those models even if you... | [
["llm", "transformers", "implementation", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "transformers", "implementation", "efficient_computing"] | null | null |
6e33517b-79a2-44c4-a4bb-d43658cdf685 | completed | 2025-01-16T03:08:37.719932 | 2025-01-19T17:13:30.972292 | 87c5679e-f3aa-415c-81fb-3f8d823ba4b7 | Hugging Face and Google partner for open AI collaboration | jeffboudier, philschmid | gcp-partnership.md | 
At Hugging Face, we want to enable all companies to build their own AI, leveraging open models and open source technologies. Our goal is to build an open platform, making it easy for data scientists, machine l... | [
["llm", "mlops", "research", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "mlops", "research", "integration"] | null | null |
e47c6272-13f7-4b5d-8514-b12e3ed8c52d | completed | 2025-01-16T03:08:37.719941 | 2025-01-16T13:33:27.371432 | 5ff1c8c7-b575-4c96-bf39-501259e2f7cc | Going multimodal: How Prezi is leveraging the Hub and the Expert Support Program to accelerate their ML roadmap | Violette, jeffboudier, MoritzLaurer, bmateusz | prezi-case-study.md | Everybody knows that a great visual is worth a thousand words. The team at Prezi, a visual communications software company, is putting this insight into practice with their Prezi presentations that combine images and text in highly dynamic presentations.
Prezi has joined the Hugging Face Expert Support Program to ful... | [
["mlops", "multi_modal", "integration", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["multi_modal", "mlops", "integration", "efficient_computing"] | null | null |
472d6881-c298-4c58-b43b-0cc69ee47763 | completed | 2025-01-16T03:08:37.719950 | 2025-01-19T19:15:57.743547 | e1e7b78d-d65f-46cd-83eb-03935c55027b | Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore | juliensimon | vision-transformers.md | This blog post will show how easy it is to fine-tune pre-trained Transformer models for your dataset using the Hugging Face Optimum library on Graphcore Intelligence Processing Units (IPUs). As an example, we will show a step-by-step guide and provide a notebook that takes a large, widely-used chest X-ray dataset and t... | [
["computer_vision", "transformers", "tutorial", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "transformers", "fine_tuning", "tutorial"] | null | null |
ac6d696a-573e-4bd1-a35d-a6cae0151d6d | completed | 2025-01-16T03:08:37.719959 | 2025-01-19T17:17:51.334137 | b008d044-2096-43c8-98a7-923949c12028 | Understanding BigBird's Block Sparse Attention | vasudevgupta | big-bird.md | ## Introduction
Transformer-based models have shown to be very useful for many NLP tasks. However, a major limitation of transformers-based models is its \\(O(n^2)\\) time & memory complexity (where \\(n\\) is sequence length). Hence, it's computationally very expensive to apply transformer-based models on long sequen... | [
["llm", "transformers", "research", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "transformers", "research", "efficient_computing"] | null | null |
7eb46fda-9d6f-43e7-b163-ea3787339d00 | completed | 2025-01-16T03:08:37.719968 | 2025-01-19T19:06:26.425272 | 439600e5-1c63-45ab-9d9c-34245ea24b0d | Train your first Decision Transformer | edbeeching, ThomasSimonini | train-decision-transformers.md | In a [previous post](https://huggingface.co/blog/decision-transformers), we announced the launch of Decision Transformers in the transformers library. This new technique of **using a Transformer as a Decision-making model** is getting increasingly popular.
So today, **you’ll learn to train your first Offline Decision ... | [
["transformers", "implementation", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "robotics", "implementation", "tutorial"] | null | null |
e7170cf0-02fc-4e5a-8e08-afd7299623fa | completed | 2025-01-16T03:08:37.719977 | 2025-01-19T17:18:51.730057 | 525576f3-68b8-4bfb-a37b-ec224f6f3667 | Fine-Tune W2V2-Bert for low-resource ASR with 🤗 Transformers | ylacombe | fine-tune-w2v2-bert.md | <!-- {blog_metadata} -->
<!-- {authors} -->
<a target="_blank" href="https://colab.research.google.com/github/ylacombe/scripts_and_notebooks/blob/main/Fine_Tune_W2V2_BERT_on_CV16_Mongolian.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
***New (01/2024)***: *... | [
["audio", "transformers", "tutorial", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["audio", "transformers", "fine_tuning", "tutorial"] | null | null |
215fbabc-0185-4bde-bf7f-8ed0bd36b92b | completed | 2025-01-16T03:08:37.719986 | 2025-01-19T18:49:46.161704 | 54ba0d15-c07c-494e-949c-7638edddf7d5 | Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings | meg | ethics-soc-5.md | One of the most important things to know about “ethics” in AI is that it has to do with **values**. Ethics doesn’t tell you what’s right or wrong, it provides a vocabulary of values – transparency, safety, justice – and frameworks to prioritize among them. This summer, we were able to take our understanding of values i... | [
["community"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["community"] | null | null |
3948bbdb-2be6-49bf-a2da-0106bf4b867e | completed | 2025-01-16T03:08:37.719995 | 2025-01-19T17:15:47.676569 | 3eb2ee44-028c-4ac3-ae93-116f10a0a64b | Open-source LLMs as LangChain Agents | m-ric, Jofthomas, andrewrreed | open-source-llms-as-agents.md | ## TL;DR
Open-source LLMs have now reached a performance level that makes them suitable reasoning engines for powering agent workflows: [Mixtral](https://huggingface.co/blog/mixtral) even [surpasses GPT-3.5](#results) on our benchmark, and its performance could easily be further enhanced with fine-tuning.
## Introduc... | [
["llm", "implementation", "benchmarks", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "implementation", "benchmarks", "integration"] | null | null |
b15920b2-da90-46c8-92ad-c7f0c5f1301c | completed | 2025-01-16T03:08:37.720004 | 2025-01-19T18:57:28.417852 | 59c91ff2-92e1-44c1-be68-edd200dda552 | Hugging Face Hub on the AWS Marketplace: Pay with your AWS Account | philschmid, sbrandeis, jeffboudier | aws-marketplace.md | The [Hugging Face Hub](https://aws.amazon.com/marketplace/pp/prodview-n6vsyhdjkfng2) has landed on the AWS Marketplace. Starting today, you can subscribe to the Hugging Face Hub through AWS Marketplace to pay for your Hugging Face usage directly with your AWS account. This new integrated billing method makes it easy to... | [
["llm", "mlops", "deployment", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "mlops", "deployment", "integration"] | null | null |
8eabb62a-b21a-4aab-9024-0220436e9502 | completed | 2025-01-16T03:08:37.720013 | 2025-01-19T19:04:15.500179 | dc17979c-8d8f-4b1b-8423-5ac87ea8251b | Deploy MusicGen in no time with Inference Endpoints | reach-vb, merve | run-musicgen-as-an-api.md | [MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen) is a powerful music generation model that takes in text prompt and an optional melody to output music. This blog post will guide you through generating music with MusicGen using [Inference Endpoints](https://huggingface.co/inference-endpoin... | [
["audio", "transformers", "mlops", "tutorial", "deployment"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["audio", "transformers", "mlops", "deployment"] | null | null |
a25f3b4d-ca1e-4721-b06a-d60c7d8ab38b | completed | 2025-01-16T03:08:37.720022 | 2025-01-16T03:10:40.963396 | 6b4c5e71-e2d4-4818-9c52-bbd0a1f831f4 | My Journey to a serverless transformers pipeline on Google Cloud | Maxence | how-to-deploy-a-pipeline-to-google-clouds.md | > ##### A guest blog post by community member <a href="/Maxence">Maxence Dominici</a>
This article will discuss my journey to deploy the `transformers` _sentiment-analysis_ pipeline on [Google Cloud](https://cloud.google.com). We will start with a quick introduction to `transformers` and then move to the technical par... | [
["transformers", "mlops", "implementation", "deployment"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "mlops", "deployment", "implementation"] | null | null |
4585c5b0-3cb2-4767-b584-4164f41d9b26 | completed | 2025-01-16T03:08:37.720031 | 2025-01-16T03:23:30.018243 | bb3529a1-ad37-480e-905b-8b69d537f9d8 | Making LLMs lighter with AutoGPTQ and transformers | marcsun13, fxmarty, PanEa, qwopqwop, ybelkada, TheBloke | gptq-integration.md | Large language models have demonstrated remarkable capabilities in understanding and generating human-like text, revolutionizing applications across various domains. However, the demands they place on consumer hardware for training and deployment have become increasingly challenging to meet.
🤗 Hugging Face's core mi... | [
["transformers", "data"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["data"] | null | null |
9e6803df-d6a2-4870-a1e4-567561b38310 | completed | 2025-01-16T03:08:37.720040 | 2025-01-16T15:15:03.250048 | 033ccf2d-d883-4d0d-9610-e4b1243022c9 | From PyTorch DDP to Accelerate to Trainer, mastery of distributed training with ease | muellerzr | pytorch-ddp-accelerate-transformers.md | ## General Overview
This tutorial assumes you have a basic understanding of PyTorch and how to train a simple model. It will showcase training on multiple GPUs through a process called Distributed Data Parallelism (DDP) through three different levels of increasing abstraction:
- Native PyTorch DDP through the `pytorc... | [
["transformers", "implementation", "tutorial", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["implementation", "tutorial", "transformers", "efficient_computing"] | null | null |
c50c3bcc-e1b9-4245-b5a7-78be1fb2bdcc | completed | 2025-01-16T03:08:37.720048 | 2025-01-16T03:16:59.764374 | bc4403eb-f9c4-4566-8f14-e4ae0064892f | Making thousands of open LLMs bloom in the Vertex AI Model Garden | philschmid, jeffboudier | google-cloud-model-garden.md | Today, we are thrilled to announce the launch of **Deploy on Google Cloud**, a new integration on the Hugging Face Hub to deploy thousands of foundation models easily to Google Cloud using Vertex AI or Google Kubernetes Engine (GKE). Deploy on Google Cloud makes it easy to deploy open models as API Endpoints within you... | [
["llm", "mlops", "deployment", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "mlops", "deployment", "integration"] | null | null |
28250a92-e429-45eb-8f35-0695c211031d | completed | 2025-01-16T03:08:37.720058 | 2025-01-16T03:10:09.305328 | e6e43f78-db86-4013-8296-8db8d102e56b | Introducing the Open Leaderboard for Japanese LLMs! | akimfromparis, miyao-yusuke, namgiH, t0-0, sh1gechan, hysts, clefourrier | leaderboard-japanese.md | LLMs are now increasingly capable in English, but it's quite hard to know how well they perform in other national languages, widely spoken but which present their own set of linguistic challenges. Today, we are excited to fill this gap for Japanese!
We'd like to announce the **[Open Japanese LLM Leaderboard](https://... | [
["llm", "data", "benchmarks", "community"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "benchmarks", "data", "community"] | null | null |
45583591-5f6e-4922-846f-2fe45dc6d436 | completed | 2025-01-16T03:08:37.720067 | 2025-01-19T18:52:22.077480 | b9d91315-8c17-46ba-967a-cda43b0cf6c2 | SmolLM - blazingly fast and remarkably powerful | loubnabnl, anton-l, eliebak | smollm.md | ## TL;DR
This blog post introduces [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-models-6695016cad7167254ce15966), a family of state-of-the-art small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset. It covers data curation, model evaluation, and usage.
## Introdu... | [
["llm", "data", "optimization", "quantization"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "quantization", "optimization", "data"] | null | null |
57c68fa5-ce1e-40f7-8e2f-47ca5f03ba46 | completed | 2025-01-16T03:08:37.720076 | 2025-01-16T03:14:18.625553 | b8288440-55ea-4ea9-8180-b3f2173aaf40 | Exploring the Daily Papers Page on Hugging Face | AdinaY | daily-papers.md | In the fast-paced world of research, staying up-to-date with the latest advancements is crucial. To help developers and researchers keep a pulse on the cutting-edge of AI, Hugging Face introduced the [Daily Papers](https://huggingface.co/papers) page. Since its launch, Daily Papers has featured high-quality research se... | [
["research", "tutorial", "community", "tools"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["research", "community", "tools", "tutorial"] | null | null |
63a0cd7f-cf7e-4994-a988-bd1c7b5d21d2 | completed | 2025-01-16T03:08:37.720085 | 2025-01-19T17:20:20.399328 | ff16775d-c706-4f54-a5a6-3c42b74a504e | Hosting your Models and Datasets on Hugging Face Spaces using Streamlit | merve | streamlit-spaces.md | ## Showcase your Datasets and Models using Streamlit on Hugging Face Spaces
[Streamlit](https://streamlit.io/) allows you to visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you through hosting models and datasets and serving your Streamlit applications in Hug... | [
["llm", "mlops", "implementation", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "mlops", "implementation", "tutorial"] | null | null |
bcf488c9-f1d0-4fb6-938d-9168a6d03227 | completed | 2025-01-16T03:08:37.720094 | 2025-01-16T13:34:14.453863 | 77a6cb7f-58bc-465a-9403-2f239d25ac80 | Fine tuning CLIP with Remote Sensing (Satellite) images and captions | arampacha, devv, goutham794, cataluna84, ghosh-r, sujitpal | fine-tune-clip-rsicd.md | ## Fine tuning CLIP with Remote Sensing (Satellite) images and captions
<img src="/blog/assets/30_clip_rsicd/clip-rsicd-header-image.png"/>
In July this year, [Hugging Face](https://huggingface.co/) organized a [Flax/JAX Community Week](https://github.com/huggingface/transformers/blob/master/examples/research_project... | [
["computer_vision", "research", "multi_modal", "fine_tuning"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["computer_vision", "multi_modal", "fine_tuning", "research"] | null | null |
1d96d156-f61c-49d0-939c-1b9a30260b61 | completed | 2025-01-16T03:08:37.720102 | 2025-01-19T18:56:09.735958 | 9f5df04d-99a3-4721-af8a-e0a8e18a8e67 | 'Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker' | philschmid | sagemaker-distributed-training-seq2seq.md | <a target="_blank" href="https://github.com/huggingface/notebooks/blob/master/sagemaker/08_distributed_summarization_bart_t5/sagemaker-notebook.ipynb">
<img src="https://badgen.net/badge/Github/Open/black?icon=github" alt="Open on Github"/>
</a>
In case you missed it: on March 25th [we announced a collaboration w... | [
["llm", "transformers", "mlops", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "transformers", "mlops", "tutorial"] | null | null |
4309156c-caaa-4828-a7d4-8b25454f146e | completed | 2025-01-16T03:08:37.720111 | 2025-01-16T14:19:34.622569 | 16a468c2-b1c3-4f5d-be2c-937e8df82fbb | 'Deploy Hugging Face models easily with Amazon SageMaker' | nan | deploy-hugging-face-models-easily-with-amazon-sagemaker.md | # **Deploy Hugging Face models easily with Amazon SageMaker 🏎**
Earlier this year[ we announced a strategic collaboration with Amazon](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) to make it easier for companies to use Hugging Face in Amazon SageMaker, and ship cutting-edge Machine L... | [
["transformers", "mlops", "deployment", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["transformers", "mlops", "deployment", "integration"] | null | null |
308a8996-a6de-42f0-b3c0-b01590f3e803 | completed | 2025-01-16T03:08:37.720121 | 2025-01-19T17:20:05.131058 | 47d0c3fd-5446-429f-89cb-ca692ca56dc8 | Panel on Hugging Face | philippjfr, sophiamyang | panel-on-hugging-face.md | We are thrilled to announce the collaboration between Panel and Hugging Face! 🎉 We have integrated a Panel template in Hugging Face Spaces to help you get started building Panel apps and deploy them on Hugging Face effortlessly.
<a href="https://huggingface.co/new-space?template=Panel-Org/panel-template"> <img src="... | [
["tutorial", "community", "deployment", "tools", "integration"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["tools", "integration", "deployment", "tutorial"] | null | null |
62a997e6-a6ad-4478-9623-e3fb9c45f68b | completed | 2025-01-16T03:08:37.720130 | 2025-01-18T14:43:28.166162 | 732e422f-1abc-4ed5-8611-bbae565c2429 | Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon | juliensimon, Haihao, antonyvance, MatrixYao, lianglv, Suleyman Sair, gserochi, Debbh, kding1 | cost-efficient-rag-applications-with-intel.md | <p align="center">
<img src="assets/cost_efficient_rag_applications_with_intel/main.jpg" width="512"><br>
</p>
Retrieval-augmented generation (RAG) enhances text generation with a large language model by incorporating fresh domain knowledge stored in an external datastore. Separating your company data from the knowle... | [
["llm", "mlops", "optimization", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "mlops", "optimization", "efficient_computing"] | null | null |
9a073ad7-bce1-4cda-afcf-0b5ba4251bd7 | completed | 2025-01-16T03:08:37.720138 | 2025-01-19T18:49:06.462669 | 44d67d46-d13f-4217-a353-8cc2479b9396 | Welcome Gemma 2 - Google’s new open LLM | philschmid, osanseviero, pcuenq, lewtun, tomaarsen, reach-vb | gemma2.md | Google released Gemma 2, the latest addition to its family of state-of-the-art open LLMs, and we are excited to collaborate with Google to ensure the best integration in the Hugging Face ecosystem. You can find the 4 open-weight models (2 base models & 2 fine-tuned ones) on the Hub. Among the features and integrations ... | [
[
"llm",
"transformers",
"research",
"integration"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"transformers",
"integration",
"research"
] | null | null |
ba352d76-42aa-4c8d-9bb6-bd428b332174 | completed | 2025-01-16T03:08:37.720147 | 2025-01-19T19:03:33.944879 | 451b9017-0b9b-4e58-b707-a0a2d93aff30 | XetHub is joining Hugging Face! | yuchenglow, julien-c | xethub-joins-hf.md | We are super excited to officially announce that Hugging Face acquired XetHub 🔥
XetHub is a Seattle-based company founded by Yucheng Low, Ajit Banerjee, Rajat Arya who previously worked at Apple where they built and scaled Apple’s internal ML infrastructure. XetHub’s mission is to enable software engineering best pra... | [
[
"data",
"mlops",
"tools",
"integration"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"mlops",
"data",
"tools",
"integration"
] | null | null |
4a43486c-0d6c-436b-8033-3826d7a54974 | completed | 2025-01-16T03:08:37.720156 | 2025-01-19T18:48:54.735987 | 6867ccdd-e577-440d-869e-d97f358b8e80 | Train and Fine-Tune Sentence Transformers Models | espejelomar | how-to-train-sentence-transformers.md | > This guide is only suited for Sentence Transformers before v3.0. Read [Training and Finetuning Embedding Models with Sentence Transformers v3](train-sentence-transformers) for an updated guide.
# Train and Fine-Tune Sentence Transformers Models
Check out this tutorial with the Notebook Companion:
<a target="_blank... | [
[
"transformers",
"implementation",
"tutorial",
"fine_tuning"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"transformers",
"fine_tuning",
"implementation",
"tutorial"
] | null | null |
941dabd3-a816-47ca-9f3f-e10daaacf3d0 | completed | 2025-01-16T03:08:37.720165 | 2025-01-19T18:48:41.622871 | a5a18bd1-597c-47b7-bd87-10b2b8e2f79d | Welcome PaliGemma 2 – New vision language models by Google | merve, andsteing, pcuenq, ariG23498 | paligemma2.md | We are excited to welcome Google's all-new vision language models, PaliGemma 2, a new iteration of PaliGemma. Like its predecessor, PaliGemma 2 uses the same powerful [SigLIP](https://huggingface.co/collections/google/siglip-659d5e62f0ae1a57ae0e83ba) for vision, but it upgrades to the latest Gemma 2 for the text decode... | [
[
"computer_vision",
"research",
"multi_modal"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"multi_modal",
"fine_tuning",
"research"
] | null | null |
e8385c10-f925-45f2-a3c4-682dad9b4889 | completed | 2025-01-16T03:08:37.720174 | 2025-01-19T17:19:04.890351 | f9cb1af9-dfcf-4448-8f59-4026f2149f52 | Hugging Face and Graphcore partner for IPU-optimized Transformers | sallydoherty | graphcore.md | > ##### Speaking at the 2021 AI Hardware Summit, Hugging Face announced the launch of their new Hardware Partner Program, including device-optimized models and software integrations. Here, Graphcore - creators of the Intelligence Processing Unit (IPU) and a founding member of the program – explain how their partnership... | [
[
"transformers",
"optimization",
"integration",
"efficient_computing"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"transformers",
"optimization",
"integration",
"efficient_computing"
] | null | null |
b72bc9eb-db3b-424b-897c-aad0c6d2045d | completed | 2025-01-16T03:08:37.720183 | 2025-01-18T14:44:04.765729 | 79119647-a9dd-447c-960e-fa928ff89e6a | Introducing Würstchen: Fast Diffusion for Image Generation | dome272, babbleberns, kashif, sayakpaul, pcuenq | wuerstchen.md | 
## What is Würstchen?
Würstchen is a diffusion model, whose text-conditional component works in a highly compressed latent space of images. Why is this impo... | [
[
"computer_vision",
"research",
"image_generation",
"efficient_computing"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"image_generation",
"research",
"efficient_computing"
] | null | null |
7f14534d-6925-4e06-bcd9-863b803a1592 | completed | 2025-01-16T03:08:37.720192 | 2025-01-19T19:04:18.609028 | 580672df-f117-40bd-9534-e78195342d74 | Ethics and Society Newsletter #1 | meg | ethics-soc-1.md | Hello, world!
Originating as an open-source company, Hugging Face was founded on some key ethical values in tech: _collaboration_, _responsibility_, and _transparency_. To code in an open environment means having your code – and the choices within – viewable to the world, associated with your account and available for... | [
[
"data",
"research",
"community"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"community",
"research",
"data"
] | null | null |
99bf687a-3d2c-41c1-a9f0-7c1683648b52 | completed | 2025-01-16T03:08:37.720201 | 2025-01-19T18:57:30.972652 | 2b682308-f9e4-49ef-820e-538cbc3c85d1 | StarCoder: A State-of-the-Art LLM for Code | lvwerra, loubnabnl | starcoder.md | ## Introducing StarCoder
StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. ... | [
[
"llm",
"research",
"benchmarks",
"fine_tuning"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"research",
"benchmarks",
"fine_tuning"
] | null | null |
1557221e-640d-46f6-be1f-2b2bde95c806 | completed | 2025-01-16T03:08:37.720210 | 2025-01-19T19:02:00.376374 | 22653d48-c2e6-427a-9cc8-206c18a65e3c | New ViT and ALIGN Models From Kakao Brain | adirik, Unso, dylan-m, jun-untitled | vit-align.md | Kakao Brain and Hugging Face are excited to release a new open-source image-text dataset [COYO](https://github.com/kakaobrain/coyo-dataset) of 700 million pairs and two new visual language models trained on it, [ViT](https://github.com/kakaobrain/coyo-vit) and [ALIGN](https://github.com/kakaobrain/coyo-align). This is ... | [
[
"computer_vision",
"data",
"research",
"multi_modal"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"data",
"multi_modal",
"research"
] | null | null |
aacaba35-93ca-4471-8a65-3390539838e7 | completed | 2025-01-16T03:08:37.720219 | 2025-01-18T14:43:43.844029 | 90702660-5edd-43ed-8c67-3d0c7979d21f | Introducing the Synthetic Data Generator - Build Datasets with Natural Language | davidberenstein1957, sdiazlor, Leiyre, dvilasuero, Ameeeee, burtenshaw | synthetic-data-generator.md | Introducing the [Synthetic Data Generator](https://huggingface.co/spaces/argilla/synthetic-data-generator), a user-friendly application that takes a no-code approach to creating custom datasets with Large Language Models (LLMs). The best part: A simple step-by-step process, making dataset creation a non-technical breez... | [
[
"llm",
"data",
"tutorial",
"tools"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"data",
"tools",
"tutorial"
] | null | null |
bd14a533-acf8-4f1b-be0e-4ef9dfc34c97 | completed | 2025-01-16T03:08:37.720228 | 2025-01-19T18:53:56.767529 | 2d833f4e-6b15-4825-966b-cf3dc4004f63 | Large-scale Near-deduplication Behind BigCode | chenghao | dedup.md | ## Intended Audience
People who are interested in document-level near-deduplication at a large scale, and have some understanding of hashing, graph and text processing.
## Motivations
It is important to take care of our data before feeding it to the model, at least Large Language Model in our case, as the old saying... | [
[
"llm",
"data",
"research"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"data",
"research",
"security"
] | null | null |
0f3f80f7-5e01-4f11-8d2c-8e14560d3f5e | completed | 2025-01-16T03:08:37.720237 | 2025-01-19T18:59:57.766904 | 38e29ce8-b5ba-4421-8750-be73a9d74732 | How we leveraged distilabel to create an Argilla 2.0 Chatbot | plaguss, gabrielmbmb, sdiazlor, osanseviero, dvilasuero | argilla-chatbot.md | ## TL;DR
Discover how to build a Chatbot for a tool of your choice ([Argilla 2.0](https://github.com/argilla-io/argilla) in this case) that can understand technical documentation and chat with users about it.
In this article, we'll show you how to leverage [distilabel](https://github.com/argilla-io/distilabel) and f... | [
[
"llm",
"data",
"implementation",
"tutorial",
"fine_tuning"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"fine_tuning",
"implementation",
"tutorial"
] | null | null |
f599ad2a-e4e8-494a-ab36-3b40eb7832d6 | completed | 2025-01-16T03:08:37.720246 | 2025-01-19T19:14:46.214075 | 6e466236-2e2f-478a-9f8c-81177ca574ad | Open LLM Leaderboard: DROP deep dive | clefourrier, cabreraalex, stellaathena, SaylorTwift, thomwolf | open-llm-leaderboard-drop.md | Recently, [three new benchmarks](https://twitter.com/clefourrier/status/1722555555338956840) were added to the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): Winogrande, GSM8k and DROP, using the original implementations reproduced in the [EleutherAI Harness](https://github.co... | [
[
"llm",
"data",
"research",
"benchmarks"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"benchmarks",
"research",
"data"
] | null | null |
03555740-6c78-4785-a75f-16feb152cdca | completed | 2025-01-16T03:08:37.720255 | 2025-01-19T18:54:51.095472 | 692686a6-2bb1-4888-b119-fafcdf8f4233 | Getting Started with Transformers on Habana Gaudi | juliensimon | getting-started-habana.md | A couple of weeks ago, we've had the pleasure to [announce](https://huggingface.co/blog/habana) that [Habana Labs](https://habana.ai) and [Hugging Face](https://huggingface.co/) would partner to accelerate Transformer model training.
Habana Gaudi accelerators deliver up to 40% better price performance for training mac... | [
[
"transformers",
"implementation",
"tutorial",
"fine_tuning",
"efficient_computing"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"transformers",
"implementation",
"fine_tuning",
"efficient_computing"
] | null | null |
b864460d-6d2c-444b-9588-baaf726200fa | completed | 2025-01-16T03:08:37.720264 | 2025-01-19T19:11:44.668082 | 55727c27-023e-4b14-8316-eed80098880c | Welcome fastText to the Hugging Face Hub | sheonhan, juanpino | fasttext.md | [fastText](https://fasttext.cc/) is a library for efficient learning of text representation and classification. [Open-sourced](https://fasttext.cc/blog/2016/08/18/blog-post.html) by Meta AI in 2016, fastText integrates key ideas that have been influential in natural language processing and machine learning over the pas... | [
[
"tools",
"text_classification",
"integration",
"efficient_computing"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"text_classification",
"tools",
"integration",
"efficient_computing"
] | null | null |
c74ebc3b-50bc-4c85-bf8b-92f80b5490ad | completed | 2025-01-16T03:08:37.720273 | 2025-01-19T17:20:46.110156 | 1d86224f-d010-407f-b0f2-3d0220ae3408 | SegMoE: Segmind Mixture of Diffusion Experts | Warlord-K, Icar, harishp | segmoe.md | SegMoE is an exciting framework for creating Mixture-of-Experts Diffusion models from scratch! SegMoE is comprehensively integrated within the Hugging Face ecosystem and comes supported with `diffusers` 🔥!
Among the features and integrations being released today:
- [Models on the Hub](https://huggingface.co/models?s... | [
[
"implementation",
"tools",
"image_generation",
"integration"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"image_generation",
"implementation",
"integration",
"tools"
] | null | null |
8106d8ff-e2c9-4f6d-acac-19c1205246d8 | completed | 2025-01-16T03:08:37.720282 | 2025-01-19T17:19:12.914211 | f984d6a0-4e94-4c9d-8632-c43f7c2ebd5c | 🤗 PEFT welcomes new merging methods | smangrul, sayakpaul | peft_merging.md | Model merging has quickly become the de-facto standard of pushing the performance limits of large language models. On the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), we continue to notice merged models topping up the charts. Our very own Omar Sanseviero, made a little sprin... | [
[
"llm",
"optimization",
"tools",
"fine_tuning"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"optimization",
"tools",
"fine_tuning"
] | null | null |
ec5d95f8-03b8-4060-95ca-f50500894839 | completed | 2025-01-16T03:08:37.720291 | 2025-01-16T03:18:15.708676 | ed891e73-9c38-4d62-99fe-f320c5fd41b7 | Releasing Swift Transformers: Run On-Device LLMs in Apple Devices | pcuenq | swift-coreml-llm.md | I have a lot of respect for iOS/Mac developers. I started writing apps for iPhones in 2007, when not even APIs or documentation existed. The new devices adopted some unfamiliar decisions in the constraint space, with a combination of power, screen real estate, UI idioms, network access, persistence, and latency that wa... | [
[
"llm",
"implementation",
"deployment",
"efficient_computing"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"implementation",
"deployment",
"efficient_computing"
] | null | null |
8154be8d-b31b-4153-be60-ffa633ab7c89 | completed | 2025-01-16T03:08:37.720300 | 2025-01-16T13:37:58.631166 | df623d11-01ab-42f9-a4f9-7def067997a0 | 🇨🇿 BenCzechMark - Can your LLM Understand Czech? | mfajcik, hynky, mdocekal, xdolez52, jstetina, Lakoc, popelucha, hales, michal-stefanik, Adamiros, davidamczyk, janH, jsedivy | benczechmark.md | The 🇨🇿 BenCzechMark is the first and most comprehensive evaluation suite for assessing the abilities of Large Language Models (LLMs) in the Czech language. It aims to test how well LLMs can:
- Reason and perform complex tasks in Czech.
- Generate and verify grammatically and semantically correct Czech.
- Extract inf... | [
[
"llm",
"research",
"benchmarks"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"benchmarks",
"research",
"translation"
] | null | null |
a0d6d5fd-b653-4448-b122-edf717bd7109 | completed | 2025-01-16T03:08:37.720309 | 2025-01-19T19:11:38.460901 | 36cfe369-5482-44f0-8432-9120dfe9af12 | Fine-Tune ViT for Image Classification with 🤗 Transformers | nateraw | fine-tune-vit.md | <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script>
<a target="_blank" href="https://colab.research.google.com/github/nateraw/huggingface-hub-examples/blob/main/vit_image_classification_explained.ipynb">
<img src="https://colab.research.google.com/assets/cola... | [
[
"computer_vision",
"transformers",
"tutorial",
"fine_tuning"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"transformers",
"fine_tuning",
"tutorial"
] | null | null |