| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/agents-course | 295 | [QUESTION] Ambiguity about what chat templates are. | Issue:
Where ➡ https://huggingface.co/learn/agents-course/unit1/messages-and-special-tokens
> This is where chat templates come in. They act as the bridge between conversational messages (user and assistant turns) and the specific formatting requirements of your chosen LLM. In other words, chat templates structure t... | https://github.com/huggingface/agents-course/issues/295 | open | [
"question"
] | 2025-03-06T17:12:41Z | 2025-03-06T17:12:41Z | null | MekongDelta-mind |
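The ambiguity in the question above is easier to resolve with a concrete sketch. Below is a minimal, hypothetical chat-template function; the ChatML-style control tokens are chosen for illustration only, since real templates are model-specific Jinja strings applied via `tokenizer.apply_chat_template`:

```python
# Minimal sketch of what a chat template does: turn a list of role-tagged
# messages into the single prompt string a given LLM expects.
# The <|im_start|>/<|im_end|> tokens below are illustrative, not universal.

def apply_chat_template(messages, add_generation_prompt=True):
    prompt = ""
    for msg in messages:
        prompt += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        prompt += "<|im_start|>assistant\n"
    return prompt

chat = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello, how can I help?"},
]
print(apply_chat_template(chat))
```

The "bridge" the course describes is exactly this mapping from structured turns to model-specific formatting.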
huggingface/open-r1 | 483 | How to calculate total optimization steps | I ran it on 8 GPUs and set num_generations to 8 and num_processes=7. Why is Total optimization steps = 196? Shouldn't it be num_examples / total_train_batch_size? It seems that multiplying by num_generations yields 196. Why do we need to multiply by num_generations?
[INFO|trainer.py:2405] 2025-03-06 12:04:09,913 >> ***** Running traini... | https://github.com/huggingface/open-r1/issues/483 | open | [] | 2025-03-06T09:47:19Z | 2025-03-13T08:45:23Z | null | HelloWorld506 |
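For readers hitting the same confusion, here is a back-of-the-envelope sketch of the step accounting (an assumption about the bookkeeping, not the trainer's exact code): each prompt is expanded into `num_generations` completions, so the number of rows the trainer iterates over grows by that factor.

```python
# Rough GRPO step accounting: prompts are duplicated num_generations times
# before being split across devices, which is why the factor appears.

def total_optimization_steps(num_examples, num_generations,
                             per_device_batch_size, num_processes,
                             gradient_accumulation_steps=1, num_epochs=1):
    effective_rows = num_examples * num_generations
    rows_per_step = (per_device_batch_size * num_processes
                     * gradient_accumulation_steps)
    return (effective_rows * num_epochs) // rows_per_step
```

Plugging in one's own batch sizes usually reproduces the logged number.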
huggingface/transformers.js | 1,221 | How to use Xenova/deplot using the transformers.js library. | ### Question
Currently I'm doing:
```
this.pipeline = await pipeline("image-text-to-text", "Xenova/deplot", {
progress_callback: (progress) => {
this.updateProgress({
status: `Loading model: ${progress.status}`,
progress: 0.1 + (progress.progress * 0.9)
});... | https://github.com/huggingface/transformers.js/issues/1221 | open | [
"question"
] | 2025-03-06T07:56:07Z | 2025-03-06T11:36:19Z | null | aadya940 |
huggingface/peft | 2,410 | running forward loop using get_peft_model disables requires_grad on output | Hi,
I would like to report a recent issue I have been facing, but I am not sure if it is a bug or I am doing something wrong in the process. The steps to re-create the issue are easy. The issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the... | https://github.com/huggingface/peft/issues/2410 | closed | [] | 2025-03-06T05:12:42Z | 2025-04-13T15:03:40Z | 4 | Hamidreza3252 |
huggingface/lerobot | 826 | Should the pi0 pytorch model on Huggingface load model.safetensors or the other three safetensors? | https://huggingface.co/lerobot/pi0/tree/main
What is the difference between `model.safetensors` and the other three safetensors files (`model-00001-of-0000*.safetensors`)? The pi0 model `from_pretrained()` method will load `model.safetensors` by default instead of `model-00001-of-0000*.safetensors`.
| https://github.com/huggingface/lerobot/issues/826 | closed | [
"question",
"stale"
] | 2025-03-06T03:12:05Z | 2025-10-08T08:42:49Z | null | chopinxxxx |
huggingface/agents-course | 290 | [QUESTION] First Agent code does not produce any output | I cloned and tried running the first agent app.py. I wanted to try the image generation tool. The application built and ran, but when I tried typing something in the chat such as "generate an image of a cat", there was no response from the bot; it stays blank.
| https://github.com/huggingface/agents-course/issues/290 | open | [
"question"
] | 2025-03-05T23:49:06Z | 2025-03-18T14:45:44Z | null | Sabk0926 |
huggingface/accelerate | 3,421 | How to sync distributed model parameters when training in a continual learning fashion? | When performing distributed continual learning tasks, it is common to expand model parameters as tasks increase. For example, I have defined an `expand_classifier()` method with random initialization to increase the parameters of the classifier.
How can I ensure that the newly added parameters are initialized the sa... | https://github.com/huggingface/accelerate/issues/3421 | closed | [] | 2025-03-05T13:44:15Z | 2025-04-13T15:06:22Z | null | Iranb |
huggingface/lerobot | 817 | SO 100 Arm assembly instruction inconsistency | Step 22 of the assembly guide shows a picture of the wrist that is flipped compared to the drawing and the front page photo. Are both right? If not, which one is correct?
[Latest instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#wrist-assembly):
<img width="723" alt="Image" src="https://g... | https://github.com/huggingface/lerobot/issues/817 | closed | [
"question",
"robots",
"stale"
] | 2025-03-05T05:23:57Z | 2025-11-30T02:37:07Z | null | liuhuanjim013 |
huggingface/open-r1 | 472 | how to set the max_model_length, max_new_tokens and generation_size when evaluating? | Suppose the max_position_embedding of my model is 4096; how should I set max_model_length, max_new_tokens and generation_size to get the correct evaluation result? For example, set max_model_length=4096, max_new_tokens=1000, generation_size=1000? | https://github.com/huggingface/open-r1/issues/472 | open | [] | 2025-03-05T04:01:48Z | 2025-03-12T03:41:42Z | null | ItGirls |
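The constraint behind this question can be sanity-checked with a tiny helper (the parameter names here are illustrative, not the evaluation harness's flags): the prompt length plus the generation budget must fit within the model's positional limit.

```python
# Quick check that a generation setting fits the model's context window.
# prompt_len is the tokenized prompt length; the names are illustrative.

def check_generation_budget(prompt_len, max_new_tokens, max_position_embeddings):
    return prompt_len + max_new_tokens <= max_position_embeddings
```

With a 4096-position model, max_new_tokens=1000 is only safe when prompts stay under 3096 tokens.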
huggingface/transformers | 36,546 | how to use transformers with musicgen with float16 | ```
import transformers, torch, builtins, numpy
processor = transformers.AutoProcessor.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16)
model = transformers.MusicgenMelodyForConditionalGeneration.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16).to('... | https://github.com/huggingface/transformers/issues/36546 | closed | [] | 2025-03-05T00:40:24Z | 2025-03-06T09:49:18Z | null | ghost |
huggingface/lerobot | 813 | State Collection Timing Issue in Manipulator Teleoperation: Post-action vs Pre-action States | **Description:**
I've noticed in lerobot/lerobot/common/robot_devices/robots/manipulator.py that during teleoperation, the state being collected is the state after action execution. Is this intended behavior?
In my understanding, model inference should use the state before action execution, not after. This could potent... | https://github.com/huggingface/lerobot/issues/813 | closed | [
"question",
"policies",
"stale"
] | 2025-03-04T14:19:52Z | 2025-10-07T02:26:55Z | null | www-Ye |
huggingface/agents-course | 284 | [QUESTION] Clarify Payment Required for completing Unit 2 notebooks | For the notebook for [components.ipynb]() I ran the `IngestionPipeline` function as follows:
```py
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline
# create the pipelin... | https://github.com/huggingface/agents-course/issues/284 | open | [
"question"
] | 2025-03-04T14:16:01Z | 2025-03-06T16:08:39Z | null | carlosug |
huggingface/agents-course | 281 | [any free and unpaid alternative for Inference Providers?] | while executing the [notebook](https://colab.research.google.com/github/huggingface/agents-course/blob/main/notebooks/unit2/smolagents/multiagent_notebook.ipynb) on **unit2. multi agent systems**, i got the following client error for [Inference Providers](https://huggingface.co/blog/inference-providers):
```python
> ... | https://github.com/huggingface/agents-course/issues/281 | open | [
"question"
] | 2025-03-04T12:51:26Z | 2025-03-31T07:23:49Z | null | carlosug |
huggingface/lerobot | 808 | How to acquire the End-Effector (eef) pose? | Hi, thanks for your great work!
How can we acquire the eef pose and control the eef pose instead of only the joint states?
Thanks for your attention and hope for your kind response! | https://github.com/huggingface/lerobot/issues/808 | closed | [
"question",
"policies",
"robots",
"stale"
] | 2025-03-04T09:30:35Z | 2025-10-16T02:28:50Z | null | oym1994 |
huggingface/lerobot | 806 | How to control a local robot with a remote model? | I have achieved inference on my local computer. I want to know how to put the model on a remote server and control a robot locally.
My robot: Koch1.1 | https://github.com/huggingface/lerobot/issues/806 | closed | [
"question",
"stale"
] | 2025-03-04T09:09:12Z | 2025-10-16T02:28:51Z | null | neverspillover |
huggingface/optimum-intel | 1,186 | How to initialize a development env for this repo? | Hi! I would like to contribute to this repo, but met some issues during env initialization. I ran `pip install -e .` to install the current repo into my local python env.
However, an error came out when running `pytest tests/`:
`ImportError while importing test module '/home/shji/codes/optimum-intel/tests/ipex/test_modeling.py'.
Hint: make su... | https://github.com/huggingface/optimum-intel/issues/1186 | closed | [] | 2025-03-04T06:10:15Z | 2025-03-10T06:01:21Z | null | shjiyang-intel |
huggingface/open-r1 | 457 | How to run rejection sampling | I ran generate_reasoning and got the CoT data. How do I run rejection sampling after that? | https://github.com/huggingface/open-r1/issues/457 | open | [] | 2025-03-03T03:56:32Z | 2025-03-03T03:56:32Z | null | JavaZeroo |
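Absent an official answer, here is a minimal sketch of the rejection-sampling step (not the open-r1 implementation): keep only completions whose extracted final answer matches the reference. The `extract_answer` callable and the `answer:` marker are placeholders.

```python
# Rejection sampling over generated chain-of-thought samples: discard any
# completion whose parsed final answer disagrees with the reference.

def reject_sample(samples, extract_answer, reference):
    return [s for s in samples if extract_answer(s) == reference]

samples = [
    "reasoning... answer: 4",
    "reasoning... answer: 5",
]
kept = reject_sample(samples, lambda s: s.rsplit("answer: ", 1)[-1], "4")
```

The surviving samples would then feed a standard SFT pass.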
huggingface/lerobot | 797 | use_delta_joint_actions_aloha | if self.use_delta_joint_actions_aloha:
raise NotImplementedError(
"`use_delta_joint_actions_aloha` is used by pi0 for aloha real models. It is not ported yet in LeRobot."
)
When will you add an implementation for it? It is very important.
| https://github.com/huggingface/lerobot/issues/797 | closed | [
"question",
"policies"
] | 2025-03-02T18:14:13Z | 2025-04-03T16:39:39Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/open-r1 | 453 | How to log the intermediate output results? | How can I log the intermediate output results to track the 'aha moment'? How can I set this in the config or modify the code? | https://github.com/huggingface/open-r1/issues/453 | closed | [] | 2025-03-01T17:08:48Z | 2025-03-09T13:53:59Z | null | 0205090923 |
huggingface/Math-Verify | 32 | How to adjust the priority of '\\ln' and '*' when parsing latex? | When I try to parse a string: "$$ \\dfrac{\\cos x}{2\\lnx * x^{\\ln x - 1}} $$", the result is "cos(x)/((2*log(x*x**(log(x, E) - 1), E)))", rather than "cos(x)/((2*x**(log(x, E) - 1)*log(x, E)))". It seems that there is something wrong when dealing with the priority of '\\ln' and '*'. So I wonder how to adjust the prio... | https://github.com/huggingface/Math-Verify/issues/32 | closed | [] | 2025-03-01T09:22:31Z | 2025-07-01T20:17:49Z | null | yhhu99 |
huggingface/smolagents | 842 | How to pass custom type variables to tools |
I’m working on a Telegram bot and using the `smolagents` library to create agents that handle reminders. The issue I’m facing is related to passing the `context` object (which is specific to each message received by the bot) to a tool function (`add_reminder`). The `context` object is required to access the `job_queue... | https://github.com/huggingface/smolagents/issues/842 | closed | [] | 2025-02-28T23:04:49Z | 2025-03-01T23:45:40Z | null | ebravofm |
huggingface/sentence-transformers | 3,254 | How to train SentenceTransformer with multiple negatives? | I have a dataset like: {'anchor': str, 'positive': str, 'negative': list[str]}
It seems invalid with the example code:
```python
model = SentenceTransformer(model_path)
extend_position_embeddings(model._first_module().auto_model,max_length)
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)
tr... | https://github.com/huggingface/sentence-transformers/issues/3254 | closed | [] | 2025-02-28T15:01:19Z | 2025-06-13T05:04:35Z | null | rangehow |
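One common workaround (a sketch, not an official sentence-transformers API) is to explode each row's negative list into separate (anchor, positive, negative) triplets, which triplet-style losses accept directly:

```python
# Reshape {'anchor', 'positive', 'negative': list[str]} rows into flat
# (anchor, positive, negative) triplets, one row per negative.

def explode_negatives(rows):
    out = []
    for row in rows:
        for neg in row["negative"]:
            out.append({"anchor": row["anchor"],
                        "positive": row["positive"],
                        "negative": neg})
    return out
```

The exploded rows can then be wrapped in a regular `datasets.Dataset` for the trainer.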
huggingface/lerobot | 789 | how to run eval with mujoco sim? | Right now, running eval.py only outputs to the command line. How can I run eval with the MuJoCo sim? | https://github.com/huggingface/lerobot/issues/789 | closed | [
"simulation",
"stale"
] | 2025-02-28T10:42:46Z | 2025-10-08T11:57:42Z | null | mmlingyu |
huggingface/lerobot | 788 | offline run convert_dataset_v1_to_v2.py | I need help!!!!!
For example, when I run convert_dataset_v1_to_v2.py, it prompts the following:

and what is train.parquet?

how to solve it... | https://github.com/huggingface/lerobot/issues/788 | closed | [
"bug",
"question",
"dataset",
"stale"
] | 2025-02-28T06:41:43Z | 2025-10-09T21:54:09Z | null | ximiluuuu |
huggingface/sentence-transformers | 3,252 | How to train sentence transformers with multiple machines? | The [docs](https://sbert.net/docs/sentence_transformer/training/distributed.html) describe how to train sentence transformers with multiple GPUs.
But both my model and my data are huge, and training sentence transformers with 8 GPUs in one single machine is still very slow.
Does sentence transformers support training u... | https://github.com/huggingface/sentence-transformers/issues/3252 | open | [] | 2025-02-27T13:37:02Z | 2025-02-27T13:37:02Z | null | awmoe |
huggingface/diffusers | 10,917 | Is lumina-2.0 script correct? | I wrote a script, based on the one provided [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)
It gets stuck at a loss around 0.5, and I think that is a lot, isn't it? | https://github.com/huggingface/diffusers/issues/10917 | open | [] | 2025-02-27T11:17:00Z | 2025-02-28T15:46:43Z | 3 | Riko0 |
huggingface/open-r1 | 444 | How to increase the context window from 4k to 32k on qwen models ? | Hello,
I'm trying to distill a subset of the [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/openr1-220k-math) dataset into my Qwen/Qwen2.5-Math-7B-Instruct. I want to do this via a custom SFT pipeline in order to see if I can match the results obtained in the evaluations.
However I'm struggling increasing... | https://github.com/huggingface/open-r1/issues/444 | closed | [] | 2025-02-27T10:27:43Z | 2025-07-24T23:56:12Z | null | Jeremmmyyyyy |
huggingface/trl | 2,972 | How many H20 (96GB) GPUs are needed to train Qwen7B with the GRPO algorithm? | I want to use the GRPO algorithm to train Qwen7B, but I failed using 4 H20 (96GB) GPUs with the trl library. I would like to know how many H20 GPUs are needed. | https://github.com/huggingface/trl/issues/2972 | open | [
"❓ question",
"🏋 GRPO"
] | 2025-02-27T04:12:16Z | 2025-03-14T02:22:36Z | null | Tuziking |
huggingface/lerobot | 779 | Is there a way for a robot arm with kinesthetic teaching function to collect data using lerobot? | Hello, I have a robot arm with a kinesthetic teaching function. I guess I can teach my robot the first time, and collect data from the second time onward using lerobot? I'm here to ask whether this is easy to achieve by modifying the control_robot.py file? Thanks | https://github.com/huggingface/lerobot/issues/779 | closed | [
"question",
"stale"
] | 2025-02-26T17:50:51Z | 2025-10-16T02:28:54Z | null | yzzueong |
huggingface/diffusers | 10,910 | ValueError: Attempting to unscale FP16 gradients. | ### Describe the bug
I encountered the following error when running train_text_to_image_lora.py: ValueError: Attempting to unscale FP16 gradients.
The script I am running is as follows:
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/naruto-blip-captions"
accelerate launch --mixed_... | https://github.com/huggingface/diffusers/issues/10910 | closed | [
"bug"
] | 2025-02-26T14:43:57Z | 2025-03-18T17:43:08Z | 4 | Messimanda |
huggingface/transformers.js | 1,209 | Is NFD type normalizer supported? | ### Question
Hi,
I was trying the following code on browser which uses [dewdev/language_detection](https://huggingface.co/dewdev/language_detection):
`import { pipeline, Pipeline } from '@huggingface/transformers';
export class DetectLanguage {
private modelid: string | null = null;
private detectPipeline: ... | https://github.com/huggingface/transformers.js/issues/1209 | closed | [
"question"
] | 2025-02-26T08:48:08Z | 2025-02-26T14:41:38Z | null | adewdev |
huggingface/open-r1 | 436 | Why is the reward low and not increasing in GRPO training? How can I solve this? | my config
# Model arguments
model_name_or_path: ../experiment/models/Qwen2.5-1.5B-Instruct
#model_revision: main
torch_dtype: bfloat16
attn_implementation: flash_attention_2
# Data training arguments
dataset_name: ../experiment/datasets/NuminaMath-TIR/data
dataset_configs:
- default
system_prompt: "You are a helpful A... | https://github.com/huggingface/open-r1/issues/436 | open | [] | 2025-02-26T05:12:18Z | 2025-02-27T01:06:53Z | null | AXy1527 |
huggingface/lerobot | 773 | How to overwrite the code to collect action data from another robot? | Hey, I have got a problem when I try to overwrite the code of lerobot to collect action data from my own robot. Here's the detail. My robot is a single six-joint robot arm, so I make a new RobotConfig, which only contains the info of the camera. And then I overwrite the function 'teleop_step' in the file manipulator.py. I a... | https://github.com/huggingface/lerobot/issues/773 | closed | [
"question",
"stale"
] | 2025-02-26T03:33:09Z | 2025-10-16T02:28:56Z | null | tjh-flash |
huggingface/lerobot | 771 | Example of training a policy with PI0? | is there an example config file for training a policy with PI0 policy? | https://github.com/huggingface/lerobot/issues/771 | closed | [
"question",
"policies"
] | 2025-02-25T19:39:51Z | 2025-04-03T16:44:44Z | null | pqrsqwewrty |
huggingface/diffusers | 10,904 | CLIP Score Evaluation without Pre-processing. | I am referring to [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/en/conceptual/evaluation), specifically the quantitative evaluation using CLIP score example.
We have images of shape (6, 512, 512, 3).
CLIP score is calculated using `"openai/clip-vit-base-patch16"`.
However, as far as I can... | https://github.com/huggingface/diffusers/issues/10904 | open | [
"stale"
] | 2025-02-25T16:51:44Z | 2025-03-28T15:03:20Z | 1 | e-delaney |
huggingface/lerobot | 769 | How to convert my ALOHA hdf5 data type to your dataset format? | https://github.com/huggingface/lerobot/issues/769 | closed | [
"question",
"dataset",
"stale"
] | 2025-02-25T14:07:13Z | 2025-10-16T02:28:58Z | null | return-sleep | |
huggingface/diffusers | 10,901 | HunyuanVideo in diffusers uses negative_prompt but generates wrong video | ### Describe the bug
Diffusers added negative_prompt support for hunyuan_video recently, but when I use negative_prompt and set **guidance_scale** and **true_cfg_scale**, I get a video with all black frames. Maybe I set wrong parameters or saving the video failed.
How can I fix my problem? Thanks
### Reproduction
import torch
... | https://github.com/huggingface/diffusers/issues/10901 | open | [
"bug",
"stale"
] | 2025-02-25T11:08:43Z | 2025-07-15T07:19:15Z | 2 | philipwan |
huggingface/optimum | 2,200 | Bug exporting Whisper? | ### System Info
Hi! I'm exporting some fine-tuned whisper models, small and base, being fine-tuned in english or spanish. In some cases I've detected that the tokenizer.json is 2.423KB and in other cases 3.839, being the tokenizer.json exported for the same language. I have some models in english where the tokenizer w... | https://github.com/huggingface/optimum/issues/2200 | open | [
"bug"
] | 2025-02-25T09:45:02Z | 2025-03-05T20:58:30Z | 1 | AlArgente |
huggingface/diffusers | 10,899 | Whether lohaconfig is supported in the convert_state_dict_to_diffusers method | In the train_text_to_image_lora.py file
unet_lora_config = LoraConfig(
r=cfg.rank,
lora_alpha=cfg.rank,
init_lora_weights="gaussian",
target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
modified to
unet_lora_config = LoHaConfig(
r=cfg.rank,
alpha=cfg.rank,
... | https://github.com/huggingface/diffusers/issues/10899 | open | [
"stale"
] | 2025-02-25T08:39:08Z | 2025-03-27T15:03:17Z | 2 | llm8047 |
huggingface/sentence-transformers | 3,246 | How to save the merged model trained with peft? | I am working on fine tuning a 7B model and due to the size, we trained it with lora- by following the guidance (https://sbert.net/examples/training/peft/README.html)
```python
peft_config = LoraConfig(
task_type=TaskType.FEATURE_EXTRACTION,
inference_mode=False,
r=8,
lora_alpha=32,
... | https://github.com/huggingface/sentence-transformers/issues/3246 | closed | [] | 2025-02-25T00:56:20Z | 2025-12-05T12:33:48Z | null | chz816 |
huggingface/datasets | 7,420 | better correspondence between cached and saved datasets created using from_generator | ### Feature request
At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular is to use `save_to_disk` which needs to create a... | https://github.com/huggingface/datasets/issues/7420 | open | [
"enhancement"
] | 2025-02-24T22:14:37Z | 2026-01-05T15:16:35Z | 3 | vttrifonov |
huggingface/open-r1 | 413 | How many resources are required to train deepseek r1 671b using grpo? | . | https://github.com/huggingface/open-r1/issues/413 | open | [] | 2025-02-24T11:55:12Z | 2025-02-24T11:55:12Z | null | LiuShixing |
huggingface/safetensors | 577 | Could I get safe tensor without lazy loading? | ### System Info
I see `safe_open` and `deserialize`; it seems that both are lazy loading.
If I want to load a safetensors file without lazy loading,
how can I do that? Thanks
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Reproduction
I use sglang, and in sglang model_loader/weig... | https://github.com/huggingface/safetensors/issues/577 | open | [] | 2025-02-24T07:55:33Z | 2025-03-13T16:51:49Z | 1 | voidxb |
huggingface/trl | 2,941 | How to dynamically adjust params during GRPO training? | How can I dynamically adjust params during training? For example, I want to use a smaller num_generations (8) at the beginning of GRPO training, then enlarge it to 32 and also adopt a larger temperature from the 50th step. | https://github.com/huggingface/trl/issues/2941 | open | [
"❓ question",
"🏋 GRPO"
] | 2025-02-24T02:08:52Z | 2025-02-24T07:49:10Z | null | Tomsawyerhu |
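Until the trainer supports this natively, the schedule itself is simple to encode (a sketch; wiring it into GRPOTrainer would still require a callback or a custom training loop — the switch step and values below are the ones from the question):

```python
# Step-indexed schedule for GRPO sampling parameters: small and cool
# early on, larger and hotter from the switch step onward.

def grpo_schedule(step, switch_step=50):
    if step < switch_step:
        return {"num_generations": 8, "temperature": 0.7}
    return {"num_generations": 32, "temperature": 1.0}
```

A training loop would call this each step and push the values into its sampling config.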
huggingface/open-r1 | 406 | How many GPU hours does it take to train a simple model? | I wonder how many hours it takes to use this repo to train a simple model, like DeepSeek-R1-Distill-Qwen-1.5B or DeepSeek-R1-Distill-Qwen-7B, on 8 H100s? | https://github.com/huggingface/open-r1/issues/406 | closed | [] | 2025-02-24T00:27:52Z | 2025-02-24T06:31:31Z | null | Red-Scarff |
huggingface/safetensors | 576 | How to access header with python | Is there a way to access the header in Python to know the offsets of each tensor data? | https://github.com/huggingface/safetensors/issues/576 | closed | [] | 2025-02-23T17:42:46Z | 2025-03-13T16:58:36Z | null | justinchuby |
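Yes — the safetensors header is plain JSON behind an 8-byte little-endian length prefix, so it can be read with the standard library alone. The sketch below writes a minimal file in that layout and parses the header back (the tensor name `w` is made up for illustration):

```python
# safetensors layout: <8-byte LE u64 header length><JSON header><raw tensor bytes>.
# The JSON header maps each tensor name to its dtype, shape, and byte offsets.

import json
import struct
import tempfile

header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")

with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(header_bytes)))  # length prefix
    f.write(header_bytes)                          # JSON header
    f.write(struct.pack("<2f", 1.0, 2.0))          # raw tensor data
    path = f.name

def read_header(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

parsed = read_header(path)
```

The `data_offsets` values are relative to the end of the header, so each tensor's bytes can be located without loading the others.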
huggingface/diffusers | 10,878 | How to expand peft.LoraConfig | If expanding
`peft.LoraConfig`, how should it be modified to accommodate more LoRA adapters? | https://github.com/huggingface/diffusers/issues/10878 | open | [
"stale"
] | 2025-02-23T14:01:11Z | 2025-03-25T15:03:28Z | null | llm8047 |
huggingface/diffusers | 10,874 | Does it support adding the LoHa method | Does it support adding the LoHa method?
Where can I modify it? | https://github.com/huggingface/diffusers/issues/10874 | open | [
"stale"
] | 2025-02-23T12:06:14Z | 2025-03-25T15:03:41Z | 3 | llm8047 |
huggingface/diffusers | 10,872 | [Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model | **Is your feature request related to a problem? Please describe.**
We all know the Sana model is very good but unfortunately the license is restrictive.
Recently a Sana fine-tuned model was released under the Apache license. Unfortunately SanaTransformer2DModel does not support from_single_file to use it.
**Describe the solution... | https://github.com/huggingface/diffusers/issues/10872 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | 2025-02-23T11:36:21Z | 2025-03-10T03:08:32Z | 5 | nitinmukesh |
huggingface/lerobot | 761 | How to convert from custom dataset format to LeRobotDataset format? | I'm trying to train a LeRobot model on some custom data I've recorded on a custom robot, but first, I need to convert that custom data into the correct format for LeRobotDataset. I'm guessing that an example of how to do this is in the `pusht_zarr.py` file.
Questions:
1) Is the example in `pusht_zarr.py` the proper w... | https://github.com/huggingface/lerobot/issues/761 | closed | [] | 2025-02-22T02:35:36Z | 2025-02-25T19:39:08Z | null | pqrsqwewrty |
huggingface/trl | 2,922 | How to support multi-device VLLM inference in the GRPO Trainer | https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L439-L461
In the current GRPO implementation, VLLM can only run on a single GPU, which becomes a performance bottleneck. For example, in an 8-GPU setup, the remaining 7 GPUs have to wait for 1 GPU to complete i... | https://github.com/huggingface/trl/issues/2922 | open | [
"✨ enhancement",
"🏋 GRPO"
] | 2025-02-21T09:24:51Z | 2025-03-14T02:45:21Z | null | 0x404 |
huggingface/safetensors | 575 | How to change the model weights in safetensors? | ### Feature request
For example, I want to change some weight with shape [K,K,C] into [K,K,C/2], how can I achieve this hacking?
### Motivation
N/A
### Your contribution
N/A | https://github.com/huggingface/safetensors/issues/575 | open | [] | 2025-02-21T03:36:27Z | 2025-03-13T16:59:32Z | null | JulioZhao97 |
huggingface/transformers.js | 1,201 | Unable to convert Janus models to ONNX | ### Question
I see that @xenova has successfully export Janus-1.3B and Janus-Pro-1B to ONNX, presumably using some version of scripts/convert.py. We are interested in exporting Janus-Pro-7B to ONNX as well, but have not been able to do so using this script (nor any other path). Attempting to convert either of the prev... | https://github.com/huggingface/transformers.js/issues/1201 | open | [
"question"
] | 2025-02-20T17:55:00Z | 2025-08-19T12:55:58Z | null | turneram |
huggingface/datasets | 7,415 | Shard Dataset at specific indices | I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from... | https://github.com/huggingface/datasets/issues/7415 | open | [] | 2025-02-20T10:43:10Z | 2025-02-24T11:06:45Z | 3 | nikonikolov |
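A sketch of the boundary computation behind this request (independent of the datasets API, which would still need custom shard saving on top): greedily pack whole episodes into shards so that none is ever split.

```python
# Group episodes into shards of roughly target_rows rows each, never
# splitting an episode across two shards. Returns lists of episode indices.

def episode_shards(episode_lengths, target_rows):
    shards, current, rows = [], [], 0
    for ep, length in enumerate(episode_lengths):
        if current and rows + length > target_rows:
            shards.append(current)
            current, rows = [], 0
        current.append(ep)
        rows += length
    if current:
        shards.append(current)
    return shards
```

An episode longer than `target_rows` still gets its own shard rather than being split.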
huggingface/trl | 2,913 | How to specify the GPU used by vllm | https://github.com/huggingface/trl/blob/a92e00e810762548787fadd5c4a5e6fc13a4928a/trl/trainer/grpo_trainer.py#L392
I have an 8-GPUs server, of which only the last two GPUs are available, and I set CUDA_VISIBLE_DEVICE=6,7, the value of torch.cuda.device_count() is 2. I want to load vllm into GPU 6, and I set vllm_device=... | https://github.com/huggingface/trl/issues/2913 | closed | [
"❓ question"
] | 2025-02-20T10:32:30Z | 2025-02-21T03:14:13Z | null | xiaolizh1 |
huggingface/open-r1 | 381 | how to set sampling parameters when doing evaluation | As you said, you use greedy decoding to reproduce DeepSeek's evaluation results, but I get a different score, so something may not be aligned. I want to know how to set the sampling parameters and how to inspect them when I use 'evaluate.py' to do evaluation. | https://github.com/huggingface/open-r1/issues/381 | open | [] | 2025-02-20T08:41:26Z | 2025-02-24T06:57:59Z | null | ItGirls |
huggingface/open-r1 | 380 | How to set the cuda device for your data generation pipeline | Hi author, thanks for your work.
When I use your pipeline to generate a dataset (deepseek-ai/DeepSeek-R1-Distill-Qwen-7B),
I find I can not set the device with os.environ.

It is actually always on cuda:0; how can I set it correctly? ... | https://github.com/huggingface/open-r1/issues/380 | open | [] | 2025-02-20T07:06:44Z | 2025-02-20T07:06:44Z | null | Aristo23333 |
huggingface/transformers | 36,293 | Bug in v4.49 where the attention mask is ignored during generation (t5-small) | ### System Info
Hi all!
First, thank you very much for your hard work and making these features avalible.
I'm seeing a bug after updating to v4.49 where the output changes even though the attention mask should be masking padded values. Below is a script to reproduce the error.
It will tokenize two prompts, and then... | https://github.com/huggingface/transformers/issues/36293 | closed | [
"bug"
] | 2025-02-20T02:16:23Z | 2025-02-20T16:28:11Z | null | bdhammel |
huggingface/optimum-nvidia | 176 | How to run whisper after #133 | I see that previously, whisper could be run as follows: [https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py](https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py)
But after #133 the code... | https://github.com/huggingface/optimum-nvidia/issues/176 | open | [] | 2025-02-19T17:45:01Z | 2025-02-19T17:45:01Z | null | huggingfacename |
huggingface/peft | 2,388 | ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported. | ## Context
I'm fine-tuning the Qwen2.5-VL model with swift for data extraction using LoRA. I'm not sure what the correct way is to save and upload the adapter and be able to reload it correctly.
In short, I followed these steps:
```python
# load model
model, processor = get_model_tokenizer(
'Qwen/Qwen2.5-VL-3B-Inst... | https://github.com/huggingface/peft/issues/2388 | closed | [] | 2025-02-19T15:09:17Z | 2025-04-09T16:23:53Z | 8 | samuellimabraz |
huggingface/trl | 2,905 | How to use GRPOTrainer to train an LLM for code generation? What is the format of the dataset? | https://github.com/huggingface/trl/issues/2905 | open | [] | 2025-02-19T12:38:13Z | 2025-02-19T12:38:13Z | null | xiangxinhello | 
huggingface/open-r1 | 370 | how to train grpo on 2 nodes (16 GPUs) | How to train GRPO on 2 nodes (16 GPUs)? 10000 thanks for giving a successful example. | https://github.com/huggingface/open-r1/issues/370 | closed | [] | 2025-02-19T09:15:14Z | 2025-03-26T11:36:03Z | null | glennccc |
huggingface/finetrainers | 267 | How to save the best performing checkpoint during LoRA fine-tuning on Hunyuan Video? | In the HunyuanVideo training scripts, we can save checkpoints every 500 steps by passing `--checkpointing_steps 500`. The final model is saved through the following code:
```python
if accelerator.is_main_process:
transformer = unwrap_model(accelerator, self.transformer)
if self.args.training_type == "lora":
... | https://github.com/huggingface/finetrainers/issues/267 | open | [] | 2025-02-19T07:49:11Z | 2025-02-21T01:39:30Z | null | dingangui |
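A minimal sketch of "save only on improvement" that could sit on top of the periodic checkpointing described above (the metric and the actual save call are placeholders, not finetrainers API):

```python
# Track the best validation metric seen so far and report whether the
# current value improves on it; the caller saves a checkpoint when it does.

class BestCheckpointTracker:
    def __init__(self, mode="min"):
        self.best = float("inf") if mode == "min" else float("-inf")
        self.mode = mode

    def is_improvement(self, metric):
        better = metric < self.best if self.mode == "min" else metric > self.best
        if better:
            self.best = metric
        return better

tracker = BestCheckpointTracker(mode="min")
# inside the training loop, hypothetically:
#   if tracker.is_improvement(val_loss):
#       save_lora_checkpoint(...)   # placeholder for the script's save path
```

This keeps the `--checkpointing_steps` behavior intact while retaining only the best-performing weights separately.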
huggingface/lerobot | 748 | [pi0] confusion about the state embedding dimension in `embed_suffix` | ### System Info
```Shell
- `lerobot` version: 0.1.0
- Platform: Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Dataset version: 3.2.0
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Cuda version: 12040
- Using GPU in script?... | https://github.com/huggingface/lerobot/issues/748 | closed | [
"question",
"policies",
"stale"
] | 2025-02-19T03:33:01Z | 2025-10-20T02:31:45Z | null | IrvingF7 |
huggingface/transformers.js | 1,198 | whisper: how to get streaming word level timestamps? (automatic-speech-recognition) | ### Question
## Goal
- streaming
- word level timestamps
## Issue
`on_chunk_start` / `on_chunk_end` are not called when using `return_timestamps: "word"`.
These callbacks only provide timestamps with `return_timestamps: true`
I also tried to decode tokens, as I’ve seen it in the demo, but that uses callbacks that n... | https://github.com/huggingface/transformers.js/issues/1198 | open | [
"question"
] | 2025-02-18T15:29:42Z | 2025-02-20T04:45:48Z | null | getflourish |
huggingface/diffusers | 10,817 | auto_pipeline missing SD3 contol nets | ### Describe the bug
Hey, auto_pipeline seesm to be missing the control nets variants for SD3
venv\Lib\site-packages\diffusers\pipelines\auto_pipeline.py
### Reproduction
Load an sd3 model checkpoint with a controlnet loading any of the auto pipes you will just get the none control net variations as its not set in ... | https://github.com/huggingface/diffusers/issues/10817 | closed | [
"bug",
"help wanted",
"contributions-welcome"
] | 2025-02-18T12:54:40Z | 2025-02-24T16:21:03Z | 3 | JoeGaffney |
huggingface/lerobot | 746 | How should I run the model on my own datasets in different envs which is not clearly mentioned in the README? | I want to run the diffusion model on my own real world arms datasets, which are different from the example env and input format in observation and action dims.
I've seem some yaml files to store these parameters in earlier version of the repo, but I can't find it in the newest version of the repo. So should I write th... | https://github.com/huggingface/lerobot/issues/746 | closed | [
"question",
"policies",
"dataset",
"stale"
] | 2025-02-18T12:33:07Z | 2025-10-19T02:32:17Z | null | shi-akihi |
huggingface/lerobot | 741 | Inquiry on Implementing NoMaD Model (Transformers and Diffusion Policy) | I am planning to implement the NoMaD model, which combines Transformers and Diffusion Policy, within the LeRobot project. Before proceeding, I wanted to check if anyone else is currently working on or has already started implementing this model.
For reference, here are the relevant resources:
Website: https://general... | https://github.com/huggingface/lerobot/issues/741 | closed | [
"question",
"stale"
] | 2025-02-17T19:57:23Z | 2025-10-08T20:56:42Z | null | vaishanth-rmrj |
huggingface/lerobot | 738 | convert simulation data of insertion from v1 to v2 | I cannot convert using the file (datasets/v2/convert_dataset_v1_to_v2.py) which requires robotconfig which I don't have
I just want to convert your data on lerobot/act_aloha_sim_transfer_cube_human | https://github.com/huggingface/lerobot/issues/738 | closed | [
"question",
"dataset",
"stale"
] | 2025-02-17T11:00:38Z | 2025-10-08T08:59:52Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/open-r1 | 340 | About the data using in sft, how to set SFTConfig.dataset_text_field? | how to use the HuggingFaceH4/Bespoke-Stratos-17k in sft.
I find there are two items in the data, "system" and "conversations". So, when I download this data and to finetune a LLM such as Qwen2.5-1.5B-Instruct, how to organize the data, in trl SFTConfig has a default parameter named dataset_text_field, it's default va... | https://github.com/huggingface/open-r1/issues/340 | open | [] | 2025-02-17T07:06:14Z | 2025-02-20T08:59:49Z | null | ItGirls |
huggingface/finetrainers | 264 | How to set --precompute_conditions for CogvideoI2V training? | cause i don't find this feature in Image2Video training.
does it exist? | https://github.com/huggingface/finetrainers/issues/264 | open | [] | 2025-02-17T06:00:50Z | 2025-03-05T03:49:05Z | null | BlackTea-c |
huggingface/diffusers | 10,805 | is there inpainiting dataset and parameters example provided for xl training? | **What API design would you like to have changed or added to the library? Why?**
**What use case would this enable or better enable? Can you give us a code example?**
Hi patil-suraj @patil-suraj , appreciated for the convenient script ! Is there any code example and dataset example to run the script: https://github.c... | https://github.com/huggingface/diffusers/issues/10805 | closed | [] | 2025-02-17T01:56:14Z | 2025-02-17T02:03:09Z | 2 | fire2323 |
huggingface/gsplat.js | 109 | Info request: How to update individual points in splat? | I would like to update position of individual points dynamically in order to create animations and effects.
What would be the optimal way to do it?
| https://github.com/huggingface/gsplat.js/issues/109 | open | [] | 2025-02-16T18:11:14Z | 2025-02-16T18:43:23Z | null | sjovanovic |
huggingface/diffusers | 10,803 | SANARubber a flexible version of SANA with i2i and multidiffusion/regional diffusion | ### Model/Pipeline/Scheduler description
I made a pipeline that is as reliable as the basic SANA pipeline but more flexible by making it run an array of functions which runs everything the og pipeline does. this can make easy combinations if necessary.
here's the link, enjoy
https://github.com/alexblattner/SANARubbe... | https://github.com/huggingface/diffusers/issues/10803 | open | [
"stale"
] | 2025-02-16T15:08:11Z | 2025-03-19T15:03:31Z | 1 | alexblattner |
huggingface/candle | 2,774 | Dumb Question: How to do forward hooks ? | For example I want to extract activations of intermediate layers. How do I register forward hooks similar to PyTorch or is there a similar/comparable paradigm in candle for this ? | https://github.com/huggingface/candle/issues/2774 | open | [] | 2025-02-16T12:41:26Z | 2025-02-16T12:41:26Z | null | pzdkn |
huggingface/diffusers | 10,799 | Effective region mask for controlnet | Hi, I just want to ask is there any way to use controlnet with mask like [this](https://github.com/Mikubill/sd-webui-controlnet/discussions/2831)
As you know comfyui, webui support effective region (mask for controlnet affect).
But I can't find how to do this with diffusers. | https://github.com/huggingface/diffusers/issues/10799 | closed | [
"stale"
] | 2025-02-15T17:42:20Z | 2025-04-03T04:01:37Z | 8 | Suprhimp |
huggingface/swift-coreml-diffusers | 102 | Question: how to use in my own swift project for inference? | How would I run diffusers on device on all apple devices in my swift Xcode project? | https://github.com/huggingface/swift-coreml-diffusers/issues/102 | open | [] | 2025-02-15T15:56:36Z | 2025-02-15T15:56:36Z | null | SpyC0der77 |
huggingface/transformers.js | 1,194 | How do I know which ONNX transformation models are available? (Errors when loading models with CDN) | ### Question
I am using a CDN to load the models, as shown in the code below.
I filtered the models in HuggingFace the way you recommend (text-generation, transformers.js) and put the id of the model I looked up. As I understand it, to change the model, I only need to change the model id.
However, I get an error for... | https://github.com/huggingface/transformers.js/issues/1194 | open | [
"question"
] | 2025-02-15T10:31:32Z | 2025-02-16T14:02:08Z | null | mz-imhj |
huggingface/open-r1 | 333 | how to use tensorboard instead of wandb? | | https://github.com/huggingface/open-r1/issues/333 | closed | [] | 2025-02-15T08:00:06Z | 2025-02-15T08:02:35Z | null | ngrxmu |
huggingface/diffusers | 10,796 | Docs for HunyuanVideo LoRA? | ### Describe the bug
As it seems like LoRA loading on HunyuanVideo has been implemented, I wonder where I can find the docs on this? Are they missing?
### Reproduction
Search for HunyuanVideo and LoRA
### Logs
```shell
```
### System Info
As it is the online docs...
### Who can help?
@stevhliu @sayakpaul | https://github.com/huggingface/diffusers/issues/10796 | closed | [
"bug",
"stale"
] | 2025-02-15T04:31:34Z | 2025-06-10T20:52:28Z | 9 | tin2tin |
huggingface/open-r1 | 328 | How to set generation sampling parameters? | Need to use deepseek reference settings of temperature=0.6, top_p=0.95.
Greedy sampling does poorly on AIME:
## r1-1.5B
- AIME24: 23.33%
Tried to refer to lighteval docs and ran into issues using model config:
```
model: # Model specific parameters
base_params:
model_args: "pretrained=Qwen/Qwen2.5-7B-Instruct... | https://github.com/huggingface/open-r1/issues/328 | open | [] | 2025-02-14T21:42:28Z | 2025-02-20T03:28:53Z | null | rawsh |
huggingface/trl | 2,864 | How to train GPRO on 2 GPUs, one for training, one for vllm | ### Reproduction
When I use `Qwen2.5-3B-instruct` to train GRPO, the device for vllm always appear OOM when loading weights. II used two GPUs with 32GB of memory, one device for training, another for vllm. I dont know why a 3B model using so much memory on `device 1`
. | https://github.com/huggingface/trl/issues/2864 | closed | [] | 2025-02-14T12:17:46Z | 2025-03-24T15:04:11Z | 2 | SpeeeedLee |
huggingface/optimum | 2,189 | PEFT to ONNX conversion | ### System Info
```shell
Hello!
I have a fine-tuned LLM model from Hugging Face saved in PEFT format, and it’s about 2.1 GB. When we convert it to ONNX, its size nearly doubles to about 4.1 GB. What causes this significant increase in model size after converting from PEFT to ONNX? Is there any bug under this conversi... | https://github.com/huggingface/optimum/issues/2189 | open | [
"bug"
] | 2025-02-13T18:21:05Z | 2025-03-10T13:58:28Z | 2 | morteza89 |
huggingface/agents-course | 113 | Show how to use Inference Providers for inference | Can be helpful for students to explore different models easily.
| https://github.com/huggingface/agents-course/issues/113 | open | [] | 2025-02-13T07:46:01Z | 2025-02-13T08:04:58Z | null | pcuenca |
huggingface/lerobot | 718 | Hand-Eye Calibration for LeRobot | Hello,
I am starting a project where I plan to use LeRobot for pick-and-place tasks utilizing classical robotics and vision techniques. I am wondering if anyone has experience with performing hand-eye calibration for this robot.
My major concern is that the high-mounted camera is usually parallel to the arm, which may ... | https://github.com/huggingface/lerobot/issues/718 | closed | [
"question",
"stale"
] | 2025-02-12T05:44:09Z | 2025-12-21T02:59:43Z | null | Akumar201 |
huggingface/optimum-neuron | 782 | Docs on how to compile a pre-trained transformer | Hello,
I am experimenting with Transformers and trying to run them on AWS Inferentia.
I checked the official [docs](https://huggingface.co/docs/optimum-neuron/index) but I could not find a clear answer to my current problem.
I currently have a customized model based on the [ALBERT transformer](https://huggingface.co... | https://github.com/huggingface/optimum-neuron/issues/782 | closed | [
"Stale"
] | 2025-02-11T23:36:13Z | 2025-03-20T08:05:40Z | null | efemaer |
huggingface/diffusers | 10,772 | Sana Controlnet Support | **Is your feature request related to a problem? Please describe.**
The first controlnet for Sana has appeared, so the feature is to add the sana controlnet to the diffusers pipeline https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md
**Describe the solution you'd like.**
Be able to use the sana cont... | https://github.com/huggingface/diffusers/issues/10772 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | 2025-02-11T22:39:10Z | 2025-04-13T13:49:40Z | 5 | jloveric |
huggingface/smolagents | 610 | Is this normal? Im getting this a lot | Hey, is this normal?

also, out: None is this ok as well?? | https://github.com/huggingface/smolagents/issues/610 | closed | [
"question"
] | 2025-02-11T22:05:27Z | 2025-03-19T07:12:32Z | null | Mhdaw |
huggingface/agents-course | 77 | [QUESTION] Why am I able to select multiple options in Quick Quiz? | In quick quizzes as there is a single answer correct, shouldn't it be like only be able to choose a single option instead of being able select all at once to see correct answer?
| https://github.com/huggingface/agents-course/issues/77 | closed | [
"question"
] | 2025-02-11T17:35:31Z | 2025-02-13T07:20:59Z | null | Devrajsinh-Gohil |
huggingface/agents-course | 66 | [QUESTION] About the **Thought: Internal Reasoning and the Re-Act Approach** section of UNIT 1 | I am a bit confused about the ReAct prompting example at the end of the **Thought: Internal Reasoning and the Re-Act Approach** section in Unit 1. The figure label describes it as an example of **ReAct**, but the image itself mentions "Zero-shot CoT." Could you please take a look at this section and clarify? I would re... | https://github.com/huggingface/agents-course/issues/66 | closed | [
"question"
] | 2025-02-11T03:54:26Z | 2025-02-13T07:30:13Z | null | saidul-islam98 |
huggingface/datasets | 7,390 | Re-add py.typed | ### Feature request
The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here?
### Motivation
MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be goo... | https://github.com/huggingface/datasets/issues/7390 | open | [
"enhancement"
] | 2025-02-10T22:12:52Z | 2025-08-10T00:51:17Z | 1 | NeilGirdhar |
huggingface/lerobot | 707 | is there option to run on parallel gpu | I have 2 gpus 4090 I wonder if there is an option to run on parallel while finetuning the model
I have found this parameter here

but I don't actually understand what do you mean by mp
so if there is option for parallel gpu pl... | https://github.com/huggingface/lerobot/issues/707 | closed | [
"question"
] | 2025-02-10T09:34:13Z | 2025-05-14T20:51:43Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/lerobot | 706 | adapt_to_pi_aloha parameter | I am finetuning pi0 on a static aloha dataset and I found the following parameter : adapt_to_pi_aloha : false
in /lerobot/common/policies/pi0/configuration_pi0.py
but when I set it to true the first loss increased from 0.17 to 4.7
should I set it to true or not knowing that I want the predicted actions to be in alo... | https://github.com/huggingface/lerobot/issues/706 | open | [
"question",
"configuration"
] | 2025-02-10T09:24:45Z | 2025-07-24T08:15:35Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/chat-ui | 1,708 | Generation failed occur | when I ask model then get generation error

using base model is llama3 -1b
below code is my .env.local code
 | https://github.com/huggingface/chat-ui/issues/1708 | open | [
"support"
] | 2025-02-10T08:12:56Z | 2025-02-12T07:48:47Z | 5 | mondayjowa |
huggingface/open-r1 | 260 | How to use tensor_parallel_size for vllm in GRPO? | GRPO use vllm to load reference model for data sampling , The limitation is that tensor parallel are not supported.
What if the reference model is larger than One GPU can hold, for example, 72B with 40GB's H800,
Is there any setting we can set the tensor_parallel_size for vllm params?
```
if self.accelerator.... | https://github.com/huggingface/open-r1/issues/260 | open | [] | 2025-02-10T07:17:07Z | 2025-02-20T12:21:15Z | null | bannima |
huggingface/trl | 2,814 | How to use tensor_parallel_size for vllm reference in GRPO? | GRPO use vllm to load reference model for data sampling , The limitation is that tensor parallel are not supported.
What if the reference model is larger than One GPU can hold, for example, 72B with 40GB's H800,
Is there any setting we can set the tensor_parallel_size for vllm params?
```
if self.accelerator... | https://github.com/huggingface/trl/issues/2814 | open | [
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-10T07:09:47Z | 2025-03-04T11:40:13Z | null | bannima |
huggingface/diffusers | 10,755 | Difference in Output When Using PIL.Image vs numpy.array for Image and Mask Input. | hi.
I get different results when providing image and mask as input using PIL.Image versus numpy. array. Why does this happen?
Is there an issue with my normalization method?
| pillow | array |
|---|---|
|  | ![Image](https://gith... | https://github.com/huggingface/diffusers/issues/10755 | open | [
"stale"
] | 2025-02-10T05:24:27Z | 2025-03-12T15:03:12Z | 2 | purple-k |
huggingface/datasets | 7,387 | Dynamic adjusting dataloader sampling weight | Hi,
Thanks for your wonderful work! I'm wondering is there a way to dynamically adjust the sampling weight of each data in the dataset during training? Looking forward to your reply, thanks again. | https://github.com/huggingface/datasets/issues/7387 | open | [] | 2025-02-10T03:18:47Z | 2025-03-07T14:06:54Z | 3 | whc688 |
huggingface/trl | 2,813 | What is the minimum GPU requirement in gigabytes for TRL intensive training? | | https://github.com/huggingface/trl/issues/2813 | open | [] | 2025-02-10T02:52:07Z | 2025-02-11T08:41:56Z | null | lonngxiang |