| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/trl | 3,109 | where is file https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py | ### Reproduction
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
### System Info
https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py
### Checklist
- [x] I have checked that my issue isn't already filed (see [ope... | https://github.com/huggingface/trl/issues/3109 | closed | [
"🐛 bug",
"🏋 SFT"
] | 2025-03-19T02:20:26Z | 2025-03-19T02:22:23Z | null | zh794390558 |
pytorch/xla | 8,853 | Have documentation to point to all our environment variables and their meaning | ## 📚 Documentation
Prepare documentation to point to all our environment variables and their meaning. This work should be a forcing function to (1) make the yaml file up to date, (2) rename it to something like `env_variable_definitions.yaml`, (3) start a workstream to trim down on these env variables to avoid usab... | https://github.com/pytorch/xla/issues/8853 | open | [
"usability",
"documentation"
] | 2025-03-19T00:23:51Z | 2025-03-19T00:26:22Z | 1 | miladm |
pytorch/TensorRT | 3,446 | ValueError: Invalid input type <class 'bool'> encountered when compiling FLUX.1-dev model with Torch-TensorRT | ## ❓ Question
When trying to compile the FLUX.1-dev model using Torch-TensorRT following the official example/blog post, I'm encountering a `ValueError` during the `torch_tensorrt.dynamo.compile()` step. The error suggests there's an issue with input parsing where it's encountering a boolean value that it doesn't know... | https://github.com/pytorch/TensorRT/issues/3446 | open | [
"question"
] | 2025-03-18T21:55:16Z | 2025-03-21T23:57:54Z | null | yachty66 |
huggingface/transformers.js | 1,245 | QuestionAnsweringOutput does not return start/end index | ### Question
Question/Answering pipeline does not seem to return start/end index.
console output example
``` { answer: 'anywhere', score: 0.8719829671013909 }```
source code in pipeline.js
```
class QuestionAnsweringPipeline ...
// TODO add start and end?
// NOTE: HF returns character index
toRetu... | https://github.com/huggingface/transformers.js/issues/1245 | open | [
"question"
] | 2025-03-18T21:20:25Z | 2025-03-18T21:20:25Z | null | sleep9 |
huggingface/transformers.js | 1,243 | Transformer.js compatibility with Angular17 | ### Question
I want to add transformers.js to an Angular 17 project. I'm getting several errors; can someone guide me on how to add transformers.js to an Angular project? | https://github.com/huggingface/transformers.js/issues/1243 | open | [
"question"
] | 2025-03-18T16:15:30Z | 2025-03-24T21:27:11Z | null | AnuragPant01 |
huggingface/diffusers | 11,108 | Is there a way to generate a single image using multiple GPUs? | This is related to #2977 and #3392, but I would like to know how to generate a single image using multiple GPUs. If such a method does not exist, I would also like to know if Accelerate's [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-p... | https://github.com/huggingface/diffusers/issues/11108 | closed | [
"stale"
] | 2025-03-18T13:43:05Z | 2025-05-02T21:00:31Z | 12 | suzukimain |
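Not a definitive answer to the issue above, but diffusers does expose a component-level form of multi-GPU placement that is easy to miss: passing `device_map="balanced"` to `from_pretrained` spreads the pipeline's models across the visible GPUs. A minimal sketch, assuming an SDXL checkpoint; note this shards components, not a single forward pass:

```python
# Sketch: "balanced" device_map places the pipeline's components (UNet, VAE,
# text encoders) on different GPUs; it is model sharding, not tensor parallelism.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="balanced",  # spread components across all visible GPUs
)
image = pipe("an astronaut riding a horse").images[0]
image.save("out.png")
```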
huggingface/lerobot | 876 | Multiple GPU Training Support | Hi, lerobot team!
Thanks for the great work and organized content.
Are there plans to support PyTorch's Distributed Data Parallel (DDP) training in this framework? | https://github.com/huggingface/lerobot/issues/876 | closed | [
"enhancement",
"question",
"stale"
] | 2025-03-18T12:44:43Z | 2025-10-07T02:26:45Z | null | kingchou007 |
huggingface/open-r1 | 521 | How to use my own dataset in sft? | Could you please give an instruction/demo on how to use my own dataset (any column name) to apply sft? | https://github.com/huggingface/open-r1/issues/521 | open | [] | 2025-03-18T11:38:19Z | 2025-03-18T14:21:36Z | null | dongdongzhaoUP |
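For questions like the one above, a common pattern (a hedged sketch, not an official open-r1 recipe; the column names below are invented) is to map arbitrary columns into the single `text` field that TRL's `SFTTrainer` consumes:

```python
# Sketch: collapse custom columns into the "text" field SFTTrainer expects.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

raw = Dataset.from_dict({"my_question": ["What is 2+2?"], "my_answer": ["4"]})

def to_text(example):
    # Any formatting works as long as it yields one training string per row.
    return {"text": f"Question: {example['my_question']}\nAnswer: {example['my_answer']}"}

train_dataset = raw.map(to_text, remove_columns=raw.column_names)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # any causal LM checkpoint
    args=SFTConfig(output_dir="sft-out", dataset_text_field="text"),
    train_dataset=train_dataset,
)
trainer.train()
```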
huggingface/diffusers | 11,103 | Which repo should I use for LTX-Video 0.9.5 diffusers | I see the changes are merged
Checked repo and it is empty
https://huggingface.co/Lightricks/LTX-Video-0.9.5/tree/main
Noticed in test pipeline it is
repo = "YiYiXu/ltx-95"
So can I safely assume that the above can be used?
@yiyixuxu | https://github.com/huggingface/diffusers/issues/11103 | closed | [] | 2025-03-18T10:50:41Z | 2025-03-18T11:00:34Z | 2 | nitinmukesh |
huggingface/trl | 3,103 | How are LoRA parameters used in vLLM generation? (_move_model_to_vllm in GRPO trainer) | From the following code, I do not see the process of moving LoRA training parameters to vLLM. How is it guaranteed that generation uses the latest parameters? Can someone help explain?
<img width="1123" alt="Image" src="https://github.com/user-attachments/assets/62cacf0a-0197-4210-b326-c4e24b9b6701" />
And I printed the vllm l... | https://github.com/huggingface/trl/issues/3103 | closed | [
"❓ question",
"⚡ PEFT"
] | 2025-03-18T09:24:48Z | 2025-03-24T18:32:19Z | null | cuiyuhao1996 |
pytorch/xla | 8,847 | How to compile torch-xla from source? | ## ❓ Questions and Help
I have reviewed the relevant materials on torch-xla but have not found a clear guide on how to compile torch-xla from source. The instructions mentioned on [this page](https://pytorch.org/xla/master/contribute/bazel.html) are somewhat disorganized. Could you provide a detailed compilation proces... | https://github.com/pytorch/xla/issues/8847 | open | [
"question",
"build"
] | 2025-03-18T02:31:05Z | 2025-03-24T17:40:13Z | null | south-ocean |
pytorch/xla | 8,846 | Need a documentation page that always hosts the latest stable documentation | ## 📚 Documentation
PyTorch has https://pytorch.org/docs/stable/index.html that always contains the documentation for the latest stable branch.
The same URL variant doesn't work for PyTorch/XLA https://pytorch.org/xla/release/stable/index.html
| https://github.com/pytorch/xla/issues/8846 | open | [
"enhancement",
"documentation"
] | 2025-03-18T00:19:41Z | 2025-05-01T07:46:15Z | 3 | tengyifei |
pytorch/vision | 8,980 | nvjpeg missing from all linux GPU wheel build jobs | Linux CUDA: https://github.com/pytorch/vision/actions/runs/13901104094/job/38892841516?pr=8601
Linux aarch64 CUDA: https://github.com/pytorch/vision/actions/runs/13901104115/job/38892844332?pr=8601
Failing the smoke test part with:
```
+ echo 'pytorch/vision/test/smoke_test.py found'
+ conda run -p /__w/_temp/conda_e... | https://github.com/pytorch/vision/issues/8980 | closed | [] | 2025-03-17T15:05:04Z | 2025-03-18T11:28:18Z | 1 | NicolasHug |
huggingface/datasets | 7,457 | Document the HF_DATASETS_CACHE env variable | ### Feature request
Hello,
I have a use case where my team is sharing models and datasets in a shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`... | https://github.com/huggingface/datasets/issues/7457 | closed | [
"enhancement"
] | 2025-03-17T12:24:50Z | 2025-05-06T15:54:39Z | 4 | LSerranoPEReN |
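For reference, the variable the issue asks to document already works; the one subtlety is that it must be set before `datasets` is imported, since the cache location is read at import time. A sketch with a hypothetical shared path:

```python
# Sketch: point the datasets cache at a shared directory. Set the variable
# before importing `datasets`, because it is read at import time.
import os
os.environ["HF_DATASETS_CACHE"] = "/shared/team/hf_datasets_cache"  # hypothetical path

from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # arrow files land in the shared cache
```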
pytorch/pytorch | 149,315 | How to Retain Computational Graph in torch.func.jvp() for Parameter Gradients? | ### 🚀 The feature, motivation and pitch
## Help Needed: Making `torch.func.jvp` Work with `torch.autograd.grad`
Hi all,
Thanks so much for all the functionalities of pytorch! I'm trying to make the following code valid (and efficient):
```python
output_values, output_grads = torch.func.jvp(model, input_value, inpu... | https://github.com/pytorch/pytorch/issues/149315 | open | [
"module: autograd",
"triaged",
"module: functorch"
] | 2025-03-17T12:10:21Z | 2025-06-24T14:30:39Z | null | edouardoyallon |
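One workaround worth noting for this kind of question (a sketch, not necessarily the recommended fix): the older `torch.autograd.functional.jvp` computes the JVP via double backward and accepts `create_graph=True`, which keeps the graph alive so the JVP output can be differentiated with respect to parameters afterwards:

```python
# Sketch: create_graph=True makes the JVP result differentiable, so
# torch.autograd.grad can then produce parameter gradients from it.
import torch

model = torch.nn.Linear(3, 2)
x = torch.randn(1, 3)
tangent = torch.randn(1, 3)

out, jvp_out = torch.autograd.functional.jvp(
    lambda inp: model(inp), (x,), (tangent,), create_graph=True
)

loss = jvp_out.pow(2).sum()
param_grads = torch.autograd.grad(loss, list(model.parameters()))
print([g.shape for g in param_grads])
```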
huggingface/transformers | 36,762 | When what needs to be loaded is already in the cache directory, there is no need to make a request to the remote | ### Feature request
When what needs to be loaded is already in the cache directory, there should be no need to make a request to the remote.
### Motivation
I noticed that when `AutoTokenizer` loads a file using `from_pretrained`, it first tries to load it from a cached directory when `pretrained_model_name_or_path` is a model_id... | https://github.com/huggingface/transformers/issues/36762 | closed | [
"Feature request"
] | 2025-03-17T11:20:24Z | 2025-03-19T15:49:04Z | null | JinFish |
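For context on the request above, transformers already has an opt-in escape hatch that skips the remote check, either per call or globally. A minimal sketch:

```python
# Sketch: local_files_only=True loads straight from the cache and raises if the
# files are missing, with no network request. Setting the HF_HUB_OFFLINE=1
# environment variable achieves the same thing globally.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
```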
huggingface/diffusers | 11,086 | RuntimeError after using apply_group_offloading on diffusers: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | Can anyone help me?
I used WanX's diffusers and used apply_group_offloading according to url: https://huggingface.co/docs/diffusers/main/en/optimization/memory.
The code is as follows:
```
image_encoder = CLIPVisionModel.from_pretrained(local_model_path, subfolder="image_encoder", torch_dtype=torch.float32)
vae = Auto... | https://github.com/huggingface/diffusers/issues/11086 | open | [
"stale"
] | 2025-03-17T11:03:48Z | 2025-04-16T15:03:36Z | 5 | tiga-dudu |
huggingface/trl | 3,093 | How to use a custom function as the reward model for PPO training | The new version of TRL's PPOtrainer requires Module as the reward model, but I need a custom function calculation to calculate the reward. I tried to lower the TRL version to 0.11.4, but the old version does not seem to support the peft model. I get the following error:
ValueError: model must be a PreTrainedModelWrappe... | https://github.com/huggingface/trl/issues/3093 | open | [
"❓ question",
"🏋 PPO",
"⚡ PEFT"
] | 2025-03-16T09:02:25Z | 2025-03-20T10:33:02Z | null | JWQZ |
huggingface/ai-deadlines | 19 | How to know the rankings of a conference? | @NielsRogge, may I know where we can get the conference rankings? | https://github.com/huggingface/ai-deadlines/issues/19 | closed | [] | 2025-03-15T18:32:34Z | 2025-03-15T21:45:02Z | null | julurisaichandu |
huggingface/diffusers | 11,063 | prepare_attention_mask - incorrect padding? | ### Describe the bug
I'm experimenting with attention masking in Stable Diffusion (so that padding tokens aren't considered for cross attention), and I found that UNet2DConditionModel doesn't work when given an `attention_mask`.
https://github.com/huggingface/diffusers/blob/8ead643bb786fe6bc80c9a4bd1730372d410a9df/sr... | https://github.com/huggingface/diffusers/issues/11063 | open | [
"bug",
"stale"
] | 2025-03-14T19:01:01Z | 2025-04-14T15:03:14Z | 2 | cheald |
huggingface/transformers.js | 1,237 | Using pipeline API in Mobile Devices | ### Question
How can I get the pipeline running on mobile devices?
Like here:
pipeline('background-removal', 'briaai/RMBG-1.4', { device: "webgpu" })
Or does it depend on the model available?
I can't find documentation about the pipeline API options, like 'device' and other params... | https://github.com/huggingface/transformers.js/issues/1237 | open | [
"question"
] | 2025-03-14T17:55:27Z | 2025-05-11T19:58:39Z | null | LuSrodri |
huggingface/autotrain-advanced | 869 | How to fine-tune a custom model for Ollama? | Probably a stupid question, but I'm trying to upload a .csv dataset and fine-tune an 8B model in Autotrain. But when I add the model name taken from Ollama (e.g. deepseek-r1:8b or DeepSeek-R1-Distill-Llama-8B-NexaQuant) and try to train, I get an error.
validated_self = self.__pydantic_validator__.validate_python(d... | https://github.com/huggingface/autotrain-advanced/issues/869 | closed | [
"stale"
] | 2025-03-14T14:46:23Z | 2025-05-03T15:01:33Z | null | nigelp |
huggingface/diffusers | 11,060 | `prepare_image` in Kandinsky pipelines doesn't support `torch.Tensor` | Hi, I want to report a bug in Kandinsky pipelines.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L413-L420
According to the above contents, elements in `image` can be either `PIL.Image.Image` or `torch.Tensor`.
h... | https://github.com/huggingface/diffusers/issues/11060 | closed | [
"good first issue",
"help wanted"
] | 2025-03-14T10:34:30Z | 2025-04-21T18:41:10Z | 1 | dk-hong |
huggingface/Math-Verify | 39 | How to choose ExprExtractionConfig() and LatexExtractionConfig() | Hi. Thanks for your awesome tool.
I want to ask how I should set the configuration when the answer is either LaTeX or Expr. I found that in the case below (without $$ $$), the output is false when the expected result is true.
```python
from math_verify import parse, verify
gold = parse("\\frac{\sqrt... | https://github.com/huggingface/Math-Verify/issues/39 | closed | [] | 2025-03-13T23:36:27Z | 2025-04-28T20:42:03Z | null | Zhuofeng-Li |
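A sketch of the usual answer to this kind of question (hedged; the exact equivalence behavior depends on the Math-Verify version): pass both extractors in `extraction_config` so either LaTeX or plain expressions can be matched, with list order expressing priority:

```python
# Sketch: allow both LaTeX and plain-expression answers to be extracted.
from math_verify import ExprExtractionConfig, LatexExtractionConfig, parse, verify

cfg = [LatexExtractionConfig(), ExprExtractionConfig()]
gold = parse("$$\\frac{1}{2}$$", extraction_config=cfg)
answer = parse("0.5", extraction_config=cfg)
print(verify(gold, answer))  # numeric equivalence: 1/2 == 0.5
```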
huggingface/diffusers | 11,055 | Training on unconditional image generation creates colorized images | ### Describe the bug
Hi, I'm trying to follow the tutorial from unconditional image generation on my own dataset, and I'm getting weirdly colored images. I originally thought it was due to RGB/BGR channel order, but I've switched it around and got the same result. Do you have any suggestions of how to fix it?
### Re... | https://github.com/huggingface/diffusers/issues/11055 | open | [
"bug",
"stale"
] | 2025-03-13T20:47:22Z | 2025-04-13T15:02:53Z | 1 | esizikova-fda |
huggingface/lerobot | 860 | Modify camera async_read/read API to return a dictionary instead of tuple for better compatability? | Currently the intel real sense camera api supports returning either a single rgb image or a rgb image and depth image as a 2-uple
https://github.com/huggingface/lerobot/blob/3c0a209f9fac4d2a57617e686a7f2a2309144ba2/lerobot/common/robot_devices/cameras/intelrealsense.py#L440-L443
However this is not super compatible t... | https://github.com/huggingface/lerobot/issues/860 | closed | [
"enhancement",
"question"
] | 2025-03-13T18:44:20Z | 2025-05-26T09:28:48Z | null | StoneT2000 |
huggingface/transformers.js | 1,230 | Using background-removal pipeline produces images with 50% opacity | ### Question
I have an issue using the background-removal pipeline. Some models return the exact same image, but at 50% opacity (RGBA: [X, Y, Z, 127]). Other models return an error like this: Uncaught Error: Unsupported model type: null transformers:1:670067.
How can I procede? | https://github.com/huggingface/transformers.js/issues/1230 | closed | [
"question"
] | 2025-03-13T17:00:13Z | 2025-03-25T22:28:37Z | null | LuSrodri |
huggingface/lerobot | 858 | Dataset conversion from v1.6 to v2.0 ❌❌❌ |
Hi @aliberts @Cadene
Thanks for your amazing work. I have one doubt: I forked the lerobot repo and am training some policies. Now I want to convert from v1.6 to v2.0, but my episodes are in .pth format, not parquet format. I checked the remaining issues and didn't find anything. Right now the conversion only takes parquet format....
"question",
"dataset",
"stale"
] | 2025-03-13T15:22:51Z | 2025-10-07T02:26:46Z | null | Kacchan16 |
huggingface/optimum | 2,215 | not able to convert DeepSeek-R1 into Onnx using optimum-cli | ### System Info
```shell
v1.24.0
```
### Who can help?
@michaelbenayoun
I'm trying to convert DeepSeek-R1 into a onnx format, but i'm being presented with
> ValueError: Loading deepseek-ai/DeepSeek-R1 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the c... | https://github.com/huggingface/optimum/issues/2215 | open | [
"bug"
] | 2025-03-13T07:07:10Z | 2025-05-13T11:13:36Z | 1 | volcano619 |
huggingface/trl | 3,066 | How to switch on multi-GPU for the GRPOTrainer? | Issue:
OOM errors during GRPO training - Need multi-GPU support for combined VRAM
Problem Description:
I'm encountering Out-of-Memory (OOM) errors while using GRPOTrainer to train reasoning capabilities similar to DeepSeek R1.
My Question:
How to switch on multi-GPU support for GRPOTrainer to utilize the combined VR... | https://github.com/huggingface/trl/issues/3066 | closed | [
"🏋 GRPO"
] | 2025-03-13T05:01:12Z | 2025-04-05T17:04:50Z | null | tjoymeed |
pytorch/pytorch | 149,096 | How to determine which part of torch.compile undergoes recompiling after caching | ### 🐛 Describe the bug
Thanks for the helpful blog: https://dev-discuss.pytorch.org/t/how-to-bring-compile-time-down-to-zero-our-plans-and-direction-may-14th-edition/2089
I am currently caching all 3 stages of the compiler but only seeing ~50% reduction in compile time.
How do I determine which part of the compilat... | https://github.com/pytorch/pytorch/issues/149096 | open | [
"triaged",
"oncall: pt2"
] | 2025-03-13T02:33:58Z | 2025-03-13T06:40:24Z | null | janak2 |
huggingface/agents-course | 314 | [QUESTION] agent.run(stream=True) How get finall result | agent = CodeAgent(
    tools=[],
    model=model,
    max_steps=10,
    verbosity_level=2
)
response = agent.run(
    """
    describe image
    """,
    images=image_urls,
    stream=True
)
print()??? | https://github.com/huggingface/agents-course/issues/314 | open | [
"question"
] | 2025-03-13T02:32:47Z | 2025-03-13T02:32:47Z | null | via007 |
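Not an official answer: with `stream=True`, `run()` returns a generator of step objects rather than the final string, so one simple pattern is to exhaust it and keep the last yielded item (assuming the final answer is yielded last; `agent` and `image_urls` are reused from the snippet above):

```python
# Hedged sketch: iterate the stream, keeping the last yielded item.
final = None
for step in agent.run("describe image", images=image_urls, stream=True):
    print(step)   # inspect intermediate steps as they arrive
    final = step  # after the loop this holds the last (final) item

print("final result:", final)
```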
pytorch/pytorch | 149,094 | How to skip backward specific steps in torch.compile | ### 🐛 Describe the bug
I couldn't find much documentation on how we can skip backward-specific steps in torch.compile/AOT autograd.
Some info would be helpful.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu | https://github.com/pytorch/pytorch/issues/149094 | open | [
"triaged",
"oncall: pt2"
] | 2025-03-13T02:12:44Z | 2025-03-17T23:55:31Z | null | janak2 |
huggingface/diffusers | 11,046 | flux pipeline inference with controlnet, inpainting, plus ip-adapter | ### Describe the bug
Hi, I would like to use the Flux pipeline, but for now I have GPU issues running the original Flux pipeline.
If I use the NF4 version, how can I set up the inference file with ControlNet, inpainting, and IP-Adapter?
Do I use Flux Control depth or canny with a mask and an IP-Adapter model? Or fluxcontrol, flu... | https://github.com/huggingface/diffusers/issues/11046 | open | [
"bug",
"stale"
] | 2025-03-12T20:14:01Z | 2025-04-12T15:02:52Z | 1 | john09282922 |
huggingface/lerobot | 854 | How to train diffusion policy in only state space, no images? | I have been having a lot of trouble trying to only train a model on purely a state space task so there are no images involved. I have already looked through every tutorial and most source code files and just can not get this working.
I have a script that creates a LeRobotDataset through human demonstrations. The scrip... | https://github.com/huggingface/lerobot/issues/854 | closed | [
"question",
"policies",
"stale"
] | 2025-03-12T16:01:19Z | 2025-10-26T02:30:57Z | null | Nicholas-Baldassini |
huggingface/diffusers | 11,045 | Crash when loading Flux Schnell 1 model with train_dreambooth_lora_flux | ### Describe the bug
When using the `Diffusers/example/dreambooth/train_dreambooth_lora_flux` script with the Flux Schnell 1 model, the process consistently crashes during the transformer shard loading at 33% (1/3), causing my entire Google JupyterLab kernel to crash.
**Question:** Is this related to using the Flux S... | https://github.com/huggingface/diffusers/issues/11045 | closed | [
"bug",
"stale"
] | 2025-03-12T15:08:11Z | 2025-05-07T15:18:15Z | 4 | rleygonie |
huggingface/diffusers | 11,043 | When will we be getting Quanto support for Wan 2.1? | The diffusers library for quantizers currently doesn't contain an entry for Quantro:
https://github.com/huggingface/diffusers/tree/main/src/diffusers/quantizers
Isn't this needed to perform requantization on a quantized Transformer for WAN 2.1?
Currently we can't do this due to missing Quanto quantizer after we've q... | https://github.com/huggingface/diffusers/issues/11043 | closed | [] | 2025-03-12T12:43:59Z | 2025-03-23T18:17:53Z | 2 | ukaprch |
huggingface/lerobot | 853 | How to customize adding another robot and manipulator? | Thanks for your great work! I now have a problem: how do I customize adding another robot and manipulator?
I have a 7-DOF bimanual manipulator robot powered by servo motors. I want to add it to lerobot so I can use this fantastic platform to collect data and train, especially the ACT and diffusion policies.
I have the... | https://github.com/huggingface/lerobot/issues/853 | closed | [
"question",
"robots"
] | 2025-03-12T11:39:19Z | 2025-10-08T20:16:23Z | null | meijie-jesse |
huggingface/smollm | 65 | How to set video size when fine tuning | Hi,
I've tried a bunch of variants but I can't seem to figure out how to set the video size. Currently, I have:
```py
processor.video_size = { "longest_edge": 128 }
processor.do_image_splitting = False
def sample_indices_fn(metadata, num_frames=None, fps=None, **kwargs):
return np.arange(0, 20, dtype=int)
m... | https://github.com/huggingface/smollm/issues/65 | open | [
"Video"
] | 2025-03-12T11:20:28Z | 2025-07-29T13:12:05Z | null | FredrikNoren |
huggingface/accelerate | 3,437 | Need help on how to disable enable_model_cpu_offload / enable_sequential_cpu_offload | During my testing, when each was used individually, I observed that
enable_sequential_cpu_offload requires 11 GB VRAM
enable_model_cpu_offload requires 8 GB VRAM
I am using Diffusers + nunchaku + sd_embed
Problem: sd_embed does not support enable_sequential_cpu_offload but support enable_model_cpu_offload
Requirement: ... | https://github.com/huggingface/accelerate/issues/3437 | closed | [] | 2025-03-12T09:29:08Z | 2025-03-12T10:10:33Z | null | nitinmukesh |
huggingface/diffusers | 11,042 | ZeroDivisionError when performing forward pass with UNet3DConditionModel | ### Describe the bug
# ZeroDivisionError when performing forward pass with UNet3DConditionModel
I'm encountering a ZeroDivisionError when attempting to perform a forward pass with the UNet3DConditionModel. This seems to be related to the num_attention_heads parameter being None, which causes self.inner_dim to be 0.
... | https://github.com/huggingface/diffusers/issues/11042 | closed | [
"bug"
] | 2025-03-12T09:26:01Z | 2025-03-13T02:00:12Z | 2 | txz32102 |
pytorch/executorch | 9,180 | Convert model.safetensors in order to be able to execute it with ExecuTorch: how to prepare the example input and dynamic shape information? | Hi!
I've fine-tuned the BERT model to use it for Named Entity Recognition.
Now I want to convert the resulting model.safetensors in order to be able to execute it with ExecuTorch, thanks to the explanation from a kind guy: https://dev-discuss.pytorch.org/t/what-is-the-correct-future-proof-way-of-deplo...
"module: user experience"
] | 2025-03-12T09:17:50Z | 2025-12-18T21:55:01Z | null | raphael10-collab |
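For the "example input and dynamic shape" part of this question, here is a minimal, self-contained sketch of the `torch.export` plumbing (the tiny module, names, and bounds are illustrative assumptions, not the poster's BERT model):

```python
# Sketch: example inputs are a tuple of tensors; dynamic dims are declared
# per-argument with torch.export.Dim.
import torch
from torch.export import Dim, export

class TinyTagger(torch.nn.Module):
    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        return input_ids.float().mean(dim=-1)  # stand-in for a real NER head

model = TinyTagger().eval()
example_inputs = (torch.randint(0, 30522, (1, 128)),)  # (batch, seq_len)

seq_len = Dim("seq_len", min=2, max=512)  # sequence length may vary at runtime
ep = export(model, example_inputs, dynamic_shapes={"input_ids": {1: seq_len}})
print(ep)
```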
huggingface/lerobot | 851 | Hello, I would like to ask if I can use my ROS2 MoveIt2 robotic arm? | Can it support ROS training? I believe this would be beneficial for ecosystem development. | https://github.com/huggingface/lerobot/issues/851 | open | [
"question"
] | 2025-03-12T07:39:51Z | 2025-08-04T19:29:03Z | null | Gates-456 |
huggingface/open-r1 | 502 | How to use vllm with 2 GPUs? | Just as GRPO OOM #475 stated, the vllm kv init is so large that 1 A100 80GB could not hold it, while I have 8*A100 in total.
However, only 1 GPU is allowed to be assigned to vllm, per `vllm_device: auto` or `ib/python3.10/site-packages/trl/trainer/grpo_trainer.py`.
How should I solve the issue? Would anybody know?
| https://github.com/huggingface/open-r1/issues/502 | open | [] | 2025-03-12T03:36:18Z | 2025-06-03T11:55:47Z | null | greatxue |
huggingface/diffusers | 11,036 | Why perform the following operations on the latent condition? | in the code :https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
line 395-404:
```
latents_mean = (
    torch.tensor(self.vae.config.latents_mean)
    .view(1, self.vae.config.z_dim, 1, 1, 1)
    .to(latents.device, latents.dtype)
)
latents_std = 1.0 / torch.tensor(self.va... | https://github.com/huggingface/diffusers/issues/11036 | closed | [] | 2025-03-12T02:32:09Z | 2025-03-15T02:40:13Z | 2 | trouble-maker007 |
pytorch/vision | 8,962 | Missing Windows Wheel for torchvision==0.11.2+cu111 | Hello Torchvision team,
We are attempting to install specific versions with CUDA 11.1 using .whl files from [torch_stable.html](https://download.pytorch.org/whl/cu111/torch_stable.html).
However, we can't find the required wheel for torchvision==0.11.2+cu111 for Windows (win_amd64.whl).
Could you provide guidance o... | https://github.com/pytorch/vision/issues/8962 | closed | [] | 2025-03-11T22:04:31Z | 2025-03-28T13:10:02Z | 2 | huang3527 |
huggingface/lerobot | 847 | Is there a Merge/Convert/Edit datasets function, or a way to train a model using different datasets? | Hey, everyone.
At the moment, we have this problem: we have recorded datasets with around 100 episodes each, but we would like to train our model with 1000 episodes. Unfortunately, we didn't find a way to load multiple datasets into a single policy training job. Is it even possible? If not, is there a way to merge a ... | https://github.com/huggingface/lerobot/issues/847 | closed | [
"question",
"policies",
"dataset"
] | 2025-03-11T17:25:08Z | 2025-10-17T12:09:32Z | null | runmaget |
huggingface/lerobot | 846 | How to convert my own dataset to LerobotDataset format? | Hi, I am new to Lerobot and have a dataset in my own format. I would like to convert it to the LerobotDataset format.
I referred to `lerobot/scripts/push_dataset_to_hub.py`, but it seems to be deprecated. Could you provide guidance or an updated method for converting custom datasets?
Thanks in advance! | https://github.com/huggingface/lerobot/issues/846 | closed | [
"question",
"dataset"
] | 2025-03-11T09:17:23Z | 2025-04-15T00:59:10Z | null | yilin404 |
pytorch/torchtitan | 951 | NaNs on step 1 of 405B model training | Does anyone have any tips on how to debug/prevent NaNs on step 1 during FSDP+TP training of the 405B model on 256 GPUs on the C4 dataset? | https://github.com/pytorch/torchtitan/issues/951 | closed | [] | 2025-03-11T07:00:12Z | 2025-03-28T01:47:28Z | 12 | githubsgi |
huggingface/open-r1 | 498 | How to Enable enforce_eager or Disable CUDA Graph in Evaluation | Evaluation code is currently using lighteval and vLLM for inference, and I would like to disable CUDA Graph by enabling options like ```enforce_eager```. However, I could not find a command-line argument for this in ```$MODEL_ARGS```. Additionally, setting it as an environment variable (e.g., VLLM_ENFORCE_EAGER) does n... | https://github.com/huggingface/open-r1/issues/498 | closed | [] | 2025-03-11T00:25:49Z | 2025-03-11T04:54:02Z | null | superdocker |
huggingface/diffusers | 11,020 | Multi-gpus Context Parallel training support? | Nowadays, the number of parameters in video generation models is increasing, and the video length is increasing. When training video models, it is difficult to fit a complete video sequence(200k~ tokens) on a single GPU. Some sequence parallel training technologies can solve this problem, such as the [fastvideo](https:... | https://github.com/huggingface/diffusers/issues/11020 | open | [] | 2025-03-10T11:45:30Z | 2025-07-18T13:05:08Z | 2 | yinian-lw |
huggingface/blog | 2,728 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.... | https://github.com/huggingface/blog/issues/2728 | open | [] | 2025-03-09T18:05:55Z | 2025-03-09T18:06:11Z | null | Umashankar86 |
huggingface/blog | 2,727 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.... | https://github.com/huggingface/blog/issues/2727 | closed | [] | 2025-03-09T18:04:48Z | 2025-03-09T18:05:03Z | null | Umashankar86 |
huggingface/datasets | 7,442 | Flexible Loader | ### Feature request
Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?
It can be something as simple as this one:
```
def load_hf_dataset(path_or_name):
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
... | https://github.com/huggingface/datasets/issues/7442 | open | [
"enhancement"
] | 2025-03-09T16:55:03Z | 2025-03-27T23:58:17Z | 3 | dipta007 |
huggingface/chat-ui | 1,751 | Analyze uploaded PDF files through OpenAI API | When I upload a PDF file and leverage it, I will get the base64 data. But I didn't find the code to process it in endpoints/openai, while it can handle the image base64 data. Besides, I failed to transfer it back to text. How can I analyze the file through OpenAI API?
 that the latest version of hf-hub is 0.4.2, but I can't find the 0.4.2 tag on GitHub. Could you tell me what is the commit ID corresponding to this version?
Sincerely suggest that you add a corresponding tag for each version release, which can effectively ... | https://github.com/huggingface/hf-hub/issues/99 | closed | [] | 2025-03-08T12:43:18Z | 2025-06-16T09:41:15Z | null | HairlessVillager |
huggingface/transformers | 36,613 | In "02_how_to_generate", code cell 1 has an error message | ### System Info
In "02_how_to_generate", code cell 1 has an error message but the rest works fine: ERROR: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, ... | https://github.com/huggingface/transformers/issues/36613 | closed | [
"bug"
] | 2025-03-08T07:46:39Z | 2025-04-16T08:03:04Z | null | kvutien |
pytorch/xla | 8,809 | MarkShardingFunction causes OOM when applied to model parameters | When tested in https://github.com/AI-Hypercomputer/torchprime/pull/144/files, if we shard parameters with `MarkShardingFunction.apply`, that causes Mixtral to OOM. Gradient HLO arrays end up living much longer than needed.
Shard both activations and model parameters with `MarkShardingFunction`: http://shortn/_vvNPYfxS... | https://github.com/pytorch/xla/issues/8809 | closed | [
"performance"
] | 2025-03-08T06:14:48Z | 2025-03-17T04:03:08Z | 3 | tengyifei |
huggingface/diffusers | 11,008 | Support wan2.1 video model? | ### Did you like the remote VAE solution?
Yes.
### What can be improved about the current solution?
Wan2.1 video model support is appreciated!
### What other VAEs you would like to see if the pilot goes well?
Wan2.1 video model support is appreciated!
### Notify the members of the team
@hlky @sayakpaul | https://github.com/huggingface/diffusers/issues/11008 | open | [
"stale"
] | 2025-03-08T04:21:33Z | 2025-05-09T15:03:47Z | 6 | kexul |
huggingface/trl | 3,028 | Distill teacher models where the vocab size of teacher and student is different | I am trying to distill a Qwen2.5-7B-Instruct to Qwen2.5-5B-Instruct using a sample code
```
from datasets import Dataset
from trl import GKDConfig, GKDTrainer
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
)
NUM_DUMMY_SAMPLES = 100
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B... | https://github.com/huggingface/trl/issues/3028 | open | [
"🏋 GKD"
] | 2025-03-08T00:29:01Z | 2025-10-29T04:15:50Z | null | shaunakjoshi12 |
huggingface/diffusers | 11,005 | pipeline_wan_i2v.py: minor discrepancy between arg default and docstring | ### Describe the bug
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
Line 447 (arg default):
```output_type: Optional[str] = "np",```
Line 496 (docstring):
```output_type (`str`, *optional*, defaults to `"pil"`):```
### Reproduction
n/a
### Logs
```shell
```
#... | https://github.com/huggingface/diffusers/issues/11005 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-03-07T16:37:48Z | 2025-04-24T18:49:38Z | 2 | rolux |
huggingface/finetrainers | 301 | How to train text-to-video generation model on different generation models using Disney dataset? | The current repository does not explicitly describe how to change training methods between t2v and i2v.
| https://github.com/huggingface/finetrainers/issues/301 | closed | [] | 2025-03-07T16:02:42Z | 2025-03-07T16:08:06Z | null | kjosh925 |
huggingface/speech-to-speech | 159 | What is `from df.enhance import enhance, init_df` in vad_handler? | https://github.com/huggingface/speech-to-speech/issues/159 | open | [] | 2025-03-07T15:07:53Z | 2025-03-07T15:07:53Z | null | Manukrishna2K |
huggingface/diffusers | 11,002 | Any chance class members like self._interrupt could be defined in __init__ across pipelines? | ### Describe the bug
I think there is no benefit to late initializing here and it puts a burden on the library user that could be easily avoided. Also leads to some confusion as it is uncommon, code inspection flags this. Let me know if I'm missing something.
### Reproduction
```
class WanImageToVideoPipeline:
def ... | https://github.com/huggingface/diffusers/issues/11002 | open | [
"bug",
"help wanted",
"contributions-welcome"
] | 2025-03-07T11:28:27Z | 2025-05-26T07:21:47Z | 9 | spezialspezial |
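For clarity, a sketch of what the report is proposing (abbreviated class, not the actual diffusers code):

```python
# Sketch of the proposal: define the flag eagerly in __init__ so the attribute
# always exists, instead of being created lazily inside __call__.
class SomePipeline:
    def __init__(self):
        self._interrupt = False  # eager initialization; code inspection is happy

    @property
    def interrupt(self):
        return self._interrupt

    def __call__(self):
        if self._interrupt:      # no risk of AttributeError before the first run
            return None
        return "result"
```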
pytorch/ao | 1,850 | What is the dtype of the input in the Float8Linear backward? | In the Float8Linear forward, the input is saved in high precision,
<img width="605" alt="Image" src="https://github.com/user-attachments/assets/b2f4fdff-79e6-4274-8e68-9bf7947f5003" />
Why not save the input in float8? I don't know if I understand this correctly. | https://github.com/pytorch/ao/issues/1850 | closed | [
"question"
] | 2025-03-07T07:33:01Z | 2025-03-10T16:28:11Z | null | yh8899 |
pytorch/pytorch | 148,747 | How can I use inductor aot_compile to support a MoE network? | ### 🚀 The feature, motivation and pitch
Deepseek has sparked a wave of enthusiasm for the design of MoE (Mixture of Experts) network architectures. I am often asked how to accelerate the inference of a MoE network. Naturally, I thought of using Inductor's aot_compile to compile it into a dynamic library and then c...
"oncall: pt2",
"export-triage-review",
"oncall: export",
"module: aotinductor"
] | 2025-03-07T07:04:07Z | 2025-05-24T02:21:21Z | null | sujuyu |
pytorch/pytorch | 148,713 | [torch.export] How to export with the model having *args and **kwargs as forward signature? | This is the original model code:
```python
from diffusers.models import AutoencoderKL
import torch
model_name = "black-forest-labs/FLUX.1-dev"
hf_safetensor = True
model_opts = {'torch_dtype': torch.float16}
model = AutoencoderKL.from_pretrained(model_name, subfolder="vae", use_safetensors=hf_safetensor, force_downlo... | https://github.com/pytorch/pytorch/issues/148713 | closed | [
"oncall: pt2",
"oncall: export"
] | 2025-03-06T23:01:17Z | 2025-03-07T01:47:05Z | null | titaiwangms |
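Related plumbing that often answers this class of question (a sketch of the public `torch.export` call, not a fix for arbitrary `*args`/`**kwargs` forward signatures): keyword arguments are passed to `export` as a separate dict:

```python
# Sketch: export takes positional args as a tuple and keyword args as a dict.
import torch
from torch.export import export

class KwModel(torch.nn.Module):
    def forward(self, x, *, scale=1.0):
        return x * scale

ep = export(KwModel(), args=(torch.randn(2, 3),), kwargs={"scale": 2.0})
print(ep)
```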
huggingface/diffusers | 10,993 | f-divergence | Is there a plan to implement the f-divergence scheduler ? I would like to contribute that to the library. | https://github.com/huggingface/diffusers/issues/10993 | open | [
"stale"
] | 2025-03-06T22:46:13Z | 2025-04-06T15:02:55Z | 5 | manmeet3591 |
huggingface/smolagents | 902 | How to populate custom variables in prompt template? | I'm trying to configure custom template variables in my system prompt.
**Current Implementation:**
1. I have a system prompt template with custom variables:
```python
CUSTOM_CODE_SYSTEM_PROMPT = """You are {{ bot_name }}, a customer support assistant...
{{ formatting_guidelines }}
```
2. Agent creation and configura... | https://github.com/huggingface/smolagents/issues/902 | closed | [] | 2025-03-06T20:45:51Z | 2025-03-07T08:54:22Z | null | Luisotee |
huggingface/agents-course | 295 | [QUESTION] Ambiguity about what chat templates are | Issue:
Where ➡ https://huggingface.co/learn/agents-course/unit1/messages-and-special-tokens
> This is where chat templates come in. They act as the bridge between conversational messages (user and assistant turns) and the specific formatting requirements of your chosen LLM. In other words, chat templates structure t... | https://github.com/huggingface/agents-course/issues/295 | open | [
"question"
] | 2025-03-06T17:12:41Z | 2025-03-06T17:12:41Z | null | MekongDelta-mind |
huggingface/open-r1 | 483 | How to calculate total optimization steps | I ran it on 8 GPUs and set num_generations to 8 and num_processes=7. Why is Total optimization steps = 196? Isn't it Num examples / Total train batch size? It seems that multiplying by num_generations yields 196. Why do we need to multiply by num_generations?
[INFO|trainer.py:2405] 2025-03-06 12:04:09,913 >> ***** Running traini... | https://github.com/huggingface/open-r1/issues/483 | open | [] | 2025-03-06T09:47:19Z | 2025-03-13T08:45:23Z | null | HelloWorld506 |
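A back-of-envelope sketch of the accounting (an assumption about GRPO internals inferred from the numbers reported above, not confirmed from the TRL source: each prompt is expanded into `num_generations` completions before batching, so completions rather than prompts drive the step count; all sizes below are hypothetical):

```python
# Hypothetical arithmetic: steps scale with num_generations because the
# trainer batches completions, and every prompt contributes num_generations.
import math

num_prompts = 1000        # hypothetical dataset size
num_generations = 8
per_device_batch = 4      # hypothetical
num_processes = 7         # one GPU reserved for vLLM generation
grad_accum = 1

total_batch = per_device_batch * num_processes * grad_accum
steps = math.ceil(num_prompts * num_generations / total_batch)
print(steps)  # dividing by num_generations here would undercount by 8x
```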
huggingface/transformers.js | 1,221 | How to use Xenova/deplot using the transformers.js library. | ### Question
Currently I'm doing:
```
this.pipeline = await pipeline("image-text-to-text", "Xenova/deplot", {
progress_callback: (progress) => {
this.updateProgress({
status: `Loading model: ${progress.status}`,
progress: 0.1 + (progress.progress * 0.9)
});... | https://github.com/huggingface/transformers.js/issues/1221 | open | [
"question"
] | 2025-03-06T07:56:07Z | 2025-03-06T11:36:19Z | null | aadya940 |
huggingface/peft | 2,410 | running forward loop using get_peft_model disables requires_grad on output | Hi,
I would like to report a recent issue I have been facing, but I am not sure if it is a bug or if I am doing something wrong in the process. The steps to reproduce it are easy. The issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the...
pytorch/pytorch | 148,634 | README doesn't explain how to run tests in the "Test PyTorch" section | ### 📚 The doc issue
README needs to have the "Test PyTorch" section after the [Install PyTorch](https://github.com/pytorch/pytorch#install-pytorch) section in the README.
Testing is the next step after building PyTorch.
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/pytorch/issues/148634 | closed | [] | 2025-03-06T04:32:44Z | 2025-03-06T17:58:19Z | null | yurivict |
huggingface/lerobot | 826 | Should the pi0 pytorch model on Huggingface load model.safetensors or the other three safetensors? | https://huggingface.co/lerobot/pi0/tree/main
What is the difference between `model.safetensors` and the other three safetensors (`model-00001-of-0000*.safetensors`)? The pi0 model `from_pretrained()` method will load `model.safetensors` by default instead of `model-00001-of-0000*.safetensors`.
| https://github.com/huggingface/lerobot/issues/826 | closed | [
"question",
"stale"
] | 2025-03-06T03:12:05Z | 2025-10-08T08:42:49Z | null | chopinxxxx |
huggingface/agents-course | 290 | [QUESTION] First Agent code does not produce any output | I cloned and tried running the first agent app.py. I wanted to try the image generation tool. The application built and ran, but when I tried typing something in the chat such as "generate an image of a cat", there was no response from the bot; it stays blank.
| https://github.com/huggingface/agents-course/issues/290 | open | [
"question"
] | 2025-03-05T23:49:06Z | 2025-03-18T14:45:44Z | null | Sabk0926 |
pytorch/xla | 8,799 | Re-enable CPU test `test/test_python_ops.py -k TestPythonOps` for `uint8` dtype | To unblock bumping libtpu pin, we have to disable this test: https://github.com/pytorch/xla/pull/8788/files
This test fails with a LLVM memory allocation error on the CPU.
We should report this bug upstream and re-enable it after a fix is there.
Failed run: https://github.com/pytorch/xla/actions/runs/13668949609/job... | https://github.com/pytorch/xla/issues/8799 | closed | [
"bug",
"libtpu"
] | 2025-03-05T19:40:50Z | 2025-05-05T00:25:18Z | 0 | tengyifei |
huggingface/accelerate | 3,421 | How to sync distributed model parameters when training in a continual learning fashion? | When performing distributed continual learning tasks, it is common to expand model parameters as tasks increase. For example, I have defined an `expand_classifier()` method with random initialization to increase the parameters of the classifier.
How can I ensure that the newly added parameters are initialized the sa... | https://github.com/huggingface/accelerate/issues/3421 | closed | [] | 2025-03-05T13:44:15Z | 2025-04-13T15:06:22Z | null | Iranb |
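One conventional answer, sketched below (not Accelerate-specific; `expand_classifier()` is the issue's own method and is assumed to have been called on every rank): let each rank do its random init, then broadcast rank 0's values so all replicas start identical.

```python
# Sketch: after each rank expands its classifier, overwrite every rank's new
# parameters with rank 0's values (requires an initialized process group).
import torch
import torch.distributed as dist

def sync_parameters(module: torch.nn.Module, src: int = 0) -> None:
    """Broadcast src rank's parameter values so every rank shares the same init."""
    for param in module.parameters():
        dist.broadcast(param.data, src=src)

# model.expand_classifier()          # issue's own expansion method, run per rank
# sync_parameters(model.classifier)  # then make all ranks agree
```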
pytorch/xla | 8,792 | Generating stablehlo.composite and running it through PJRT | ## ❓ Questions and Help
Following the example from the [docs](https://pytorch.org/xla/release/r2.6/features/stablehlo.html#preserving-high-level-pytorch-operations-in-stablehlo-by-generating-stablehlo-composite), I tried to use `StableHLOCompositeBuilder` to generate a `stablehlo.composite` op with the difference that... | https://github.com/pytorch/xla/issues/8792 | open | [
"bug",
"stablehlo"
] | 2025-03-05T10:45:12Z | 2025-03-06T12:49:08Z | 1 | sechkova |
huggingface/lerobot | 817 | SO 100 Arm assembly instruction inconsistency | Step 22 of the assembly guide shows a picture of the wrist that is flipped compared to the drawing and the front-page photo. Are both right? If not, which one is correct?
[Latest instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#wrist-assembly):
<img width="723" alt="Image" src="https://g... | https://github.com/huggingface/lerobot/issues/817 | closed | [
"question",
"robots",
"stale"
] | 2025-03-05T05:23:57Z | 2025-11-30T02:37:07Z | null | liuhuanjim013 |
huggingface/open-r1 | 472 | how to set the max_model_length, max_new_tokens and generation_size when evaluating? | Suppose the max_position_embedding of my model is 4096; how should I set max_model_length, max_new_tokens and generation_size to get the correct evaluation result? For example, set max_model_length=4096, max_new_tokens=1000, generation_size=1000? | https://github.com/huggingface/open-r1/issues/472 | open | [] | 2025-03-05T04:01:48Z | 2025-03-12T03:41:42Z | null | ItGirls |
pytorch/torchtitan | 930 | `CheckpointManager.save` with async mode is vulnerable to race conditions | ### Bug description
Based on [[Distributed w/ TorchTitan] Optimizing Checkpointing Efficiency with PyTorch DCP](https://discuss.pytorch.org/t/distributed-w-torchtitan-optimizing-checkpointing-efficiency-with-pytorch-dcp/211250)'s Figure 3, when using async checkpointing via `CheckpointManager` with `AsyncMode.ASYNC`, ... | https://github.com/pytorch/torchtitan/issues/930 | closed | [
"question",
"module: checkpoint"
] | 2025-03-05T02:06:09Z | 2025-03-20T18:30:28Z | null | jamesbraza |
huggingface/transformers | 36,546 | how to use transformers with musicgen with float16 | ```
import transformers, torch, builtins, numpy
processor = transformers.AutoProcessor.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16)
model = transformers.MusicgenMelodyForConditionalGeneration.from_pretrained('facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16).to('...
pytorch/torchx | 1,012 | possible Improvement: Using shutdown() Before close() in `server.py` | ### Description:
While reviewing the get_routable_ip_to function in [torchx/apps/serve/serve.py](https://github.com/pytorch/torchx/blob/main/torchx/apps/serve/serve.py#L96), I noticed that the socket is directly closed using s.close(), without calling shutdown() beforehand.
```python3
def get_routable_ip_to(addr: str... | https://github.com/meta-pytorch/torchx/issues/1012 | open | [] | 2025-03-04T23:59:09Z | 2025-03-04T23:59:09Z | 0 | allrob23 |
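A sketch of the suggested pattern applied to the routable-IP trick that function uses (hedged: `shutdown()` mainly matters for connection-oriented sockets and can raise on sockets that never connected, hence the guard):

```python
# Sketch: signal the socket is done in both directions before releasing it.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.connect(("8.8.8.8", 80))    # UDP connect sends nothing; just picks a route
    print(s.getsockname()[0])     # the local IP that routes to the target
finally:
    try:
        s.shutdown(socket.SHUT_RDWR)
    except OSError:
        pass                      # shutdown may fail on unconnected sockets
    s.close()
```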
huggingface/lerobot | 813 | State Collection Timing Issue in Manipulator Teleoperation: Post-action vs Pre-action States | **Description:**
I've noticed in lerobot/lerobot/common/robot_devices/robots/manipulator.py that during teleoperation, the state being collected is the state after action execution. Is this intended behavior?
In my understanding, model inference should use the state before action execution, not after. This could potent... | https://github.com/huggingface/lerobot/issues/813 | closed | [
"question",
"policies",
"stale"
] | 2025-03-04T14:19:52Z | 2025-10-07T02:26:55Z | null | www-Ye |
huggingface/agents-course | 284 | [QUESTION] Clarify Payment Required for completing Unit 2 notebooks | For the notebook for [components.ipynb]() I ran the `IngestionPipeline` function as follows:
```py
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline
# create the pipelin... | https://github.com/huggingface/agents-course/issues/284 | open | [
"question"
] | 2025-03-04T14:16:01Z | 2025-03-06T16:08:39Z | null | carlosug |
huggingface/agents-course | 281 | [any free and unpaid alternative for Inference Providers?] | While executing the [notebook](https://colab.research.google.com/github/huggingface/agents-course/blob/main/notebooks/unit2/smolagents/multiagent_notebook.ipynb) on **unit2, multi-agent systems**, I got the following client error for [Inference Providers](https://huggingface.co/blog/inference-providers):
```python
> ... | https://github.com/huggingface/agents-course/issues/281 | open | [
"question"
] | 2025-03-04T12:51:26Z | 2025-03-31T07:23:49Z | null | carlosug |
pytorch/xla | 8,786 | How to show PJRT Call Stack | ## ❓ Questions and Help
I wonder how to print the PJRT call stack. Thanks | https://github.com/pytorch/xla/issues/8786 | open | [
"question",
"openxla"
] | 2025-03-04T09:32:43Z | 2025-03-07T20:23:32Z | null | yuanfz98 |
huggingface/lerobot | 808 | How to acquire the End-Effector(eef) pose? | Hi, thanks for your great job!
How can we acquire the eef pose and control the eef pose instead of only the joint states?
Thanks for your attention and hope for your kind response! | https://github.com/huggingface/lerobot/issues/808 | closed | [
"question",
"policies",
"robots",
"stale"
] | 2025-03-04T09:30:35Z | 2025-10-16T02:28:50Z | null | oym1994 |
huggingface/lerobot | 806 | How to control a local robot with a remote model? | I have got inference working on my local computer. I want to know how to put the model on a remote server and control a robot locally.
My robot: Koch1.1 | https://github.com/huggingface/lerobot/issues/806 | closed | [
"question",
"stale"
] | 2025-03-04T09:09:12Z | 2025-10-16T02:28:51Z | null | neverspillover |
huggingface/optimum-intel | 1,186 | How to initialize development env for this repo? | Hi! I would like to develop this repo but met some issues during env initialization. I ran `pip install -e .` to install the current repo into my local python env.
However, an error came out when running 'pytest tests\'
`ImportError while importing test module '/home/shji/codes/optimum-intel/tests/ipex/test_modeling.py'.
Hint: make su... | https://github.com/huggingface/optimum-intel/issues/1186 | closed | [] | 2025-03-04T06:10:15Z | 2025-03-10T06:01:21Z | null | shjiyang-intel |
pytorch/xla | 8,784 | how to save weights | ## ❓ Questions and Help
Hello, I am using torch_xla to convert a model to StableHLO.
https://pytorch.org/xla/master/features/stablehlo.html#torch-export-to-stablehlo
Following this page,
weights, stablehlo = tx.export.exported_program_to_stablehlo(exported)
print(stablehlo.mlir_module())
Can I store the weights and/or stablehlo objec... | https://github.com/pytorch/xla/issues/8784 | closed | [
"question"
] | 2025-03-04T06:05:29Z | 2025-03-29T08:35:03Z | null | raninbowlalala |
pytorch/examples | 1,319 | CUDA memory usage does not decrease when increasing the number of CUDA cards (fsdp_tp_example.py). | According to the implementation of the source code, I did several experiments to study the script running time and CUDA memory occupancy.
- exp1: nproc_per_node=4, nnodes=1 => cuda=2161~2411MB, runtime=63.04s
- exp2: nproc_per_node=8, nnodes=1 => cuda=2141~2395MB, runtime=70.52s
- e... | https://github.com/pytorch/examples/issues/1319 | open | [] | 2025-03-04T04:04:35Z | 2025-03-04T04:59:47Z | 0 | YangHui90 |
huggingface/open-r1 | 457 | How to run rejection sampling | I ran generate_reasoning and got the CoT data. How do I run rejection sampling after that? | https://github.com/huggingface/open-r1/issues/457 | open | [] | 2025-03-03T03:56:32Z | 2025-03-03T03:56:32Z | null | JavaZeroo |
pytorch/serve | 3,396 | Why is TorchServe No Longer Actively Maintained? | Hello, I noticed that the TorchServe GitHub page has been marked as 'Limited Maintenance,' indicating that the project is no longer actively maintained. Could you share the reasons behind this decision? Is it related to the development direction of the PyTorch ecosystem? Additionally, are there any recommended alte... | https://github.com/pytorch/serve/issues/3396 | open | [] | 2025-03-03T02:16:01Z | 2025-04-09T09:29:25Z | 11 | ily666666 |
huggingface/lerobot | 797 | use_delta_joint_actions_aloha | if self.use_delta_joint_actions_aloha:
    raise NotImplementedError(
        "`use_delta_joint_actions_aloha` is used by pi0 for aloha real models. It is not ported yet in LeRobot."
    )
When will you add the implementation for it? It is very important.
| https://github.com/huggingface/lerobot/issues/797 | closed | [
"question",
"policies"
] | 2025-03-02T18:14:13Z | 2025-04-03T16:39:39Z | null | AbdElrahmanMostafaRifaat1432 |
huggingface/open-r1 | 453 | How to log the intermediate output results? | How do I log the intermediate output results to track the 'aha moment'? How can I set this in the config or modify the code? | https://github.com/huggingface/open-r1/issues/453 | closed | [] | 2025-03-01T17:08:48Z | 2025-03-09T13:53:59Z | null | 0205090923 |
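One concrete knob worth checking for questions like this one (hedged: availability depends on the TRL version in use): recent `GRPOConfig` versions expose a `log_completions` flag that periodically logs sampled prompts and completions during training, which is a direct way to watch for the "aha moment":

```python
# Sketch: enable completion logging in the GRPO config (TRL-version dependent).
from trl import GRPOConfig

args = GRPOConfig(
    output_dir="grpo-out",
    logging_steps=10,
    log_completions=True,  # sampled completions appear in the training logs
)
```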
huggingface/Math-Verify | 32 | How to adjust the priority of '\\ln' and '*' when parsing latex? | When I try to parse a string: "$$ \\dfrac{\\cos x}{2\\lnx * x^{\\ln x - 1}} $$", the result is "cos(x)/((2*log(x*x**(log(x, E) - 1), E)))", rather than "cos(x)/((2*x**(log(x, E) - 1)*log(x, E)))". It seems that there is something wrong when dealing with the priority of '\\ln' and '*'. So I wonder how to adjust the prio... | https://github.com/huggingface/Math-Verify/issues/32 | closed | [] | 2025-03-01T09:22:31Z | 2025-07-01T20:17:49Z | null | yhhu99 |
pytorch/ao | 1,805 | What kind of layers are optimized by torchao on an RTX 4090? | I am trying to quantize a model and I am running this on a 4090. Since many of the available quantization benchmarks are done on higher-end GPUs, I am trying to establish a baseline performance gain I can expect from quantization.
I tried the tutorial at [torchao_demo](https://github.com/ethanshenley/PyTorch-Conference-R... | https://github.com/pytorch/ao/issues/1805 | open | [
"question",
"performance",
"triaged"
] | 2025-03-01T00:36:14Z | 2025-05-01T18:36:43Z | null | naiveen |
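As a baseline experiment for questions like this one (a sketch with hypothetical layer sizes; torchao's one-line API primarily targets `nn.Linear` weights), int8 weight-only quantization can be applied in place:

```python
# Sketch: quantize_ swaps the Linear layers' weights in place; other layer
# types (the ReLU here) pass through untouched.
import torch
from torchao.quantization import int8_weight_only, quantize_

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).to(device="cuda", dtype=torch.bfloat16)

quantize_(model, int8_weight_only())
out = model(torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16))
print(out.shape)
```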
pytorch/xla | 8,776 | Standardize `AllClose` calls from test_aten_xla_tensor tests | Standardize `AllClose` calls from `test/cpp/test_aten_xla_tensor_*.cpp` tests so they all follow the same conventions. | https://github.com/pytorch/xla/issues/8776 | open | [
"enhancement",
"documentation"
] | 2025-03-01T00:14:19Z | 2025-03-05T20:24:41Z | 0 | pgmoka |