| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 1,130 | Tips on Converting Newer Models | ### Question
🎉🎉Happy New Year to the incredible Transformers.js team!🎉🎉
As I work on converting new (text-generation) models for use with Transformers.js, here's what I've tried since last week:
* python converter script
* optimum cli onnx
* onnx-community/convert-to-onnx spaces
The problem I encount... | https://github.com/huggingface/transformers.js/issues/1130 | open | [
"question"
] | 2025-01-01T05:32:09Z | 2025-01-01T05:32:09Z | null | josephencila |
huggingface/lerobot | 606 | Dataset does not support length of feature shape > 1 | Hi,
Thank you for this excellent project!
I am trying to create a custom dataset with additional sensory information (such as tactile data), which is an Array3D tensor, but I find that when I use the approach mentioned in #547, there is no support for adding custom tensor-like observations to the episode buffer.
Spec... | https://github.com/huggingface/lerobot/issues/606 | closed | [
"question",
"dataset",
"stale"
] | 2024-12-31T21:08:26Z | 2025-10-19T02:32:29Z | null | akashsharma02 |
huggingface/finetrainers | 169 | How to build a dataset for finetuning CogVideoX I2V 1.5 | Hi,
I want to finetune the CogVideoX I2V 1.5 (5B) model. I have a set of videos that I want to use, but first I need to preprocess them so they meet the requirements of the model. Do I have to make sure that my fine-tuning dataset meets the generation properties of the model? That is, in the case of CogVideoX 1.5, the... | https://github.com/huggingface/finetrainers/issues/169 | closed | [] | 2024-12-31T19:55:00Z | 2025-03-08T23:43:31Z | null | royvelich |
huggingface/diffusers | 10,416 | Euler flow matching scheduler is missing documentation for parameters | 
I think there are some undocumented parameters here. | https://github.com/huggingface/diffusers/issues/10416 | closed | [] | 2024-12-31T13:15:35Z | 2025-01-09T18:54:41Z | 4 | bghira |
huggingface/chat-ui | 1,636 | Any way to pass authorization header from OAuth2 down to custom endpoint? | ## Describe your feature request
It would be nice to be able to pass the authorization header from OAuth2 to a custom endpoint. I have an endpoint that mimics TGI, and I would like to authenticate every request in order to protect the API.
## Implementation idea
Just pass an authorization header from frontend to... | https://github.com/huggingface/chat-ui/issues/1636 | open | [
"enhancement"
] | 2024-12-31T13:00:22Z | 2024-12-31T13:00:22Z | 0 | corte |
huggingface/diffusers | 10,415 | [Pipelines] Add AttentiveEraser | ### Model/Pipeline/Scheduler description
I’ve worked on a project called AttentiveEraser, which is a tuning-free method for object removal in images using diffusion models. The code for this project is built upon modifications to existing Diffusers pipelines, so it should be relatively straightforward to integrate i... | https://github.com/huggingface/diffusers/issues/10415 | closed | [
"stale"
] | 2024-12-31T07:44:48Z | 2025-02-05T15:54:43Z | 7 | Anonym0u3 |
huggingface/diffusers | 10,414 | [<languageCode>] Translating docs to Chinese | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/m... | https://github.com/huggingface/diffusers/issues/10414 | closed | [] | 2024-12-31T06:45:21Z | 2024-12-31T06:49:52Z | 0 | S20180576 |
huggingface/peft | 2,301 | How to pass in an attention _ mask that is one dimension more than input _ ids | ### System Info
Hello, how can I pass in an `attention_mask` that has one more dimension than `input_ids`? For example: `output = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)`. The `input_ids` dimension is [batch_size, N], and the `attention_mask` dimension is [batch_size, N, N].
Under th... | https://github.com/huggingface/peft/issues/2301 | closed | [] | 2024-12-31T02:26:14Z | 2025-02-07T15:03:57Z | null | Chinesehou97 |
huggingface/diffusers | 10,411 | How to use the LoRA weights obtained from examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py | I followed the tutorial at https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final LoRA weights, but did not find a way to load them. Could you provide a demo of running inference with these weights? Thank you very much!
the training set:
```
#!/bin/bas... | https://github.com/huggingface/diffusers/issues/10411 | closed | [] | 2024-12-30T12:06:07Z | 2024-12-31T07:21:40Z | null | yangzhenyu6 |
huggingface/text-embeddings-inference | 461 | How to Set the Threshold for gte-multilingual-reranker | I want to use the gte-multilingual-reranker-base model to re-rank the retrieved documents and discard some of them based on a threshold. I have seen examples on Hugging Face where the logits are used as the output scores, but how can I determine the appropriate threshold? | https://github.com/huggingface/text-embeddings-inference/issues/461 | open | [] | 2024-12-30T11:39:48Z | 2025-02-09T06:29:02Z | null | ketusrai |
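The row above asks how to pick a cutoff on raw reranker logits. One common approach (not an official recommendation for gte-multilingual-reranker) is to squash the logits through a sigmoid and tune the cutoff on labeled validation pairs; the 0.5 threshold below is purely illustrative:

```python
import math

def sigmoid(x: float) -> float:
    """Map a raw reranker logit to a (0, 1) relevance score."""
    return 1.0 / (1.0 + math.exp(-x))

def filter_by_threshold(scored_docs, threshold=0.5):
    """Keep (doc, logit) pairs whose sigmoid score clears the threshold.

    `threshold` is a hypothetical value: tune it on labeled validation
    pairs rather than hard-coding 0.5.
    """
    return [(doc, sigmoid(logit))
            for doc, logit in scored_docs
            if sigmoid(logit) >= threshold]

docs = [("relevant passage", 2.3), ("borderline", 0.1), ("off-topic", -3.0)]
kept = filter_by_threshold(docs, threshold=0.5)
```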
huggingface/optimum | 2,140 | KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. | ### System Info
```shell
Google Colab
T4 GPU
transformers Version: 4.47.1
optimum Version: 1.24.0.dev0
```
### Who can help?
@michaelbenayoun, @JingyaHuang, @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `example... | https://github.com/huggingface/optimum/issues/2140 | open | [
"bug"
] | 2024-12-30T10:29:14Z | 2024-12-30T10:29:14Z | 0 | Billybeast2003 |
huggingface/optimum-intel | 1,096 | How to use trainer.train() with OVModelForCausalLM() model | I am currently converting a local LLM to OpenVINO. I would like to fine-tune my model with the Trainer function, but I get an error stating: AttributeError: 'OVModelForCausalLM' object has no attribute 'named_children'
Please let me know if there is a way to fine tune openVino models that are loaded with OVModelForC... | https://github.com/huggingface/optimum-intel/issues/1096 | closed | [] | 2024-12-29T23:54:26Z | 2025-02-27T14:54:20Z | null | CJames1261 |
huggingface/trl | 2,523 | How to solve the situation where the tokenizer of the reward model is inconsistent with the tokenizer of the actor model? | https://github.com/huggingface/trl/issues/2523 | open | [
"❓ question"
] | 2024-12-27T09:43:06Z | 2024-12-28T06:26:16Z | null | stephen-nju | |
huggingface/peft | 2,298 | Qdora support | ### Feature request
is it possible to use qdora with peft?
### Motivation
qdora is better than qlora and performs like full fine-tuning.
### Your contribution
```
peft_config = LoraConfig(
r=8,
lora_alpha=32,
lora_dropout=0.1,
qdora=True # adding qdora
)
``` | https://github.com/huggingface/peft/issues/2298 | closed | [] | 2024-12-27T04:47:54Z | 2025-01-03T12:26:58Z | 2 | imrankh46 |
huggingface/smolagents | 2 | How to call OpenAI-like models through an API? | How to call OpenAI-like models through an API? | https://github.com/huggingface/smolagents/issues/2 | closed | [] | 2024-12-27T04:34:35Z | 2024-12-29T21:58:10Z | null | win4r |
huggingface/datasets | 7,347 | Converting Arrow to WebDataset TAR Format for Offline Use | ### Feature request
Hi,
I've downloaded an Arrow-formatted dataset offline using the Hugging Face datasets library by:
```
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
now I need to convert it to WebDataset's TAR form... | https://github.com/huggingface/datasets/issues/7347 | closed | [
"enhancement"
] | 2024-12-27T01:40:44Z | 2024-12-31T17:38:00Z | 4 | katie312 |
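For the Arrow-to-WebDataset request above: a WebDataset shard is an ordinary tar file in which files sharing a zero-padded basename form one sample. A stdlib-only sketch of the packing step (sample keys and extensions are illustrative):

```python
import io
import tarfile

def write_shard(samples, shard_path):
    """Pack samples into a WebDataset-style tar shard.

    Each sample is a dict of {extension: bytes}; all files of one sample
    share a zero-padded basename, which is how WebDataset groups them.
    """
    with tarfile.open(shard_path, "w") as tar:
        for idx, sample in enumerate(samples):
            key = f"{idx:06d}"
            for ext, payload in sample.items():
                info = tarfile.TarInfo(name=f"{key}.{ext}")
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

samples = [{"txt": b"a caption", "json": b'{"id": 0}'},
           {"txt": b"another caption", "json": b'{"id": 1}'}]
write_shard(samples, "shard-000000.tar")
```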
huggingface/transformers.js | 1,118 | Trying to use custom finetuned Whisper Model with | ### Question
@xenova I am trying to use our own fine-tuned Whisper model https://huggingface.co/medxcribe/whisper-base.en with
https://huggingface.co/spaces/Xenova/whisper-web. I have uploaded it into a separate repo for reference: https://huggingface.co/medxcribe/whisper-base-onnx.en.
We have converted the fine tun... | https://github.com/huggingface/transformers.js/issues/1118 | open | [
"question"
] | 2024-12-26T20:18:36Z | 2024-12-26T20:18:36Z | null | vijaim |
huggingface/finetrainers | 153 | How to generate result of validation and resolution. | Hi author:
I am using your Hunyuan fine-tuning script to fine-tune a LoRA on my own dataset at its original 1080p resolution. But I find the model can only run on videos whose height and width are both divisible by 32. Can the model also be trained on 360p or 720p video, and if not, why? | https://github.com/huggingface/finetrainers/issues/153 | closed | [] | 2024-12-26T15:21:22Z | 2025-01-10T23:38:39Z | null | Aristo23333 |
huggingface/lerobot | 597 | Inquiry About Support for RDT-1B Model | Hi,
I would like to extend my heartfelt thanks for maintaining such an outstanding codebase. Your dedication and hard work have significantly contributed to advancements in the robotics field, and I truly appreciate the resources and support your community provides.
I am reaching out to inquire whether there are an... | https://github.com/huggingface/lerobot/issues/597 | closed | [
"question",
"policies",
"stale"
] | 2024-12-26T11:12:58Z | 2025-10-08T20:52:51Z | null | Robert-hua |
huggingface/diffusers | 10,383 | [Request] Optimize HunyuanVideo Inference Speed with ParaAttention | Hi guys,
First and foremost, I would like to commend you for the incredible work on the `diffusers` library. It has been an invaluable resource for my projects.
I am writing to suggest an enhancement to the inference speed of the `HunyuanVideo` model. We have found that using [ParaAttention](https://github.com/ch... | https://github.com/huggingface/diffusers/issues/10383 | closed | [
"roadmap"
] | 2024-12-25T15:07:53Z | 2025-01-16T18:05:15Z | 10 | chengzeyi |
huggingface/lerobot | 596 | How to achieve multiple tasks on the basis of LeRobot? | LeRobot can achieve single tasks (such as inserting, transferring blocks, etc.). How can multiple tasks be achieved on the basis of LeRobot (such as first recognizing and classifying objects, and then putting the objects in order into boxes)?
Please give me some ideas. | https://github.com/huggingface/lerobot/issues/596 | closed | [
"question",
"stale"
] | 2024-12-25T12:20:37Z | 2025-10-17T11:38:20Z | null | wangwisdom |
huggingface/diffusers | 10,375 | [low priority] Please fix links in documentation | https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video
Both links are broken
Make sure to check out the Schedulers [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse co... | https://github.com/huggingface/diffusers/issues/10375 | closed | [] | 2024-12-25T09:04:33Z | 2024-12-28T20:01:27Z | 0 | nitinmukesh |
huggingface/diffusers | 10,374 | Is there any plan to support TeaCache for training-free acceleration? | TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speedup HunyuanVideo 2x without much visual quality degradation. For example, the inference for a 720p, 129-frame video takes around 5... | https://github.com/huggingface/diffusers/issues/10374 | open | [
"wip"
] | 2024-12-25T05:00:23Z | 2025-01-27T01:28:53Z | 4 | LiewFeng |
huggingface/chat-ui | 1,633 | docker run is not working | I'm running the following:
```
docker run -p 3000:3000 --env-file env.local huggingface/chat-ui
```
The env file has the following set: `HF_TOKEN`, `MONGODB_URL` and `MODELS`. The container prints the following:
```
Listening on 0.0.0.0:3000
```
However, on hitting the `localhost:3000`, I get a blank page wit... | https://github.com/huggingface/chat-ui/issues/1633 | open | [
"support"
] | 2024-12-23T08:36:09Z | 2025-01-06T07:30:46Z | 1 | sebastiangonsal |
huggingface/peft | 2,293 | Is it possible to add LoRA on specific head? | ### Feature request
Could I add LoRA only to some selected heads on the model?
I read some documentation [here](https://huggingface.co/docs/peft/developer_guides/custom_models), but am still not sure about how to implement my goal.
### Motivation
The current LoRA config allows users to decide which matrices to add L... | https://github.com/huggingface/peft/issues/2293 | closed | [] | 2024-12-22T19:57:54Z | 2025-12-14T10:07:49Z | 12 | SpeeeedLee |
huggingface/datasets | 7,344 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs | ### Describe the bug
I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when ... | https://github.com/huggingface/datasets/issues/7344 | closed | [] | 2024-12-22T16:30:07Z | 2025-01-15T05:32:00Z | 2 | clankur |
huggingface/diffusers | 10,345 | safetensor streaming in from_single_file_loading() | can we add support for streaming safetensors while loading using `from_single_file`.
source:https://github.com/run-ai/runai-model-streamer
example:
```python
from runai_model_streamer import SafetensorsStreamer
file_path = "/path/to/file.safetensors"
with SafetensorsStreamer() as streamer:
streamer.str... | https://github.com/huggingface/diffusers/issues/10345 | closed | [
"stale"
] | 2024-12-22T13:27:46Z | 2025-01-21T15:07:58Z | 2 | AbhinavJangra29 |
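The streaming idea above relies on a property of the safetensors format itself: a file begins with an 8-byte little-endian header length followed by a JSON header, so tensor metadata can be read without touching the weight data. A stdlib-only sketch (the writer exists only to produce a demo file):

```python
import json
import struct

def read_safetensors_header(path):
    """Read only the JSON header of a .safetensors file.

    The header maps tensor names to dtype/shape/offsets, so metadata can
    be inspected without loading tensors: the property that makes
    streaming loaders possible.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

def write_minimal_safetensors(path, header, data=b""):
    """Write a minimal file in safetensors layout (for the demo below)."""
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))
        f.write(blob)
        f.write(data)

write_minimal_safetensors(
    "demo.safetensors",
    {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}},
    data=b"\x00" * 8,
)
header = read_safetensors_header("demo.safetensors")
```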
huggingface/accelerate | 3,309 | deepspeed zero3 how to save custom model? | DeepSpeedEngine(
(module): LLMDecoder(
(model): Qwen2ForSequenceClassification(
(model): Qwen2Model(
(embed_tokens): Embedding(151936, 1536)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2SdpaAttention(
(q_proj): Linear(in_... | https://github.com/huggingface/accelerate/issues/3309 | closed | [] | 2024-12-21T17:01:17Z | 2025-01-30T15:06:45Z | null | NLPJCL |
huggingface/diffusers | 10,334 | Sana broke on MacOS. Grey images on MPS, NaN's on CPU. | ### Describe the bug
Just started to play with Sana; I was excited when I saw it was coming to Diffusers, as the NVIDIA-supplied code was full of CUDA-only stuff.
Ran the example code, changing cuda to mps and got a grey image.
 using SentenceTransformer?
The main difficulty I met is about the weight loading of prediction head as defined [here](https://github.com/huggingface/transformers/blob/f42084e6411c39b74309af4a7d6ed640c01a4c9e/src/tran... | https://github.com/huggingface/sentence-transformers/issues/3141 | closed | [] | 2024-12-20T06:52:44Z | 2024-12-24T03:08:47Z | null | Hannibal046 |
huggingface/picotron | 15 | Difference between picotron and nanotron | What is the difference between picotron and [nanotron](https://github.com/huggingface/nanotron)? Why did the Hugging Face team roll out two hybrid-parallelism frameworks? | https://github.com/huggingface/picotron/issues/15 | closed | [
"question"
] | 2024-12-19T12:48:57Z | 2024-12-20T10:17:25Z | null | cailun01 |
huggingface/diffusers | 10,302 | Using FP8 for inference without CPU offloading can introduce noise. | ### Describe the bug
If I use ```pipe.enable_model_cpu_offload(device=device)```, the model can perform inference correctly after warming up. However, if I comment out this line, the inference results are noisy.
### Reproduction
```python
from diffusers import (
FluxPipeline,
FluxTransformer2DModel... | https://github.com/huggingface/diffusers/issues/10302 | open | [
"bug"
] | 2024-12-19T12:39:06Z | 2025-03-10T14:18:58Z | 6 | todochenxi |
huggingface/candle | 2,674 | [Question] How to create a autograd function like in PyTorch? How to customize forward and backward process? | https://github.com/huggingface/candle/issues/2674 | open | [] | 2024-12-19T07:02:04Z | 2024-12-19T07:02:15Z | null | VanderBieu | |
huggingface/blog | 2,551 | How to process and visualize the segment output tokens? | How to process the segment tokens and generate segmentation masks? what the output means?

| https://github.com/huggingface/blog/issues/2551 | open | [] | 2024-12-19T03:11:15Z | 2024-12-19T03:11:15Z | null | 00mmw |
huggingface/transformers | 35,316 | How to use a custom Image Processor? | I want to use the processor in the form of `auto_map` but when using `AutoProcessor.from_pretrained`, I am unable to load the custom `ImageProcessor`.
The root cause lies in the use of the `transformers_module` to initialize the class in `ProcessorMixin`.
https://github.com/huggingface/transformers/blob/c7e48053... | https://github.com/huggingface/transformers/issues/35316 | closed | [] | 2024-12-18T12:04:33Z | 2024-12-19T02:53:43Z | null | glamourzc |
huggingface/diffusers | 10,281 | Request to implement FreeScale, a new diffusion scheduler | ### Model/Pipeline/Scheduler description
FreeScale is a tuning-free method for higher-resolution visual generation, unlocking the 8k image generation for pre-trained SDXL! Compared to direct inference by SDXL, FreeScale brings negligible additional memory and time costs.
.
huggingface/diffusers | 10,280 | ... | The reason is that the loader creates multiple proces... | https://github.com/huggingface/diffusers/issues/10280 | closed | [
"bug"
] | 2024-12-18T06:02:41Z | 2025-01-10T10:11:05Z | 4 | wlhee |
huggingface/optimum-neuron | 750 | Document how to use Qwen 2.5 | ### Feature request
Qwen 2.5 7B Instruct on EC2 with HF DL AMI
Qwen 2.5 7B Instruct on Sagemaker with HF DLC Neuronx TGI
Maybe something for the code version too?
Dependency of adding the model to the cache
### Motivation
increase AMI and DLC usage
### Your contribution
doc | https://github.com/huggingface/optimum-neuron/issues/750 | closed | [
"Stale"
] | 2024-12-17T16:03:25Z | 2025-01-22T08:04:54Z | null | pagezyhf |
huggingface/accelerate | 3,294 | How to run accelerate with PYTORCH_ENABLE_MPS_FALLBACK | ### System Info
```Shell
MacOS
transformers>=4.35.1
datasets[audio]>=2.14.7
accelerate>=0.24.1
matplotlib
wandb
tensorboard
Cython
- `Accelerate` version: 1.2.1
- Platform: macOS-14.7.1-arm64-arm-64bit
- `accelerate` bash location: .venv/bin/accelerate
- Python version: 3.12.3
- Numpy version: 2.0.2
... | https://github.com/huggingface/accelerate/issues/3294 | closed | [] | 2024-12-15T07:03:41Z | 2025-01-23T15:06:57Z | null | mirodil-ml |
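For the MPS-fallback question above, the variable is typically exported before launching (or set in `os.environ` before torch is imported, since an already-initialized backend will not pick it up); a minimal sketch:

```python
import os

# Set before torch/accelerate are imported; an already-initialized MPS
# backend will not re-read the flag.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# Equivalent shell form before launching:
#   PYTORCH_ENABLE_MPS_FALLBACK=1 accelerate launch train.py
fallback_enabled = os.environ.get("PYTORCH_ENABLE_MPS_FALLBACK") == "1"
```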
huggingface/diffusers | 10,223 | Where should I obtain the lora-sdxl-dreambooth-id in Inference | ### Describe the bug
I tried to upload the download link from the README file generated during training, but an error indicated it was incorrect. Where should I obtain the lora-id for Inference?
### Reproduction
README.md:
---
base_model: /data/ziqiang/czc/diffusers/examples/dreambooth/model
library_name: diffuse... | https://github.com/huggingface/diffusers/issues/10223 | open | [
"bug",
"stale"
] | 2024-12-14T06:34:56Z | 2025-02-07T15:03:24Z | 5 | Zarato2122 |
huggingface/lerobot | 575 | Gello dataset converter | I made a converter for the [Gello](https://wuphilipp.github.io/gello_site/) dataset format (pickles containing dicts with all the observations).
If this is of interest, I am willing to contribute it back here.
The current code can be found [here](https://github.com/tlpss/lerobot/blob/tlpss-dev/lerobot/common/da... | https://github.com/huggingface/lerobot/issues/575 | closed | [
"enhancement",
"question",
"dataset",
"stale"
] | 2024-12-13T15:47:58Z | 2025-10-08T08:50:40Z | null | tlpss |
huggingface/diffusers | 10,207 | KolorsPipeline does not support from_single_file | from diffusers import KolorsPipeline
KolorsPipeline.from_single_file("models/kolrs-8steps.safetensors")
How does KolorsPipeline load a single file model? | https://github.com/huggingface/diffusers/issues/10207 | open | [
"stale",
"single_file"
] | 2024-12-13T09:44:46Z | 2025-01-12T15:02:46Z | 3 | Thekey756 |
huggingface/sentence-transformers | 3,134 | How to set a proper batch size when using CachedMultipleNegativesRankingLoss? | When using the [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), I tried different batch size (per_device_train_batch_size) settings and found that 512 was the maximum. When the batch size was greater than 512, a GPU memory OOM happened.
... | https://github.com/huggingface/sentence-transformers/issues/3134 | open | [] | 2024-12-13T09:25:34Z | 2024-12-27T13:46:17Z | null | awmoe |
huggingface/sentence-transformers | 3,133 | How to avoid the long wait before training starts? | Dear developer,
Thanks for the great sentence-transformers library!
I am finetuning the [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) using my own data following the tutorial from: https://sbert.net/docs/sentence_... | https://github.com/huggingface/sentence-transformers/issues/3133 | open | [] | 2024-12-13T09:10:32Z | 2024-12-25T03:46:50Z | null | awmoe |
huggingface/lighteval | 447 | [BUG] How to eval a large-scale model using 1dp+8pp? | ## Describe the bug
I tried to evaluate a large-scale model using 1dp+8pp with accelerate, with a command like the following:
```
accelerate launch --multi_gpu --num_processes=1 run_evals_accelerate.py \
--model_args="pretrained=<path to model on the hub>" \
--model_parallel \
--tasks <task parameters> \
... | https://github.com/huggingface/lighteval/issues/447 | closed | [
"bug"
] | 2024-12-13T03:56:36Z | 2025-01-02T11:20:20Z | null | mxjmtxrm |
huggingface/diffusers | 10,196 | How to finetune Flux-dev full params, 80G OOM ... | I am using the [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py) script to fine-tune the `flux-dev` model with full parameters using DeepSpeed Stage 2. However, I am still encountering out-of-memory issues on an 80GB GPU. Are there any solutions ava... | https://github.com/huggingface/diffusers/issues/10196 | open | [
"training"
] | 2024-12-12T09:24:18Z | 2025-08-20T13:19:20Z | null | huangjun12 |
huggingface/chat-ui | 1,627 | Cookie “hf-chat” has been rejected because there is an existing “secure” cookie. | ## Bug description
I use `ghcr.io/huggingface/chat-ui-db:latest` to host `ChatUI` in docker. If `PUBLIC_ORIGIN="http://localhost"` in `.env.local` and visit `ChatUI` through `http://localhost:3000`, it works well. Then I try to replace `localhost` by my domain name `qiangwulab.sjtu.edu.cn`. For the sake of testing, ... | https://github.com/huggingface/chat-ui/issues/1627 | open | [
"bug"
] | 2024-12-12T07:04:26Z | 2024-12-12T07:04:26Z | 0 | ljw20180420 |
huggingface/diffusers | 10,190 | How to use fluxfill to replace background? | I want to use fluxfill to change the background, but I find that the prompt words are almost useless, and the output image is more like the original image.
I have tested multiple guidance_scale parameters, but found that the resulting image is more related to the original image, and less related to the prompt word. | https://github.com/huggingface/diffusers/issues/10190 | closed | [] | 2024-12-11T10:48:27Z | 2025-05-23T12:12:28Z | null | babyta |
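One likely cause of the behavior above: inpainting-style pipelines repaint the white region of the mask, so replacing a *background* requires inverting a subject mask first. A dependency-free sketch of the inversion on a nested-list mask (values are illustrative):

```python
def invert_mask(mask):
    """Invert a binary mask (255 = region the inpainting model may repaint).

    With a subject mask (subject=255, background=0), inverting makes the
    background editable so the prompt can repaint it while the subject
    is preserved.
    """
    return [[255 - v for v in row] for v_row in [None] for row in mask]

def invert_mask(mask):  # simpler, equivalent form
    return [[255 - v for v in row] for row in mask]

subject_mask = [[0, 255, 0],
                [0, 255, 0]]
background_mask = invert_mask(subject_mask)
```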
huggingface/sentence-transformers | 3,132 | How to train a model with DDP for TSDAE | hello, I want to train a model using TSDAE method.
Is there any way to train with DDP(Multi-GPU)?
I already read your sample code.
But I'm not sure how to apply DenoisingAutoEncoderDataset in SentenceTransformerTrainer.
([[v3] Training refactor - MultiGPU, loss logging, bf16, etc](https://github.com/UKPLab/sen... | https://github.com/huggingface/sentence-transformers/issues/3132 | closed | [] | 2024-12-11T10:39:30Z | 2024-12-11T14:04:32Z | null | OnAnd0n |
huggingface/diffusers | 10,180 | Can't load multiple loras when using Flux Control LoRA | ### Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999, but had issues loading multiple LoRAs.
For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it err... | https://github.com/huggingface/diffusers/issues/10180 | closed | [
"bug",
"help wanted",
"lora"
] | 2024-12-10T21:40:24Z | 2024-12-20T09:00:33Z | 11 | jonathanyin12 |
huggingface/transformers | 35,186 | How to convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer | ### System Info
```shell
- `transformers` version: 4.34.0
- Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.5
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
... | https://github.com/huggingface/transformers/issues/35186 | closed | [] | 2024-12-10T19:17:22Z | 2025-01-18T08:03:21Z | null | yujunwei04 |
huggingface/datasets | 7,318 | Introduce support for PDFs | ### Feature request
The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"pat... | https://github.com/huggingface/datasets/issues/7318 | open | [
"enhancement"
] | 2024-12-10T16:59:48Z | 2024-12-12T18:38:13Z | 6 | yabramuvdi |
huggingface/diffusers | 10,172 | Raise an error when `len(gligen_images)` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline` | To whom it may concern,
I found that when using `StableDiffusionGLIGENTextImagePipeline`, no error is raised when `len(gligen_images)` is not equal to `len(gligen_phrases)`. When I dug into the source code, it seems that these two features are zipped together in a for loop during preprocessing. I gues... | https://github.com/huggingface/diffusers/issues/10172 | closed | [] | 2024-12-10T14:25:48Z | 2024-12-11T08:59:44Z | 1 | abcdefg133hi |
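The fix the issue above proposes is an explicit length check before the two lists are zipped (`zip` silently truncates to the shorter one). A minimal, framework-free sketch using the names from the report:

```python
def validate_paired_inputs(gligen_phrases, gligen_images):
    """Raise instead of silently truncating via zip() when lengths differ.

    Mirrors the check the issue proposes for
    StableDiffusionGLIGENTextImagePipeline; the argument names are taken
    from the report, not from the actual pipeline code.
    """
    if len(gligen_phrases) != len(gligen_images):
        raise ValueError(
            f"`gligen_phrases` has {len(gligen_phrases)} entries but "
            f"`gligen_images` has {len(gligen_images)}; they must match."
        )
    return list(zip(gligen_phrases, gligen_images))

pairs = validate_paired_inputs(["a cat", "a dog"], ["img0", "img1"])
```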
huggingface/lerobot | 568 | Do I need two SO-100 arms to get started? | I have printed and assembled one arm, the follower version. Do I need two arms to record datasets and do testing? | https://github.com/huggingface/lerobot/issues/568 | closed | [
"question",
"robots"
] | 2024-12-10T13:31:50Z | 2025-10-08T08:45:58Z | null | rabhishek100 |
huggingface/transformers | 35,152 | How to load the weight of decoder.embed_tokens.weight separately from the shared weight? | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorfl... | https://github.com/huggingface/transformers/issues/35152 | closed | [
"bug"
] | 2024-12-08T15:46:55Z | 2025-01-22T08:03:52Z | null | SoSongzhi |
huggingface/datasets | 7,311 | How to get the original dataset name with username? | ### Feature request
The issue is related to Ray Data https://github.com/ray-project/ray/issues/49008, which requires checking whether the dataset is the original one just after `load_dataset`, when the parquet files are already available on the HF Hub.
The solution used now is to get the dataset name, config and split, then `... | https://github.com/huggingface/datasets/issues/7311 | open | [
"enhancement"
] | 2024-12-08T07:18:14Z | 2025-01-09T10:48:02Z | null | npuichigo |
huggingface/lerobot | 555 | To bulid my own policy, but have errors TypeError: '>' not supported between instances of 'int' and 'dict' | I improved the act policy in lerobot framework and created a new policy named myact. I mainly did the following:
Create the my_act folder in the lerobot/common/policies/ path
Create 'configuration_my_act.py' and 'modeling_my_act.py' in the + my_act folder
Create lerobot/configs/policy/myact yaml, which is modified t... | https://github.com/huggingface/lerobot/issues/555 | closed | [
"enhancement",
"question"
] | 2024-12-07T09:10:35Z | 2025-04-07T16:08:38Z | null | zhouzhq2021 |
huggingface/diffusers | 10,144 | Why is the Mochi diffusers video output worse than the official Mochi code? | ### Describe the bug
The quality of video is worse.
### Reproduction
Run the code with official prompt
### Logs
_No response_
### System Info
diffusers@main
### Who can help?
@a-r-r-o-w @yiyixuxu | https://github.com/huggingface/diffusers/issues/10144 | closed | [
"bug",
"stale"
] | 2024-12-07T05:53:57Z | 2025-01-07T15:38:38Z | 10 | foreverpiano |
huggingface/peft | 2,264 | Guidance Needed on Two-Stage Fine-Tuning with LoRA(SFT and DPO) for Model Adaptation | # I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed.
## First Stage
1. Load Base Model: I start by loading the base model, qwen1.5 32B.
2. Apply LoRA Fine-Tuning: I then apply LoRA fine-tuning to this base model and obtain a new model state.
3. Save Adapter Mode... | https://github.com/huggingface/peft/issues/2264 | closed | [] | 2024-12-06T13:35:20Z | 2025-01-06T10:50:09Z | 5 | none0663 |
huggingface/transformers | 35,118 | How to load local transformers? | transformers==4.47.0.dev0
I want to use my local transformers checkout. I tried `sys.path.insert(0, 'xxx/transformers/src')` and `PYTHONPATH=xxx/transformers/src`, but neither works.
Please tell me why. | https://github.com/huggingface/transformers/issues/35118 | closed | [] | 2024-12-06T10:07:57Z | 2024-12-12T04:05:08Z | null | yiyexy |
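For the local-checkout question above: `sys.path.insert` does work, but only if it runs before the first import and no installed copy has already been cached in `sys.modules`. A self-contained demonstration with a throwaway package (`mylocalpkg` is a stand-in name):

```python
import os
import sys
import tempfile

# Create a throwaway package to stand in for a local checkout's src/ dir.
src = tempfile.mkdtemp()
pkg = os.path.join(src, "mylocalpkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("__version__ = 'local-dev'\n")

# Insert BEFORE importing: already-imported modules stay cached in
# sys.modules and are not re-resolved, which is the usual reason
# sys.path tweaks appear not to work.
sys.path.insert(0, src)
import mylocalpkg
```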
huggingface/lerobot | 552 | Rounding to int32 makes the robot less precise. Do we have a solid reason for doing this? | ### System Info
```Shell
Latest LeRobot. MacOS
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
1) Run teleoperation
2) Measure preciseness with rounding and without.
at lerobot/common/robot_devices/robots/manipula... | https://github.com/huggingface/lerobot/issues/552 | closed | [
"bug",
"question",
"stale"
] | 2024-12-05T16:31:49Z | 2025-10-08T13:08:50Z | null | 1g0rrr |
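For the rounding question above: quantizing each command to an integer bounds the per-joint error at half a motor step. A small sketch quantifying that (4096 steps per revolution is an illustrative resolution, not necessarily what LeRobot's motors use):

```python
def degrees_to_steps(deg, steps_per_degree=4096 / 360, rounded=True):
    """Convert a joint angle to motor steps.

    4096 / 360 is an assumed steps-per-degree resolution for illustration.
    """
    steps = deg * steps_per_degree
    return round(steps) if rounded else steps

angle = 10.03
err = abs(degrees_to_steps(angle) - degrees_to_steps(angle, rounded=False))
# Rounding error per command is bounded by half a step.
```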
huggingface/tokenizers | 1,696 | How to determine the splicing logic in post_processor based on the sentence to be tokenized? | For example,
```python
def post_processor(self, token_ids_0, token_ids_1=None):
if "cls" in token_ids_0:
return processors.TemplateProcessing(
single=f"{cls} $A {sep}",
pair=f"{cls} $A {sep} $B {cls}",
special_tokens=[
... | https://github.com/huggingface/tokenizers/issues/1696 | open | [] | 2024-12-05T14:05:13Z | 2024-12-05T14:05:13Z | null | gongel |
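The conditional-template question above can be reduced to plain selection logic: inspect the input, build the single/pair templates, then hand them to tokenizers' TemplateProcessing. The sketch below keeps the templates as strings so it stays dependency-free; `cls` and `sep` are placeholder tokens, and the `"cls" in token_ids_0` condition is taken from the snippet in the issue:

```python
def select_template(token_ids_0, token_ids_1=None, cls="[CLS]", sep="[SEP]"):
    """Pick a post-processing template based on the first sequence.

    Stands in for building tokenizers' TemplateProcessing conditionally;
    templates are plain strings here so the branching is easy to test.
    """
    if "cls" in token_ids_0:
        single = f"{cls} $A {sep}"
        pair = f"{cls} $A {sep} $B {sep}"
    else:
        single = f"$A {sep}"
        pair = f"$A {sep} $B {sep}"
    return pair if token_ids_1 is not None else single

tpl = select_template(["cls", "hello"], ["world"])
```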
huggingface/peft | 2,262 | Could you provide example code for AdaLoRA fine-tuning of a decoder-only model? | ### Feature request
The current [example of AdaLoRA](https://github.com/huggingface/peft/blob/b2922565c4c4445706a87cf7b988c828b451fe61/examples/conditional_generation/peft_adalora_seq2seq.py) is on **facebook/bart-base**. Since AdaLoRA requires hand-crafted calculations on loss, would it be possible to provide me som... | https://github.com/huggingface/peft/issues/2262 | closed | [] | 2024-12-05T12:03:31Z | 2025-01-18T15:03:29Z | 4 | SpeeeedLee |
huggingface/diffusers | 10,129 | Does StableDiffusion3 have an image2image pipeline with ControlNet? | I want to use `ControlNet` with `StableDiffusion3`, providing a prompt, an original image, and a control image as inputs. However, I found that the `StableDiffusion3ControlNetPipeline` only supports prompts and control images as inputs. The `StableDiffusionControlNetImg2ImgPipeline` allows for providing a prompt, an or... | https://github.com/huggingface/diffusers/issues/10129 | closed | [
"New pipeline/model",
"contributions-welcome"
] | 2024-12-05T09:40:03Z | 2025-01-02T20:02:33Z | 1 | ZHJ19970917 |
huggingface/diffusers | 10,128 | Is there any plan to support fastercache? | Expect to support fastercache, https://github.com/Vchitect/FasterCache | https://github.com/huggingface/diffusers/issues/10128 | closed | [
"wip",
"performance"
] | 2024-12-05T09:11:19Z | 2025-03-21T04:05:06Z | 4 | songh11 |
huggingface/datasets | 7,306 | Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values). | ### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (from another dataset), either the datatype or the values are lost. See examples below.
-> What is the best way to create... | https://github.com/huggingface/datasets/issues/7306 | open | [] | 2024-12-05T09:07:53Z | 2024-12-05T09:09:38Z | 0 | ai-nikolai |
huggingface/lerobot | 549 | Low accuracy for act policy on pushT env | The highest success rate is 44%, with n_decoder_layers=7. Are there any other tricks for this? | https://github.com/huggingface/lerobot/issues/549 | closed | [
"question",
"policies",
"stale"
] | 2024-12-05T06:18:06Z | 2025-10-19T02:32:37Z | null | KongCDY |
huggingface/Google-Cloud-Containers | 128 | Can we use Multi-LORA CPU | Hi,
I'm currently following this doc: https://huggingface.co/docs/google-cloud/en/examples/gke-tgi-multi-lora-deployment
After getting the bug "Can't scale up due to exceeded quota" and doing some research, I suspect that my free-trial ($300) account is not able to increase the GPU quota (even though I have activated my account to n... | https://github.com/huggingface/Google-Cloud-Containers/issues/128 | open | [
"question"
] | 2024-12-05T05:42:51Z | 2024-12-12T10:06:43Z | null | AndrewNgo-ini |
huggingface/peft | 2,260 | Is it possible to support the transformer engine when using Lora in Megatron? | ### Feature request
I am currently using the Megatron framework and want to use Lora for training. I saw that the Megatron format is supported at https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/tp_layer.py RowParallelLinear and ColumnParallelLinear do the adaptation. But if I use the transformer eng... | https://github.com/huggingface/peft/issues/2260 | closed | [] | 2024-12-05T03:24:15Z | 2025-01-12T15:03:29Z | 3 | liulong11 |
huggingface/diffusers | 10,120 | memory consumption of dreambooth+SD3 | Hi, I am running dreambooth SD3 with a single A100 GPU. I reduced the resolution to 256, but it still needs more memory than a single A100 has. I am wondering, is this huge memory consumption normal?
```
!python train_dreambooth_sd3.py \
--pretrained_model_name_or_path="stabilityai/stable-diffusion-3-medium-diffusers"... | https://github.com/huggingface/diffusers/issues/10120 | closed | [
"bug",
"stale",
"training"
] | 2024-12-04T19:39:04Z | 2025-01-27T01:30:18Z | 5 | KolvacS-W |
huggingface/diffusers | 10,112 | Detail-Daemon diffusers | **Describe the solution you'd like.**
Detail-Daemon: https://github.com/Jonseed/ComfyUI-Detail-Daemon
How to implement Detail-Daemon in diffusers, as seen in https://github.com/Jonseed/ComfyUI-Detail-Daemon. Will there be a better official component in the future? | https://github.com/huggingface/diffusers/issues/10112 | open | [
"wip",
"consider-for-modular-diffusers"
] | 2024-12-04T09:14:39Z | 2025-01-03T18:01:24Z | 10 | NicholasCao |
huggingface/lerobot | 547 | How to make a custom LeRobotDataset with v2? | Hi folks, thanks for the amazing open source work!
I am trying to make a custom dataset to use with the LeRobotDataset format.
The readme says to copy the example scripts here which I've done, and I have a working format script of my own.
https://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d... | https://github.com/huggingface/lerobot/issues/547 | closed | [
"question",
"dataset",
"stale"
] | 2024-12-04T08:00:19Z | 2025-10-08T08:28:34Z | null | alik-git |
huggingface/lerobot | 545 | Poor success rate in complex scenarios | Hi, I used the Moss robot to play with and train the ACT policy. With one Lego piece, it can finish the grabbing task at a high success rate after recording 50+ episodes with different pose & location variants, but generalization to multiple pieces at random locations is not promising.
When I started to add complexity (for exa... | https://github.com/huggingface/lerobot/issues/545 | closed | [
"question",
"policies",
"stale"
] | 2024-12-04T06:20:31Z | 2025-10-08T08:28:45Z | null | mydhui |
huggingface/frp | 14 | where is the code of frpc-gradio-0.3 | https://github.com/huggingface/frp/issues/14 | closed | [] | 2024-12-04T05:37:34Z | 2025-03-11T00:55:39Z | null | BoyuanJiang | |
huggingface/peft | 2,255 | Is this the right way to check whether a model has been trained as expected? | I'd like to check whether my PEFT model has been trained as intended, i.e. whether the PEFT weights have changed, but not the base weights. The following code works, but I'm sure a PEFT specialist will suggest a better way.
```python
import tempfile
import torch
from datasets import load_dataset
from peft impo... | https://github.com/huggingface/peft/issues/2255 | closed | [] | 2024-12-03T17:36:00Z | 2024-12-04T12:01:37Z | 5 | qgallouedec |
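# A conceptual sketch of the check discussed above (plain Python, not the
# peft API): snapshot all parameters before training, then verify that only
# adapter ("lora_") parameters changed while base parameters stayed frozen.
# Weights are lists of floats here only to keep the sketch self-contained.
import copy

def changed(before, after, tol=1e-8):
    return any(abs(x - y) > tol for x, y in zip(before, after))

params = {
    "base_model.layer.weight": [0.1, 0.2, 0.3],
    "lora_A.weight": [0.0, 0.0],
    "lora_B.weight": [0.0, 0.0],
}
snapshot = copy.deepcopy(params)

# ... training would go here; simulate an update that touches only the adapter:
params["lora_A.weight"] = [0.05, -0.02]

for name, after in params.items():
    if "lora_" in name:
        continue  # adapter weights are expected to change
    assert not changed(snapshot[name], after), f"{name} should be frozen"
assert changed(snapshot["lora_A.weight"], params["lora_A.weight"])
print("base frozen, adapter updated")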
huggingface/peft | 2,251 | a guide to add a new fine-tuning method in the doc | ### Feature request
Hello, I am a researcher in the fine-tuning area. Could you publish a guide to adding a new fine-tuning method in the docs? I think researchers like me would be glad to experiment with their methods based on this repo.
### Motivation
Researchers like me would be glad to experiment with their methods based on this repo, but d... | https://github.com/huggingface/peft/issues/2251 | closed | [] | 2024-12-03T13:46:02Z | 2024-12-04T02:12:35Z | 2 | YF-T |
huggingface/diffusers | 10,076 | Do we have any script to convert from hf format to original format? | **Is your feature request related to a problem? Please describe.**
scripts/convert_cogvideox_to_diffusers.py
in this script, we can convert cogvideox -> diffusers. Do we have the opposite script?
cc @yiyixuxu
| https://github.com/huggingface/diffusers/issues/10076 | open | [
"good first issue",
"contributions-welcome",
"conversion script"
] | 2024-12-02T07:49:34Z | 2024-12-02T18:22:50Z | 1 | foreverpiano |
huggingface/trl | 2,424 | How to calculate the loss of multi-turn dialogue training data? | In a single data entry containing multiple turns of dialogue, abbreviated as Q1 + A1 + Q2 + A2, does this project calculate the loss only for the last answer of the multi-turn dialogue, or for each answer? | https://github.com/huggingface/trl/issues/2424 | closed | [
"❓ question",
"🏋 SFT"
] | 2024-12-02T07:47:17Z | 2025-01-20T02:47:34Z | null | NUMB1234 |
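Context for the question above: a common convention in SFT implementations is to mask non-assistant tokens with -100 so that cross-entropy ignores them, which puts loss on every answer (A1 and A2), not only the last one. Whether this project does exactly that is not assumed here; the sketch below only illustrates the masking idea with made-up token ids.

```python
IGNORE_INDEX = -100  # label value that PyTorch's CrossEntropyLoss skips

def build_labels(token_ids, roles):
    """roles[i] is 'user' or 'assistant' for token i; loss only on assistant tokens."""
    return [tid if role == "assistant" else IGNORE_INDEX
            for tid, role in zip(token_ids, roles)]

# Q1 + A1 + Q2 + A2 flattened into one sequence (ids are made up):
token_ids = [11, 12, 21, 22, 31, 32, 41, 42]
roles = (["user"] * 2 + ["assistant"] * 2) * 2

labels = build_labels(token_ids, roles)
print(labels)  # [-100, -100, 21, 22, -100, -100, 41, 42]
```

With this scheme both A1 and A2 contribute to the loss; masking everything except the final answer would instead treat all but the last span as ignored.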
huggingface/diffusers | 10,074 | how to install diffusers 0.32.0 | The FluxFillPipeline function needs diffusers 0.32.0, but I don't know how to install it. Can anyone help me? Thanks in advance | https://github.com/huggingface/diffusers/issues/10074 | closed | [] | 2024-12-02T07:05:24Z | 2024-12-02T19:11:34Z | null | babyta |
huggingface/diffusers | 10,070 | Xformers info, memory efficient attention unavailable | ### Describe the bug
I just started learning Stable Diffusion on Win11. After I installed xformers, I found that several memory_efficient_attention entries are unavailable. Is it possible to make them available? Thanks for any help.
### Reproduction
xFormers 0.0.28.post3
memory_efficient_attention.ckF: ... | https://github.com/huggingface/diffusers/issues/10070 | open | [
"bug",
"stale"
] | 2024-12-01T16:14:21Z | 2025-01-01T15:03:09Z | 1 | Stareshine |
huggingface/Google-Cloud-Containers | 126 | Deployment error on GKE | Hello!
I deployed Gemma 2 2b-it on GKE in Autopilot mode following these instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi#autopilot. There's this error: Node scale up in zones us-central1-c associated with this pod failed: GCE quota exceeded. Pod is at risk of not being sched...
"question"
] | 2024-12-01T14:09:29Z | 2025-01-07T08:39:07Z | null | piksida |
huggingface/lerobot | 538 | questions about load dataset for localhost, make own policy and use headless eval mode | Hello, I'm trying to download a dataset from Hugging Face to local storage and then load it locally. For example, 'aloha_sim_insertion_scripted_image': its format is many 'episode_000000.parquet' files. How can I load this format with the LeRobotDataset() function, or in some other way?
Second, I want to create my ow... | https://github.com/huggingface/lerobot/issues/538 | closed | [
"question",
"stale"
] | 2024-12-01T03:32:06Z | 2025-10-19T02:32:41Z | null | zhouzhq2021 |
huggingface/lerobot | 536 | How auto calibration works | Are there any details about run_arm_auto_calibration_moss and run_arm_auto_calibration_so100 that we can refer to? I read the code but couldn't fully understand it.
When should we use auto_calibration, instead of the manual calibration calculating the homing_offset of the rotated (90d) pose?
What to check whether my underst... | https://github.com/huggingface/lerobot/issues/536 | closed | [
"question",
"robots",
"stale"
] | 2024-11-30T18:04:23Z | 2025-10-08T08:37:24Z | null | wzds2015 |
huggingface/accelerate | 3,269 | 🤨Question: What if the model has float16 dtype and `mixed_precision` is set to fp16 as well? | As the title:
**🤨Question: What if the model has float16 dtype and `mixed_precision` is set to fp16 as well?**
- Will it compute in the original float16, as if Auto Mixed Precision never existed?
- Or will some modules that easily overflow (e.g. BatchNorm, LayerNorm) be upcast to float32, as AMP's fp32->fp16 handling does?... | https://github.com/huggingface/accelerate/issues/3269 | closed | [] | 2024-11-29T17:55:58Z | 2025-01-07T15:33:26Z | null | townwish4git |
huggingface/chat-macOS | 36 | Document how to download and install a local model | 1st, thanks very much for this work!
I'm a bit of a newbie here.
The 'Get' button takes you to a web page for the example; however, chat-macOS instructions are not part of the options. Also, where do you place the downloaded model for the "add +" option, and where do the models go? Is there a way to configure where model... | https://github.com/huggingface/chat-macOS/issues/36 | open | [] | 2024-11-29T17:18:43Z | 2024-11-29T17:18:43Z | null | deepcoder |
huggingface/diffusers | 10,055 | Training script for a Controlnet based on SD3 does not work | ### Describe the bug
Hi @sayakpaul and all others :)
The training script for a ControlNet based on Stable Diffusion 3 does not seem to work.
**RuntimeError: Given groups=1, weight of size [1536, 17, 2, 2], expected input[4, 16, 64, 64] to have 17 channels, but got 16 channels instead**
I tried to follow th... | https://github.com/huggingface/diffusers/issues/10055 | open | [
"bug",
"stale"
] | 2024-11-29T13:46:29Z | 2025-02-03T15:03:46Z | 17 | Putzzmunta |
huggingface/diffusers | 10,050 | Is there any img2img KDiffusion equivalent of StableDiffusionKDiffusionPipeline? | ### Model/Pipeline/Scheduler description
I'm working on result alignment between diffusers and A1111 webui.
In the txt2img case, I can achieve this via `StableDiffusionKDiffusionPipeline`; see https://github.com/huggingface/diffusers/issues/3253.
But for img2img, is there an equivalent KDiffusion pipeline?
I... | https://github.com/huggingface/diffusers/issues/10050 | open | [
"stale"
] | 2024-11-29T07:47:11Z | 2024-12-29T15:03:05Z | 2 | juju812 |
huggingface/diffusers | 10,043 | F5-TTS Integration | ### Model/Pipeline/Scheduler description
F5-TTS is a fully non-autoregressive text-to-speech system based on flow matching with Diffusion Transformer (DiT).
It has excellent voice cloning capabilities, and audio generation is of quite high quality.
### Open source status
- [X] The model implementation is available.... | https://github.com/huggingface/diffusers/issues/10043 | open | [
"help wanted",
"contributions-welcome"
] | 2024-11-28T11:14:18Z | 2025-11-02T18:46:02Z | 11 | nityanandmathur |
huggingface/lerobot | 533 | How to merge multiple recorded datasets? | Hi, thank you so much for the automatic resume during data recording; sometimes unstable camera issues or other situations (e.g. not having enough time to finish recording) might cause the process to stop.
I was wondering, is there any way to merge multiple recorded datasets? For instance, I have two datasets, 'cube grabbi... | https://github.com/huggingface/lerobot/issues/533 | closed | [
"question",
"dataset"
] | 2024-11-28T01:53:28Z | 2025-10-08T08:33:31Z | null | mydhui |
huggingface/transformers | 34,981 | How to Log Training Loss at Step Zero in Hugging Face Trainer or SFT Trainer? | ### Feature request
log train loss on start
----
I'm using the Hugging Face `Trainer` (or `SFTTrainer`) for fine-tuning, and I want to log the training loss at step 0 (before any training steps are executed). I know there's an `eval_on_start` option for evaluation, but I couldn't find a direct equivalent for trai... | https://github.com/huggingface/transformers/issues/34981 | open | [
"Feature request"
] | 2024-11-28T00:24:43Z | 2024-11-29T07:35:28Z | null | brando90 |
huggingface/transformers.js | 1,055 | Support for Typescript docs | ### Question
I have been trying to implement server-side sentiment analysis using this [tutorial](https://huggingface.co/docs/transformers.js/main/en/tutorials/next#prerequisites), but it's in JavaScript. I looked through the docs, but there seems to be no information on implementing it using TypeScript. So far I have in... | https://github.com/huggingface/transformers.js/issues/1055 | open | [
"question"
] | 2024-11-26T21:38:54Z | 2024-11-27T02:20:59Z | null | SadmanYasar |
huggingface/datasets | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to handle the inconsistent image sizes in the dataset and perform some on-the-fly image augmentation. The only approach I can think of is using collate_fn, but that seems quite inefficient.
... | https://github.com/huggingface/datasets/issues/7299 | open | [] | 2024-11-26T16:50:32Z | 2024-11-26T16:53:53Z | 0 | fabiozappo |
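On the question above: one lightweight pattern is to apply augmentation lazily per batch instead of materializing a transformed copy of the dataset. The sketch below is plain Python with a toy "flip" standing in for a torchvision transform; the function names and the batching loop are illustrative, not the datasets API.

```python
import random

def augment(image, rng):
    # Stand-in for a torchvision transform: random horizontal flip of a 2D list.
    return [row[::-1] for row in image] if rng.random() < 0.5 else image

def batches(dataset, batch_size, rng):
    # The transform is applied when a batch is drawn, not ahead of time.
    for i in range(0, len(dataset), batch_size):
        yield [augment(img, rng) for img in dataset[i:i + batch_size]]

data = [[[1, 2], [3, 4]]] * 4  # four tiny 2x2 "images"
rng = random.Random(0)
out = list(batches(data, batch_size=2, rng=rng))
print(len(out), len(out[0]))  # 2 2
```

For the real library, datasets' `set_transform` provides a similar on-the-fly hook; whether it is faster than a custom collate_fn depends on the workload.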
huggingface/lerobot | 527 | Is there a `select_actions` abstraction? | This line references a `select_actions` function which doesn't seem to exist. This functionality (abstract away access to the future action queue, instead of just returning the first action) would be useful - did it use to / will it exist?
https://github.com/huggingface/lerobot/blob/96c7052777aca85d4e55dfba8f81586103b... | https://github.com/huggingface/lerobot/issues/527 | closed | [
"question",
"policies",
"stale"
] | 2024-11-26T14:22:31Z | 2025-10-08T08:33:51Z | null | genemerewether |
huggingface/diffusers | 10,025 | attention mask for transformer Flux | ### Describe the bug
Is it possible to get back the `attention_mask` argument in the flux attention processor
```
hidden_states = F.scaled_dot_product_attention(query, key, value, dropout_p=0.0, is_causal=False, attn_mask=attention_mask)
```
https://github.com/huggingface/diffusers/blob/main/src/diffusers/mo... | https://github.com/huggingface/diffusers/issues/10025 | closed | [
"bug"
] | 2024-11-26T08:51:20Z | 2024-12-05T00:22:37Z | 19 | christopher5106 |
huggingface/accelerate | 3,263 | How to load checkpoint shards one by one to avoid OOM error? | ### System Info
```Shell
- `Accelerate` version: 1.1.0
- Platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.17
- `accelerate` bash location: /home/admin/anaconda3/envs/llama_factory/bin/accelerate
- Python version: 3.10.14
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- ... | https://github.com/huggingface/accelerate/issues/3263 | closed | [] | 2024-11-26T08:25:37Z | 2025-01-06T15:06:50Z | null | amoyplane |
huggingface/lerobot | 525 | Train a RL agent (without initial dataset) | Hi,
I'm currently working on integrating the following environment into the repo: https://github.com/perezjln/gym-lowcostrobot
I would like to use it for learning an RL agent in sim and trying it out on the real robot afterwards.
However, the current training script requires a local or online pre-recorded da... | https://github.com/huggingface/lerobot/issues/525 | closed | [
"enhancement",
"question",
"simulation"
] | 2024-11-25T20:02:38Z | 2025-04-07T16:19:01Z | null | alexcbb |
huggingface/chat-ui | 1,592 | Add Markdown support for user messages | ## Describe your feature request
In PR #1562, a WYSIWYG editor was added to the text input area; however, when a message is sent, it is displayed as unrendered Markdown. The idea is to use `marked` to conditionally render certain elements of the user's sent message as Markdown and leave others untouched.
The... | https://github.com/huggingface/chat-ui/issues/1592 | open | [
"enhancement"
] | 2024-11-25T17:26:10Z | 2024-11-27T20:42:19Z | 2 | Mounayer |
huggingface/accelerate | 3,260 | How to Properly Resume Multi-GPU Training with accelerate launch Without OOM or Loss Issues? | I encountered an issue while running multi-GPU training using `accelerate launch`. I am using 4 GPUs for training, and during the process, I save my model state using:
```python
accelerator.save_state(state_path)
```
Later, I attempt to resume training by loading the model parameters with:
```python
acceler... | https://github.com/huggingface/accelerate/issues/3260 | closed | [] | 2024-11-25T17:19:06Z | 2025-05-29T10:26:13Z | null | tqxg2018 |
huggingface/chat-ui | 1,589 | Models using OpenAI endpoint have caching enabled | When using models that currently use the OpenAI endpoint type on HuggingChat (Nemotron, Llama 3.2, Qwen Coder), they seem to have caching enabled.
This means retrying will just reload the previous response extremely quickly. This is not the intended behaviour and does not match what is happening when using the T... | https://github.com/huggingface/chat-ui/issues/1589 | closed | [
"huggingchat"
] | 2024-11-25T12:47:01Z | 2025-03-12T12:56:00Z | 1 | nsarrazin |