| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/finetrainers | 25 | How to fix it? training/cogvideox_text_to_video_lora.py FAILED | ### System Info
CUDA 11.8
2x 3090
Ubuntu 22.04 LTS
PyTorch 2.4
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Reproduction
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/dev_ml/cogvideox-factory/wandb/offline-run-20241011_154425-t76nveyh
wandb: Find logs at: wandb/offline-run-20241011_154425-t76nveyh/logs
[rank0]:I1011 15:44:57.956000 124307873129088 torch/_dynamo/utils.py:335] TorchDynamo compilation metrics:
[rank0]:I1011 15:44:57.956000 124307873129088 torch/_dynamo/utils.py:335] Function, Runtimes (s)
[rank0]:V1011 15:44:57.956000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.956000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
W1011 15:45:01.515000 129677780091520 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 177223 closing signal SIGTERM
E1011 15:45:02.282000 129677780091520 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 177222) of binary: /home/dev_ml/cogvideox-factory/venv/bin/python3.10
Traceback (most recent call last):
File "/home/dev_ml/cogvideox-factory/venv/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1159, in launch_command
multi_gpu_launcher(args)
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
distrib_run.run(args)
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
training/cogvideox_text_to_video_lora.py FAILED
--------------------------------- | https://github.com/huggingface/finetrainers/issues/25 | closed | [] | 2024-10-11T08:49:23Z | 2024-12-23T07:40:41Z | null | D-Mad |
huggingface/finetrainers | 22 | What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding? | About Dataset Preparation,
What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?
Example: 1280x720, 5 Mbps or below, H.264 encoder recommended.
Are there any suggestions here? | https://github.com/huggingface/finetrainers/issues/22 | closed | [] | 2024-10-11T05:12:57Z | 2024-10-14T07:20:36Z | null | Erwin11 |
huggingface/accelerate | 3,156 | how to load model with fp8 precision for inference? | ### System Info
```Shell
Is it possible to load the model using the accelerate library with fp8 inference?
I have H100 GPU access.
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-72B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Expected behavior
... | https://github.com/huggingface/accelerate/issues/3156 | closed | [] | 2024-10-11T04:31:47Z | 2024-12-02T15:07:58Z | null | imrankh46 |
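For the question above, one possible route is weight-only FP8 quantization through transformers' `FbgemmFp8Config`; this is an assumption about the intent (the issue itself contains no accepted answer), and it needs the `fbgemm-gpu` package plus an H100/H800-class GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, FbgemmFp8Config

model_name = "Qwen/Qwen2.5-72B-Instruct"

# FP8 (e4m3) weight quantization; needs fbgemm-gpu and a Hopper-class GPU.
quantization_config = FbgemmFp8Config()

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",              # shard across the available GPUs
    torch_dtype=torch.bfloat16,     # dtype used for the non-quantized modules
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
The rest of the generation code from the snippet above can then be used unchanged.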
huggingface/diffusers | 9,643 | Flux does not support multiple Controlnets? | ### Describe the bug
I'm encountering an issue with the FluxControlNetPipeline. The `controlnet` parameter is supposed to accept a `List[FluxControlNetModel]`. However, when I attempt to execute my code, I run into the following error:
```
Traceback (most recent call last):
File "/opt/tiger/test_1/h.py", line 8, in <module>
pipe = FluxControlNetPipeline.from_pretrained('/mnt/bn/x/sd_models/flux_schnell/', controlnet=controlnet, torch_dtype=torch.bfloat16).to("cuda")
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 940, in from_pretrained
model = pipeline_class(**init_kwargs)
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 206, in __init__
self.register_modules(
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 162, in register_modules
library, class_name = _fetch_class_library_tuple(module)
File "/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 731, in _fetch_class_library_tuple
library = not_compiled_module.__module__.split(".")[0]
AttributeError: 'list' object has no attribute '__module__'. Did you mean: '__mul__'?
```
### Reproduction
```
import torch
from diffusers import FluxControlNetPipeline, FluxControlNetModel
controlnet = [
FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
]
pipe = FluxControlNetPipeline.from_pretrained('/mnt/bn/x/sd_models/flux_schnell/', controlnet=controlnet, torch_dtype=torch.bfloat16).to("cuda")
```
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.14
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.24.5
- Transformers version: 4.38.2
- Accelerate version: 0.33.0
- PEFT version: 0.12.0
- Bitsandbytes version: 0.44.1
- Safetensors version: 0.4.4
- xFormers version: 0.0.27
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9643 | closed | [
"bug"
] | 2024-10-11T03:47:06Z | 2024-10-11T17:39:20Z | 1 | RimoChan |
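A possible workaround sketch for the report above: wrap the list in `FluxMultiControlNetModel` (available in recent diffusers releases) before handing it to the pipeline; the local checkpoint path is taken from the report:
```python
import torch
from diffusers import FluxControlNetPipeline, FluxControlNetModel
from diffusers.models import FluxMultiControlNetModel

controlnet = FluxMultiControlNetModel([
    FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
    FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16),
])

# Wrapping the list avoids passing a bare Python list to register_modules.
pipe = FluxControlNetPipeline.from_pretrained(
    "/mnt/bn/x/sd_models/flux_schnell/", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")
```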
huggingface/diffusers | 9,639 | How to use my own trained lora in local computer? | local_model_path = r"D:\downloads\FLUX.1-schnell"
pipe = FluxPipeline.from_pretrained(local_model_path, torch_dtype=torch.bfloat16)
# LoRA not working when loaded this way
pipe.load_lora_weights("XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors")
pipe.load_lora_weights(r"D:\AI\stable-diffusion-webui-forge\models\Lora\myflux\myhsr.safetensors")
pipe.fuse_lora()
pipe.unload_lora_weights()
#pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
pipe.enable_sequential_cpu_offload()
But it seems it is not loading my own LoRA properly. | https://github.com/huggingface/diffusers/issues/9639 | closed | [] | 2024-10-10T23:19:47Z | 2024-11-10T08:49:08Z | null | derekcbr |
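A minimal sketch of loading a single local LoRA, assuming the `.safetensors` file is in a LoRA format diffusers can read; the adapter name and prompt are illustrative, and the fuse/unload calls from the snippet above are left out on purpose:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(r"D:\downloads\FLUX.1-schnell", torch_dtype=torch.bfloat16)

# Load one LoRA at a time and give it a name so it can be enabled/weighted explicitly.
pipe.load_lora_weights(
    r"D:\AI\stable-diffusion-webui-forge\models\Lora\myflux",
    weight_name="myhsr.safetensors",
    adapter_name="myhsr",
)
pipe.set_adapters(["myhsr"], adapter_weights=[1.0])

pipe.enable_sequential_cpu_offload()
image = pipe("a photo in myhsr style", num_inference_steps=4).images[0]
```
If the file was exported by a trainer that uses non-diffusers key names, `load_lora_weights` may raise a key-mismatch error, which would point to a conversion problem rather than a loading one.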
huggingface/evaluation-guidebook | 14 | [TOPIC] How to design a good benchmark depending on your eval goals | Eval goals can be finding a good model for you vs ranking models vs choosing a good training config.
Request by Luca Soldaini
Cf https://x.com/soldni/status/1844409854712218042 | https://github.com/huggingface/evaluation-guidebook/issues/14 | closed | [] | 2024-10-10T16:20:40Z | 2025-09-18T08:31:15Z | null | clefourrier |
huggingface/diffusers | 9,633 | Confusion about accelerator.num_processes in get_scheduler | In the example code from [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image_sdxl.py#L974):
```python
num_warmup_steps = args.lr_warmup_steps * args.gradient_accumulation_steps
```
But in [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image.py#L830):
```python
num_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes
```
Why is there such a difference in these two cases? | https://github.com/huggingface/diffusers/issues/9633 | closed | [
"stale"
] | 2024-10-10T08:39:12Z | 2024-11-09T15:37:33Z | 5 | hj13-mtlab |
huggingface/transformers.js | 968 | It's ready | ### Question
The project I've been working on for the part few months is now ready-enough to reveal to the world. Transformers.js is an essential part of it, and I just want to say thank you for your amazing work.
https://www.papeg.ai
As you can see in the source code, there are lots of workers that implement Transformers.js workers; translation, image description, STT, TTS, speaker verification, image- and music generation, RAG embedding, and more!
https://github.com/flatsiedatsie/papeg_ai
Keep on rockin' !
// Reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1g0jehn/ive_been_working_on_this_for_6_months_free_easy/
(Feel free to close this issue at any time) | https://github.com/huggingface/transformers.js/issues/968 | closed | [
"question"
] | 2024-10-10T04:39:48Z | 2025-05-29T22:49:24Z | null | flatsiedatsie |
huggingface/datasets | 7,211 | Describe only selected fields in README | ### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields in order not to overcomplicate the Dataset Preview and filter out some fields
### Motivation
The `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50
### Your contribution
I'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible | https://github.com/huggingface/datasets/issues/7211 | open | [
"enhancement"
] | 2024-10-09T16:25:47Z | 2024-10-09T16:25:47Z | 0 | alozowski |
huggingface/transformers.js | 965 | Error: cannot release session. invalid session id | ### Question
I'm trying to get ASR + segmentation to run on a mobile phone (Pixel 6A, 6GB ram). This time on Brave mobile ;-)
ASR alone works fine. But I have a question about also getting the speaker recognition to run (segmentation+verification).
In the example implementation, a `promiseAll` is used to run both ASR and segmentation in parallel. For my implementation I've tried to run them one after the other, hoping that this would mean less memory is needed. E.g.:
- Create ASR instance
-- Get text and chunks from audio
- Dispose of ASR instance
- Create segmentation instance
-- Get segments from audio
- Dispose of segmentation instance
- Create verification instance
-- Run verification on chunks of audio from each segment
- Dispose of verification instance
I don't know if it's related, but I noticed the error below:
<img width="550" alt="Screenshot 2024-10-09 at 15 11 13" src="https://github.com/user-attachments/assets/27873ca1-218b-44b9-8d9a-3af3a46bdb5c">
My questions are:
- Is it a valid assumption that doing things consecutively will allow this cascade to run on devices with less memory? Or was there a good reason that a promiseAll was used?
- What does the error mean?
- Is running them consecutively part of why the error occurs?
- Can I use `quantized` with the segmentation and verification models in order to save memory? Currently the ASR (tiny-whisper.en_timestamped) is 114MB, and then the segmentation and verification seem to be 512 MB together.
I haven't split up loading the segmentation and verification instances yet, as I thought I'd get your opinion first.
```
class SegmentationSingleton {
static instance = null;
static segmentation_model_id = 'onnx-community/pyannote-segmentation-3.0';
static segmentation_instance = null;
static segmentation_processor = null;
static loaded_segmentation = false;
static verification_model_id = 'Xenova/wavlm-base-plus-sv'; // Xenova/wavlm-base-plus-sv
//static verification_model_id = 'onnx-community/wespeaker-voxceleb-resnet34-LM';
static verification_instance = null;
static verification_processor = null;
static instance_exists(){
return this.segmentation_instance != null;
}
static set_to_null(var_to_null=null){
if(typeof var_to_null == 'string' && typeof this[var_to_null] != 'undefined'){
this[var_to_null] = null;
//console.log("SegmentationSingleton: set_to_null: ", var_to_null);
}
}
//static async getInstance(progress_callback=null,model_name='onnx-community/whisper-base_timestamped',preferences={},load_segmentation=true) {
static async getInstance(progress_callback=null,preferences={}) {
//console.log("Whisper_worker: SegmentationSingleton: getInstance");
if(self.is_mobile){
console.log("mobile, so setting quantized to true for segmentation AI's");
preferences['quantized'] = true;
}
this.loaded_segmentation = true
console.log("segmentationSingleton: creating segmentation instances");
this.segmentation_processor ??= AutoProcessor.from_pretrained(this.segmentation_model_id, {
...preferences,
progress_callback,
});
this.segmentation_instance ??= AutoModelForAudioFrameClassification.from_pretrained(this.segmentation_model_id, {
// NOTE: WebGPU is not currently supported for this model
// See https://github.com/microsoft/onnxruntime/issues/21386
device: 'wasm',
//dtype: 'fp32',
dtype: 'q8',
...preferences,
progress_callback,
});
if(this.verification_model_id.endsWith('wespeaker-voxceleb-resnet34-LM')){
self.similarity_threshold = 0.5;
self.perfect_simillarity_threshold = 0.7;
}
else{
self.similarity_threshold = 0.95;
self.perfect_simillarity_threshold = 0.98;
}
this.verification_processor ??= AutoProcessor.from_pretrained(this.verification_model_id, {
device: 'wasm',
dtype: 'fp32',
//device: 'webgpu',
//dtype: 'q8',
...preferences,
progress_callback,
});
this.verification_instance ??= AutoModel.from_pretrained(this.verification_model_id, {
device: 'wasm',
dtype: 'fp32',
//device: 'webgpu',
//dtype: 'q8',
...preferences,
progress_callback,
});
return Promise.all([this.segmentation_processor, this.segmentation_instance, this.verification_processor, this.verification_instance]);
}
}
``` | https://github.com/huggingface/transformers.js/issues/965 | open | [
"question"
] | 2024-10-09T13:57:48Z | 2024-10-09T15:51:02Z | null | flatsiedatsie |
huggingface/chat-ui | 1,509 | (BUG) OAuth login splash is BROKEN/does NOT work | On newer versions of chat-ui the login splash screen does not work. Say, for instance, you have OAuth set up and are not logged in. You should get a popup prompting you to log in and not see the interface. This used to work without a problem. I just realized this is no longer working on the newer versions. I have OAuth set up through Hugging Face working perfectly.
Note: even though the splash is not shown, someone would still be prevented from using the chatbot, as it just won't work if you're not logged in. However, I kind of like the splash. Does anyone know how to get this working again? If you've already messed with it, save me some time. Thank you, Hugging Face, for creating this project. Are we going to get any of the newer HuggingChat options implemented here, specifically the continue button and the new search/agent control popup panel vs. just search on/off? Thanks, and I wish y'all the best.
***Splash on 0.8.4 (Working)

***Splash on 0.9.3 (Not Working)

| https://github.com/huggingface/chat-ui/issues/1509 | closed | [
"bug"
] | 2024-10-08T18:06:01Z | 2024-11-27T15:02:46Z | 2 | bpawnzZ |
huggingface/trl | 2,196 | How to exit training when the loss is less than a specified value in SFTTrainer? | I asked ChatGPT this question first; it gave the answer below:
```
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported
# Define customized Trainer class
class CustomSFTTrainer(SFTTrainer):
def __init__(self, *args, min_loss_threshold=0.001, **kwargs):
super().__init__(*args, **kwargs)
self.min_loss_threshold = min_loss_threshold
def train(self, *args, **kwargs):
# Rewrite the train() method to monitor the loss.
for step, batch in enumerate(self.get_train_dataloader()):
outputs = self.model(**batch)
loss = outputs.loss
loss.backward()
self.optimizer.step()
self.lr_scheduler.step()
self.optimizer.zero_grad()
# If the loss is less than a specified value, exit training.
if loss.item() < self.min_loss_threshold:
print(f"Stopping training early at step {step} as loss {loss.item()} is below threshold {self.min_loss_threshold}")
break
# Print loss log.
if step % self.args.logging_steps == 0:
print(f"Step {step}, Loss: {loss.item()}")
# Initialize the customized Trainer.
trainer = CustomSFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=ds_split['train'],
dataset_text_field="text",
max_seq_length=max_seq_length,
dataset_num_proc=2,
min_loss_threshold=0.001, # Specify the loss threshold
args=TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
warmup_steps=5,
max_steps=200,
learning_rate=2e-4,
fp16=not is_bfloat16_supported(),
bf16=is_bfloat16_supported(),
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=3407,
output_dir="outputs",
),
)
trainer.train()
```
However, the code above raised the error below:
`# Calls into the C++ engine to run the backward pass RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 482, 3584]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). `
I fed the error back to ChatGPT, and it advised adding 2 lines to the code:
```
...
loss = outputs.loss
# Avoid inplace-updating
loss = loss.clone()
loss.backward()
...
```
I re-ran the code, and it raised the errors below:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-8-079eb3ca0b07>](https://localhost:8080/#) in <cell line: 2>()
1 torch.autograd.set_detect_anomaly(True)
----> 2 trainer_stats = trainer.train()
3 frames
[/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py](https://localhost:8080/#) in _engine_run_backward(t_outputs, *args, **kwargs)
767 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
768 try:
--> 769 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
770 t_outputs, *args, **kwargs
771 ) # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 256, 3584]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
What should I do? | https://github.com/huggingface/trl/issues/2196 | closed | [
"โ question",
"๐ SFT"
] | 2024-10-08T03:13:27Z | 2024-10-08T10:39:51Z | null | fishfree |
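A less invasive sketch than overriding `train()`: a `TrainerCallback` that watches the logged training loss and asks the trainer to stop; the threshold is illustrative, and the callback is passed through the standard `callbacks` argument that `SFTTrainer` inherits from `Trainer`:
```python
from transformers import TrainerCallback

class StopOnLossThreshold(TrainerCallback):
    """Ask the trainer to stop once the logged training loss drops below a threshold."""

    def __init__(self, min_loss_threshold: float = 0.001):
        self.min_loss_threshold = min_loss_threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None and logs.get("loss", float("inf")) < self.min_loss_threshold:
            print(f"Stopping early at step {state.global_step}: loss={logs['loss']}")
            control.should_training_stop = True
        return control

# trainer = SFTTrainer(..., callbacks=[StopOnLossThreshold(0.001)])
# trainer.train()
```
This leaves the backward pass, gradient accumulation, and mixed-precision handling to the stock training loop, which avoids the inplace/autograd error shown above.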
huggingface/safetensors | 532 | Documentation about multipart safetensors | ### Feature request
Add examples to the documentation about handling multipart safetensors files (`*-00001.safetensors`, `*-00002.safetensors`, etc.). How do you load/save them?
### Motivation
This is a widespread format, but the README and docs don't contain enough information about it.
### Your contribution
Can't help by myself | https://github.com/huggingface/safetensors/issues/532 | closed | [] | 2024-10-07T20:14:48Z | 2025-01-03T17:36:31Z | 6 | attashe |
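For the multipart question above, a minimal loading sketch, assuming the standard `model.safetensors.index.json` layout written by `save_pretrained`; the helper name is made up for illustration:
```python
import json
from pathlib import Path
from safetensors.torch import load_file

def load_sharded_state_dict(checkpoint_dir: str) -> dict:
    """Merge all shards listed in model.safetensors.index.json into one state dict."""
    checkpoint_dir = Path(checkpoint_dir)
    index = json.loads((checkpoint_dir / "model.safetensors.index.json").read_text())

    state_dict = {}
    # weight_map points each tensor name to the shard file that stores it.
    for shard_file in sorted(set(index["weight_map"].values())):
        state_dict.update(load_file(str(checkpoint_dir / shard_file)))
    return state_dict

# state_dict = load_sharded_state_dict("path/to/checkpoint")
```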
huggingface/diffusers | 9,599 | Why there is no LoRA only finetune example of FLUX.1? | **Is your feature request related to a problem? Please describe.**
The only example of LoRA fine-tuning for FLUX.1 I discovered is here:
https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py
which is a DreamBooth example. DreamBooth is VRAM-intensive and not useful for the scenario where the dataset is big enough and does not need regularization images.
**Describe the solution you'd like.**
A LoRA-only example for FLUX.1
**Describe alternatives you've considered.**
Provide some tips for me to modify by myself.
| https://github.com/huggingface/diffusers/issues/9599 | closed | [] | 2024-10-07T06:22:54Z | 2024-10-09T12:48:32Z | 3 | eeyrw |
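As a tip for the request above, the LoRA-injection step in the DreamBooth script is independent of the prior-preservation machinery; a sketch of that core step, with illustrative rank and target modules (the FLUX.1-dev repo id is gated, so any local checkpoint works too):
```python
import torch
from diffusers import FluxTransformer2DModel
from peft import LoraConfig

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
transformer.requires_grad_(False)

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
transformer.add_adapter(lora_config)  # only the injected LoRA params require grad

trainable = [p for p in transformer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable LoRA parameters")
```
The rest of the DreamBooth script (dataloader, optimizer, loss) can then be reused on a regular captioned dataset without instance/class prompts or regularization images.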
huggingface/chat-ui | 1,506 | Add support for local models | ## Describe your feature request
I was looking for an open-source alternative to PocketPal, which allows to converse with local models on iOS and Android https://apps.apple.com/us/app/pocketpal-ai/id6502579498 and I was wondering if HuggingChat could be this alternative? The idea is to have an e2e open-source solution, providing e2e privacy.
I hope I didn't miss anything in the app allowing to support this.
Thanks
## Screenshots (if relevant)
## Implementation idea
I'm happy to help provided support from the community and the HuggingFace team. I have experience on web development, but not with running LLM on mobile.
| https://github.com/huggingface/chat-ui/issues/1506 | closed | [
"enhancement"
] | 2024-10-06T20:18:24Z | 2024-10-07T13:45:45Z | 3 | arnaudbreton |
huggingface/tokenizers | 1,644 | How to build a custom tokenizer on top of an existing Llama 3.2 tokenizer? | Hi,
I was trying to create a custom tokenizer for a different language that is not covered by the Llama 3.2 tokenizer.
I could not find exactly which tokenizer from HF is the exact counterpart of Llama's tokenizer [link](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py), so that I would be able to train a new tokenizer.
Currently I am using the following code to train a tokenizer, but the final result does not match the one Llama 3.2 has.
It would be nice if anyone could share their experience of adapting a Llama model to a new language.
```
import json
import argparse
from datasets import load_dataset, concatenate_datasets
from tokenizers import SentencePieceBPETokenizer
from transformers import LlamaTokenizerFast, AutoTokenizer
from tqdm import tqdm
from typing import List
hf_datasets = ["yakhyo/uz-wiki", "yakhyo/uz-news", "agentlans/high-quality-english-sentences"]
def normalize_text(text: str) -> str:
"""
Normalize Uzbek characters, replacing variations of o‘, o', o`, and ’ (curved apostrophe).
"""
return text.replace("‘", "'").replace("`", "'").replace("’", "'").replace("()", "")
def prepare_datasets(datasets_list: List[str]):
all_data = []
for dataset_name in datasets_list:
try:
data = load_dataset(dataset_name)
for split in ["train", "test", "validation"]:
try:
all_data.append(data[split])
except KeyError:
pass
except:
print(f"dataset: `{dataset_name}` not found, skipping...")
concat_data = []
for data in tqdm(all_data):
data = data.map(lambda example: {"text": normalize_text(example["text"])})
data = data.remove_columns([col for col in data.column_names if col != "text"])
concat_data.append(data)
return concatenate_datasets(concat_data)
def main(args):
dataset = prepare_datasets(hf_datasets)
# select num_samples from the dataset
dataset = dataset.shuffle(seed=42).select(range(len(dataset)))
# Create a SentencePieceBPETokenizer
tokenizer = SentencePieceBPETokenizer(
replacement="Ġ"
)
# Train the SentencePieceBPETokenizer on the dataset
tokenizer.train_from_iterator(
iterator=dataset['text'],
vocab_size=args.vocab_size,
show_progress=True,
special_tokens=[
"<unk>",
"<s>",
"</s>",
"<pad>"
],
)
# Save the tokenizer
tokenizer.save("new-sentencepiece-tokenizer.json", pretty=True)
# Load reference tokenizer
if args.reference_tokenizer is not None:
reference_tokenizer = AutoTokenizer.from_pretrained(args.reference_tokenizer)
reference_tokenizer.save_pretrained("reference-tokenizer")
else:
raise ValueError(
"No tokenizer name provided or no hub token provided. Try using --reference_tokenizer 'meta-llama/Llama-2-7b-hf'")
# Read and dump the json file for the new tokenizer and the reference tokenizer
with open("new-sentencepiece-tokenizer.json") as f:
new_llama_tokenizer_json = json.load(f)
with open("reference-tokenizer/tokenizer.json") as f:
reference_tokenizer_json = json.load(f)
# Add the reference tokenizer's config to the new tokenizer's config
new_llama_tokenizer_json["normalizer"] = reference_tokenizer_json["normalizer"]
new_llama_tokenizer_json["pre_tokenizer"] = reference_tokenizer_json["pre_tokenizer"]
new_llama_tokenizer_json["post_processor"] = reference_tokenizer_json["post_processor"]
new_llama_tokenizer_json["decoder"] = reference_tokenizer_json["decoder"]
new_llama_tokenizer_json["model"]['fuse_unk'] = reference_tokenizer_json["model"]['fuse_unk']
new_llama_tokenizer_json["model"]['byte_fallback'] = reference_tokenizer_json["model"]['byte_fallback']
# Dump the new tokenizer's config
with open("new-sentencepiece-tokenizer.json", "w") as f:
json.dump(new_llama_tokenizer_json, f, indent=2, ensure_ascii=False)
# Load the new tokenizer as a LlamaTokenizerFast
new_llama_tokenizer = LlamaTokenizerFast(
tokenizer_file="new-sentencepiece-tokenizer.json",
unk_token="<unk>",
unk_token_id=0,
bos_token="<s>",
bos_token_id=1,
eos_token="</s>",
eos_token_id=2,
pad_token="<pad>",
pad_token_id=3,
padding_side="right",
)
# Save the new tokenizer
new_llama_tokenizer.save_pretrained("new-llama-tokenizer")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Llama Tokenizer using SentencePieceBPE")
parser.add_argument(
"--reference_tokenizer",
type=str,
default=None,
help="The name of the reference tokenizer to use"
)
parser.ad | https://github.com/huggingface/tokenizers/issues/1644 | closed | [
"training"
] | 2024-10-05T13:18:55Z | 2025-02-26T12:06:15Z | null | yakhyo |
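Related to the question above, a shorter sketch of the common approach of retraining only the vocabulary while reusing the reference tokenizer's pipeline via `train_new_from_iterator`; the Llama 3.2 repo id and vocab size are illustrative assumptions (the repo is gated), and the Uzbek corpus is the one used in the script:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Reuse the reference tokenizer's normalizer, pre-tokenizer, and special tokens.
reference = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

corpus = load_dataset("yakhyo/uz-wiki", split="train")

def batch_iterator(batch_size=1000):
    for i in range(0, len(corpus), batch_size):
        yield corpus[i : i + batch_size]["text"]

# Retrain only the vocab/merges on the new-language corpus; the tokenization pipeline is kept.
new_tokenizer = reference.train_new_from_iterator(batch_iterator(), vocab_size=32_000)
new_tokenizer.save_pretrained("uz-llama-tokenizer")
```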
huggingface/datasets | 7,196 | concatenate_datasets does not preserve shuffling state | ### Describe the bug
After concatenate datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156
This means concatenation cant be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting as discussed in #6623
I also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether this is actually surprising or not
### Steps to reproduce the bug
```python
import datasets
import torch.utils.data
def gen(shards):
yield {"shards": shards}
def main():
dataset1 = datasets.IterableDataset.from_generator(
gen, gen_kwargs={"shards": list(range(25))} # TODO: how to understand this?
)
dataset2 = datasets.IterableDataset.from_generator(
gen, gen_kwargs={"shards": list(range(25, 50))} # TODO: how to understand this?
)
dataset1 = dataset1.shuffle(buffer_size=1)
dataset2 = dataset2.shuffle(buffer_size=1)
print(dataset1.n_shards)
print(dataset2.n_shards)
dataset = datasets.concatenate_datasets(
[dataset1, dataset2]
)
print(dataset.n_shards)
# dataset = dataset1
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=0,
)
for i, batch in enumerate(dataloader):
print(batch)
print("\nNew epoch")
dataset = dataset.set_epoch(1)
for i, batch in enumerate(dataloader):
print(batch)
if __name__ == "__main__":
main()
```
### Expected behavior
Shuffling state should be preserved
### Environment info
Latest datasets | https://github.com/huggingface/datasets/issues/7196 | open | [] | 2024-10-03T14:30:38Z | 2025-03-18T10:56:47Z | 1 | alex-hh |
huggingface/diffusers | 9,575 | diffusers version update to 0.27.0 from 0.20.0, training code seems not to work | I have trained an inpainting model using diffusers 0.20.0. The trained model works as expected. However, something seems wrong when I update the diffusers version to 0.27.0 while keeping the training code and other requirements the same. The training code runs successfully, but the inference outputs look like noise. Is there anything that should be noted in this case? | https://github.com/huggingface/diffusers/issues/9575 | closed | [] | 2024-10-03T14:30:21Z | 2024-10-15T08:58:36Z | 4 | huangjun12 |
huggingface/transformers | 33,909 | How to implement weight decay towards the pre-trained model? | Hello, let me ask one question.
If using the HF Trainer for supervised fine-tuning, how do I implement penalizing the distance between the starting and current weights? This was shown to be effective in https://arxiv.org/abs/1706.03610 | https://github.com/huggingface/transformers/issues/33909 | open | [
"Usage",
"Feature request"
] | 2024-10-03T11:18:53Z | 2024-10-22T13:16:26Z | null | sedol1339 |
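One way to implement the penalty from the linked paper (often called L2-SP) is a custom `Trainer` that keeps a frozen snapshot of the starting weights and adds the squared distance to the loss; this is a sketch with `sp_lambda` and the class name chosen for illustration, and it does not handle the `module.` name prefix of DDP-wrapped models:
```python
import torch
from transformers import Trainer

class L2SPTrainer(Trainer):
    """Trainer that penalizes the L2 distance between current and starting weights."""

    def __init__(self, *args, sp_lambda: float = 0.01, **kwargs):
        super().__init__(*args, **kwargs)
        self.sp_lambda = sp_lambda
        # Frozen snapshot of the pre-trained weights; never passed to the optimizer.
        self.initial_params = {
            name: param.detach().clone()
            for name, param in self.model.named_parameters()
            if param.requires_grad
        }

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        outputs = model(**inputs)
        loss = outputs.loss
        penalty = 0.0
        for name, param in model.named_parameters():
            if name in self.initial_params:
                ref = self.initial_params[name].to(param.device)
                penalty = penalty + torch.sum((param - ref) ** 2)
        loss = loss + self.sp_lambda * penalty
        return (loss, outputs) if return_outputs else loss
```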
huggingface/datasets | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | ### Feature request
Huggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, all these guides assume the audio array data to be decoded/inserted into a HF dataset always originates from individual files. The [Audio-dataclass](https://github.com/huggingface/datasets/blob/3.0.1/src/datasets/features/audio.py#L20) appears designed with this assumption in mind. Looking at its source code it returns a dictionary with the keys `path`, `array` and `sampling_rate`.
However, sometimes users may have different pipelines where they themselves decode the audio array. This feature request has to do with wishing some clarification in guides on whether it is possible, and in such case how users can insert already decoded audio array data into datasets (pandas DataFrame, HF dataset or whatever) that are later saved as parquet, and still get a functioning audio preview in the dataset viewer.
Do I perhaps need to write a tempfile of my audio array slice to wav and capture the bytes object with `io.BytesIO` and pass that to `Audio()`?
### Motivation
I'm working with large audio datasets, and my pipeline reads (decodes) audio from larger files, and slices the relevant portions of audio from that larger file based on metadata I have available.
The pipeline is designed this way to avoid having to store multiple copies of data, and to avoid having to store tens of millions of small files.
I tried [test-uploading parquet files](https://huggingface.co/datasets/Lauler/riksdagen_test) where I store the audio array data of decoded slices of audio in an `audio` column with a dictionary with the keys `path`, `array` and `sampling_rate`. But I don't know the secret sauce of what the Huggingface Hub expects and requires to be able to display audio previews correctly.
### Your contribution
I could contribute a tool agnostic guide of creating HF audio datasets directly as parquet to the HF documentation if there is an interest. Provided you help me figure out the secret sauce of what the dataset viewer expects to display the preview correctly. | https://github.com/huggingface/datasets/issues/7189 | open | [
"enhancement"
] | 2024-10-02T16:38:38Z | 2024-10-02T17:01:40Z | 0 | Lauler |
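On the question above, a small sketch of building a dataset directly from already-decoded arrays, assuming the `Audio` feature accepts `{"array", "sampling_rate"}` dicts without a `path`; the repo id in the commented `push_to_hub` call is a placeholder:
```python
import numpy as np
from datasets import Audio, Dataset

sampling_rate = 16_000
# Stand-in for slices you decoded yourself from a larger audio file.
slices = [np.random.randn(sampling_rate * 2).astype(np.float32) for _ in range(3)]

ds = Dataset.from_dict(
    {
        "audio": [{"array": arr, "sampling_rate": sampling_rate} for arr in slices],
        "text": ["slice 0", "slice 1", "slice 2"],
    }
)
# Casting to Audio makes datasets encode the arrays (e.g. to WAV bytes) when the
# dataset is saved or pushed, which is what the Hub viewer needs for previews.
ds = ds.cast_column("audio", Audio(sampling_rate=sampling_rate))

# ds.push_to_hub("username/my-audio-dataset")  # placeholder repo id
```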
huggingface/transformers.js | 958 | Zombies in memory - something is blocking (re)loading of Whisper after a page is closed and re-opened | ### Question
I've been trying to debug this issue all afternoon, but haven't gotten any further. The code runs on desktop, but not on Android Chrome.
This is with V3 Alpha 19.
<img width="571" alt="Screenshot 2024-10-02 at 16 06 16" src="https://github.com/user-attachments/assets/c5fbb2cb-0cdf-431a-8099-021d19a10384">
<img width="569" alt="Screenshot 2024-10-02 at 16 06 40" src="https://github.com/user-attachments/assets/d09a6b09-0a05-4d38-af0e-d1c88a08003c">
<img width="569" alt="Screenshot 2024-10-02 at 16 06 56" src="https://github.com/user-attachments/assets/fc3de899-dfdb-425a-92c1-69e3c40b4fd8">
| https://github.com/huggingface/transformers.js/issues/958 | closed | [
"question"
] | 2024-10-02T14:10:27Z | 2024-10-18T12:47:17Z | null | flatsiedatsie |
huggingface/diffusers | 9,567 | [community] Improving docstrings and type hints | There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:
r"""
Function that creates a simulation.
Args:
parameter_1 (`str` or `List[str]`):
Description of game level.
parameter_2 (`int`, *optional*):
Kardashev scale of civilization.
parameter_3 (`float`, defaults to `42.0`):
Difficulty scale.
Returns:
[`~simulations.objects.Civilization`]
A civilization simulation with provided initialization parameters.
"""
```
Some examples that don't follow the docstring convention are:
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations
- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas as [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after "Args", but should be before
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation
There are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!
Please limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews. | https://github.com/huggingface/diffusers/issues/9567 | closed | [
"documentation",
"good first issue",
"contributions-welcome"
] | 2024-10-02T03:20:44Z | 2025-11-13T22:45:59Z | 16 | a-r-r-o-w |
huggingface/datasets | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9`, which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for multiprocess, something like `multiprocess<=0.70.16`, so that the `dill` version stays compatible?
### Steps to reproduce the bug
NA
### Expected behavior
NA
### Environment info
NA | https://github.com/huggingface/datasets/issues/7186 | closed | [] | 2024-10-01T22:29:32Z | 2024-10-02T06:08:24Z | 0 | shubhbapna |
huggingface/chat-ui | 1,499 | Error 500 "RPError" | OpenID Connect + SafeNet Trusted Access (STA) | Hello,
I would like to deploy OpenID Connect with SafeNet Trusted Access (STA).
From this 3-minute video, I've done all the steps, except for OAuth.tools which I don't use :
https://www.youtube.com/watch?v=hSWXFSadpQQ
Here's my bash script that deploys the containers | ```deploy.sh``` :
```bash
#!/bin/bash
# previous containers removed
sudo docker rm -f ollama
sudo docker rm -f mongodb
sudo docker rm -f chat-ui
sudo docker rm -f nginx
# previous networks removed
sudo docker network rm backend >/dev/null 2>&1
sudo docker network rm proxy >/dev/null 2>&1
# create networks
sudo docker network create backend
sudo docker network create proxy
# ollama
sudo docker run -d -p 11434:11434 -e HTTPS_PROXY="${HTTPS_PROXY}" -v /home/<my-user>/chat-ui/ollama:/root/.ollama --name ollama --network backend ollama-with-ca
sleep 5
sudo docker exec ollama taskset -c 0-40 ollama run llama3.1
# mongodb
sudo docker run -d -p 27017:27017 -v mongodb-data:/data/db --name mongodb --network backend mongo:latest
# chat-ui
sudo docker run -d -p 3000:3000 -e HTTPS_PROXY="${HTTPS_PROXY}" --mount type=bind,source="$(pwd)/.env.local",target=/app/.env.local -v chat-ui:/data --name chat-ui --network backend ghcr.io/huggingface/chat-ui-db
sudo docker network connect proxy chat-ui
# nginx
sudo docker run -d -p 80:80 -p 443:443 -v "$(pwd)/nginx:/etc/nginx/conf.d" -v "$(pwd)/ssl:/etc/ssl" --name nginx --network proxy nginx:latest
```
Here's my ```nginx``` configuration :
```nginx
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name <my-chat-ui>.fr;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name <my-chat-ui>.fr;
ssl_certificate /etc/ssl/chat-ui.crt;
ssl_certificate_key /etc/ssl/chat-ui.key;
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
send_timeout 60;
client_max_body_size 2G;
proxy_buffering off;
client_header_buffer_size 8k;
location / {
proxy_pass http://chat-ui:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
add_header 'Access-Control-Allow-Origin' 'https://<my-chat-ui>.fr' always;
}
}
```
Finally, here's my ```.env.local``` using Llama3.1 8B model :
```.env
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=hf_*****
OPENID_CONFIG=`{
"PROVIDER_URL": "https://idp.eu.safenetid.com/auth/realms/<realm-ID>-STA/protocol/openid-connect/auth",
"CLIENT_ID": "*****",
"CLIENT_SECRET": "*****",
"SCOPES": "openid profile"
}`
MODELS=`[
{
"name": "Ollama | Llama3.1",
"id": "llama3.1-8b",
"description": "llama3.1-8b",
"chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}<|start_header_id|>assistant<|end_header_id|>\n\n",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["<|end_of_text|>", "<|eot_id|>"]
},
"endpoints": [
{
"type": "ollama",
"url" : "http://ollama:11434",
"ollamaName" : "llama3.1:latest"
}
]
}
]`
```
And I got this error when I press on "Login" button :

When I do the command ```sudo docker logs chat-ui```, I see this line :
```{"level":50,"time":1727703253975,"pid":30,"hostname":"fe9d8f548283","locals":{"sessionId":"3b700cd7b4efc2a2b47c0f13134904e01f01c3b7d6ff05c6726390e19ea5d431"},"url":"https://ia.chu-lyon.fr/login","params":{},"request":{},"message":"Internal Error","error":{"name":"RPError"},"errorId":"8d7d74e3-b12c-4c1e-9dc5-9847d5e61ea2","status":500}```
**Note that by adding the ```OPENID_CONFIG``` (with probably incorrect data), the application stops working completely and I can't launch prompts or delete/edit existing ones !**
**When I comment ```OPENID_CONFIG```, everything starts working properly again.**
I don't really know what to put exactly, especially for ```PROVIDER_URL``` and ```SCOPES```.
Can you help me to resolve this issue ?
Thanks in advance. | https://github.com/huggingface/chat-ui/issues/1499 | open | [
"support"
] | 2024-09-30T12:54:16Z | 2024-09-30T12:57:51Z | 0 | avirgos |
huggingface/diffusers | 9,560 | FP32 training for sd3 controlnet | Hi,
I have been using `examples\controlnet\train_controlnet_sd3.py` for ControlNet training for a while, and I have some confusion and would like your advice.
1. In the line 1097:
`vae.to(accelerator.device, dtype=torch.float32)`
It seems we should use fp32 for the VAE, but as far as I know, SD3 currently has no fp32 checkpoints, so does it really work if we upcast fp16 weights to fp32?
2. Before running the training script, `accelerate config` can specify whether to use mixed precision or not. Since SD3 only has an fp16 checkpoint at present, I don't know how to choose this option: whether to choose 'fp16' or 'no'.
Really appreciate your advice!
@sayakpaul @DavyMorgan
| https://github.com/huggingface/diffusers/issues/9560 | closed | [
"stale"
] | 2024-09-30T08:07:04Z | 2024-10-31T15:13:19Z | 11 | xduzhangjiayu |
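On point 1 above, a quick sketch showing that weights stored in fp16 can be upcast to fp32 at load time (the cast itself is lossless); the SD3 medium repo id is only illustrative and is gated on the Hub:
```python
import torch
from diffusers import AutoencoderKL

# Weights stored on the Hub in fp16 can still be held and run in fp32:
vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", subfolder="vae", torch_dtype=torch.float16
)
vae.to("cuda", dtype=torch.float32)  # same upcast the training script performs
print(next(vae.parameters()).dtype)  # torch.float32
```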
huggingface/huggingface_hub | 2,578 | What is the highest Python version currently supported? | ### Describe the bug
I utilized Hugging Face Spaces to construct my application, which was built with Gradio on a ZeroGPU Space; the link is: https://huggingface.co/spaces/tanbw/CosyVoice
In the readme.md, I specified the Python version as 3.8.9, but the version of Python that the application prints out is still 3.10. What is the highest Python version currently supported?



### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.24.5
- Platform: Linux-5.10.223-211.872.amzn2.x86_64-x86_64-with-glibc2.36
- Python version: 3.10.13
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/user/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 10.4.0
- hf_transfer: 0.1.8
- gradio: 4.44.0
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: 2.7.0
- aiohttp: 3.10.0
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/user/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/user/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/user/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: True
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
| https://github.com/huggingface/huggingface_hub/issues/2578 | closed | [
"bug"
] | 2024-09-29T14:37:38Z | 2024-09-30T07:05:29Z | null | tanbw |
huggingface/diffusers | 9,555 | [Flux Controlnet] Add control_guidance_start and control_guidance_end | It'd be nice to have `control_guidance_start` and `control_guidance_end` parameters added to the Flux ControlNet and ControlNet Inpainting pipelines.
I'm currently running experiments with Flux ControlNet Inpainting, but the results are poor even with `controlnet_conditioning_scale` set to 0.6.
I have to set `controlnet_conditioning_scale` to 0.4 to get non-broken results.
Maybe giving more control with the guidance start and end would help reach better results?
| https://github.com/huggingface/diffusers/issues/9555 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-29T12:37:39Z | 2024-10-10T12:29:03Z | 8 | simbrams |
huggingface/hub-docs | 1,435 | How to check if a space is duplicated from another one using HF API? | I cannot find any related specifications in the documentation...Thanks! | https://github.com/huggingface/hub-docs/issues/1435 | open | [] | 2024-09-28T23:52:08Z | 2025-01-16T17:08:34Z | null | zhimin-z |
huggingface/diffusers | 9,551 | How to use x-labs flux controlnet models in diffusers? | ### Model/Pipeline/Scheduler description
The following ControlNets are supported in ComfyUI, but I was wondering how we can use them in diffusers as well for developers. AFAIK, there is no `from_single_file` method for FluxControlNet to load the safetensors?
### Open source status
- [x] The model implementation is available.
- [x] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/XLabs-AI/flux-controlnet-canny
https://huggingface.co/XLabs-AI/flux-controlnet-canny-v3
_No response_ | https://github.com/huggingface/diffusers/issues/9551 | closed | [] | 2024-09-28T20:01:15Z | 2024-09-29T06:59:46Z | null | neuron-party |
huggingface/text-generation-inference | 2,583 | How to turn on the KV cache when serve a model? | ### System Info
TGI 2.3.0
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
The TTFT is really slow compared to vLLM. Can it be improved? If so, how do I turn on the KV cache when launching a model?
```
model=HuggingFaceH4/zephyr-7b-beta
# share a volume with the Docker container to avoid downloading weights every run
volume=$PWD/data
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
ghcr.io/huggingface/text-generation-inference:2.3.0 --model-id $model
```
### Expected behavior
Improve the TTFT and latency | https://github.com/huggingface/text-generation-inference/issues/2583 | open | [] | 2024-09-28T19:32:15Z | 2024-10-25T12:47:02Z | null | hahmad2008 |
huggingface/transformers.js | 948 | Getting Local models/wasm working with Create React App | ### Question
I realize there's been a lot of talk about this in other issues, but I'm trying to gather if getting local-only model and wasm files will work with Create React App. I'm using `WhisperForConditionalGeneration` from `@huggingface/transformers` version `3.0.0-alpha.9`.
My setup:
```
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.backends.onnx.wasm.wasmPaths = process.env.PUBLIC_URL + "/dictation/";
env.localModelPath = process.env.PUBLIC_URL + "/dictation/models/";
```
... and in my `{packagename}/public/models` folder I've got:
```
ort-wasm-simd-threaded.jsep.wasm
models/config.json
models/generation_config.json
models/preprocessor_config.json
models/tokenizer_config.json
models/tokenizer.json
models/onnx/decoder_model_merged_q4.onnx
models/onnx/encoder_model.onnx
```
This returns the `SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON` error that has been [discussed in other issues](https://github.com/xenova/transformers.js/issues/142). If I set `env.allowRemoteModels = true;` and
`env.allowLocalModels = false;`, and clear my application cache, this works fine. My questions on that:
1. How can I get the `wasm` file to load locally only? It caches fine and calls locally (
http://localhost:3000/dictation/ort-wasm-simd-threaded.jsep.wasm) after the initial CDN call, but I don't want to rely on an external CDN.
2. How can I get the model files to only call locally? (we will need to further train our own models). I have yet to get this working, but I assume the above error is to blame.
3. The main question: is this a limitation with CRA? I noticed that if I load the wasm file from the CDN first, it caches fine locally. It's just that initial call to the wasm local file (if not cached from the CDN) that fails, which people have said may be a CRA issue.
Thanks! Sorry for the long-winded question. Happy to provide any more code if needed. | https://github.com/huggingface/transformers.js/issues/948 | closed | [
"question"
] | 2024-09-26T20:42:33Z | 2024-09-26T21:26:30Z | null | stinoga |
huggingface/blog | 2,369 | How to finetune jina-embeddings-v3 by lora? | https://github.com/huggingface/blog/issues/2369 | open | [] | 2024-09-26T07:25:16Z | 2024-09-26T07:25:16Z | null | LIUKAI0815 | |
huggingface/text-generation-inference | 2,569 | Question: What is preferred way to cite TGI/repo? Didnt see a citation file. | https://github.com/huggingface/text-generation-inference/issues/2569 | open | [] | 2024-09-26T02:07:42Z | 2024-09-26T02:07:42Z | null | mkultraWasHere | |
huggingface/lerobot | 454 | Venv isn't needed in docker | I noticed in your docker files you are using a virtual environment. Docker is already a virtual environment at the system level. Is there a reason for using a python virtual environment as well? Typically, this is redundant/unnecessary and you'd only use venv or similar on your local machine.
If there isn't a good reason we could go ahead and delete these dependencies from the docker images. | https://github.com/huggingface/lerobot/issues/454 | closed | [
"enhancement",
"question",
"stale"
] | 2024-09-25T16:33:17Z | 2025-10-23T02:29:11Z | null | MichaelrMentele |
huggingface/diffusers | 9,528 | load_ip_adapter for distilled sd models | Is it possible to load IP-Adapter for distilled SD v1 or v2 based models such as nota-ai/bk-sdm-tiny or nota-ai/bk-sdm-v2-tiny?
When I tried to load ip adapter using bk-sdm-tiny
```python
pipe.load_ip_adapter(
"h94/IP-Adapter",
subfolder="models",
weight_name="ip-adapter-plus_sd15.bin",
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True
)
```
I got errors, probably because of differences in unet structures.
```
RuntimeError: Error(s) in loading state_dict for IPAdapterAttnProcessor2_0:
size mismatch for to_k_ip.0.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for to_v_ip.0.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([640, 768]).
```
How can I solve this problem? | https://github.com/huggingface/diffusers/issues/9528 | closed | [
"stale"
] | 2024-09-25T04:31:00Z | 2025-01-12T06:01:40Z | 7 | kmpartner |
huggingface/chat-ui | 1,486 | Getting 403 on chat ui config for aws sagemaker endpoint |
Hi All,
Looking into configuring chat-ui with an AWS SageMaker endpoint and getting the following error:

```
DOTENV_LOCAL was found in the ENV variables. Creating .env.local file.
{"level":30,"time":1727231014113,"pid":23,"hostname":"fbe21dc3ad38","msg":"Starting server..."}
{"level":30,"time":1727231014147,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] Begin check..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Update search assistants\" already applied. Skipping..."}
Listening on 0.0.0.0:3000
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Update deprecated models in assistants with the default model\" should not be applied for this run. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Add empty 'tools' record in settings\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Convert message updates to the new schema\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Convert message files to the new schema\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Trim message updates to reduce stored size\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] \"Reset tools to empty\" already applied. Skipping..."}
{"level":30,"time":1727231014175,"pid":23,"hostname":"fbe21dc3ad38","msg":"[MIGRATIONS] All migrations applied. Releasing lock"}
{"level":30,"time":1727231014207,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014216,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"updatedAt","span":"day","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014219,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"message","msg":"Computing conversation stats"}
{"level":30,"time":1727231014220,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"updatedAt","span":"week","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014224,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014227,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"createdAt","span":"month","type":"message","msg":"Computing conversation stats"}
{"level":30,"time":1727231014229,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014229,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"updatedAt","span":"day","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014230,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-25T00:00:00.000Z","dateField":"createdAt","span":"day","type":"message","msg":"Computed conversation stats"}
{"level":30,"time":1727231014230,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"updatedAt","span":"week","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014231,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"message","msg":"Computing conversation stats"}
{"level":30,"time":1727231014235,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"createdAt","span":"month","type":"message","msg":"Computed conversation stats"}
{"level":30,"time":1727231014236,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"conversation","msg":"Computed conversation stats"}
{"level":30,"time":1727231014236,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-22T00:00:00.000Z","dateField":"createdAt","span":"week","type":"message","msg":"Computed conversation stats"}
{"level":30,"time":1727231014238,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"createdAt","span":"month","type":"conversation","msg":"Computing conversation stats"}
{"level":30,"time":1727231014239,"pid":23,"hostname":"fbe21dc3ad38","minDate":"2024-09-01T00:00:00.000Z","dateField":"updatedAt","span":"month","type":"conversation","msg":"Computing conve | https://github.com/huggingface/chat-ui/issues/1486 | open | [
"support"
] | 2024-09-25T02:41:08Z | 2024-09-25T02:41:08Z | 0 | nauts |
huggingface/chat-macOS | 7 | Asking "what time is it?" will always return the local time of Paris, regardless of your location (⌘R+) | <img width="487" alt="Screenshot 2024-09-24 at 11 54 17 AM" src="https://github.com/user-attachments/assets/02d26c05-ae37-4caf-a3ff-5bc6aec42068">
I wonder how we can localize questions like this. I've tried ⌘R+, which always gives me the local time of Paris. Qwen2.5-72B and Llama 3.1 make up another non-specific time that's not my local time. I have web search enabled too, and I can see that they're using it, but they can't get it right, even when I give them my exact location either in the model's system prompt on HuggingChat or in the chat context of the app itself.
| https://github.com/huggingface/chat-macOS/issues/7 | open | [
"good first issue"
] | 2024-09-24T23:09:31Z | 2024-10-23T20:08:57Z | null | Reza2kn |
huggingface/diffusers | 9,520 | UNetMotionModel.dtype is really expensive to call, is it possible to cache it during inference? | **What API design would you like to have changed or added to the library? Why?**
We are using the class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin),
and its `forward()` implementation calls self.dtype, which is very expensive.

From my profiling trace, calling self.dtype takes 6-10 ms each time.
Can we somehow cache it to save time?

I took a look at the ModelMixin.dtype property; it gathers all parameters of the model into a tuple just to check the first parameter's dtype. I don't think it makes sense to do this every time, right?
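To illustrate what I mean by caching, here is a toy sketch on a plain `nn.Module` (not the actual diffusers code; it assumes the parameter dtypes never change after construction, otherwise the cached value would have to be invalidated):
```python
import functools

import torch
import torch.nn as nn


class CachedDtypeModule(nn.Module):
    """Toy module showing the idea: look the dtype up once, then reuse it."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)

    @functools.cached_property
    def dtype(self) -> torch.dtype:
        # Inspect only the first parameter, and only on the first access;
        # later accesses read the cached value instead of walking all parameters.
        return next(self.parameters()).dtype

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x.to(self.dtype))


m = CachedDtypeModule()
print(m.dtype)  # computed once, then served from the cache
```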

**What use case would this enable or better enable? Can you give us a code example?**
We are using this model to do video generation, so the inference is running repeatedly. Is it easy to optimize this ~10ms latency?
Thanks! | https://github.com/huggingface/diffusers/issues/9520 | closed | [
"wip",
"performance"
] | 2024-09-24T18:03:28Z | 2025-01-02T13:40:51Z | 7 | xiang9156 |
huggingface/chat-ui | 1,484 | Header prompt displayed using Llama3.1 with ollama | Hello,
I'm using the ```llama3.1:latest``` model with ```ollama``` and I'm having trouble correctly initializing the ```chatPromptTemplate``` variable.
I used this GitHub issue to initialize the variable: https://github.com/huggingface/chat-ui/issues/1035
Here is my ```.env.local``` file:
```.env
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=<hf-token>
PUBLIC_APP_NAME=<name>
MODELS=`[
{
"name": "Ollama | Llama3.1",
"chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["<|end_of_text|>", "<|eot_id|>"]
},
"endpoints": [
{
"type": "ollama",
"url" : "http://ollama:11434",
"ollamaName" : "llama3.1:latest"
}
]
}
]`
```
But ```<|start_header_id|>assistant<|end_header_id|>``` appears on every response :

Can you help me make it disappear by modifying the ```chatPromptTemplate``` variable?
Thanks in advance.
| https://github.com/huggingface/chat-ui/issues/1484 | closed | [
"support"
] | 2024-09-24T13:33:16Z | 2024-09-30T08:43:06Z | 3 | avirgos |
huggingface/diffusers | 9,508 | AnimateDiff SparseCtrl RGB does not work as expected | Relevant comments are [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255416318) and [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255478105).
AnimateDiff SparseCtrl RGB does not behave like other implementations and cannot replicate their outputs. This makes me believe that there is something incorrect with our SparseControlNet or MotionAdapter implementation.
When comparing the results of the [original](https://github.com/guoyww/AnimateDiff)/[Comfy](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) implementation to Diffusers implementation, one can notice that if an image is used with an unrelated prompt, the Diffusers implementation ignores the image and just follows the prompt whereas the other implementations try to incorporate both.
Since the original and Comfy implementations produce this behaviour consistently, this seems more like a problem with Diffusers implementation. However, I've not been able to spot differences in implementation just by comparing the code visually. I also tried matching outputs layerwise and it seemed to be alright (although I didn't investigate this as deeply as I should have due to other priorities).
If someone from the community actively following/using the AnimateDiff implementations can help determine the cause of this bug, it would be really awesome and helpful. | https://github.com/huggingface/diffusers/issues/9508 | open | [
"bug",
"help wanted",
"stale",
"contributions-welcome",
"advanced"
] | 2024-09-23T21:42:54Z | 2025-08-10T16:47:50Z | 9 | a-r-r-o-w |
huggingface/lerobot | 451 | Inquiry about Implementation of "Aloha Unleashed" | First and foremost, I would like to extend my heartfelt gratitude for your incredible work on the LeRobot project.
I recently came across the paper "Aloha Unleashed" published by the Aloha team a few months ago, and I am curious to know if there are any plans to implement the methodologies and findings from this paper in the LeRobot project.
Thank you once again for your hard work and for providing such a fantastic tool to the community. I look forward to your response.
paper link: https://aloha-unleashed.github.io/ | https://github.com/huggingface/lerobot/issues/451 | open | [
"question",
"robots"
] | 2024-09-23T09:14:56Z | 2025-08-20T19:42:37Z | null | lightfate |
huggingface/text-generation-inference | 2,541 | How to serve local models with python package (not docker) | ### System Info
`pip install text-generation` with version '0.6.0'
I need to use the Python package, not Docker.
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
```
from text_generation import Client
# Initialize the client
client = Client("/path/to/model/locally")
# Generate text
response = client.generate("Your input text here")
```
error:
```
MissingSchema: Invalid URL '/path/to/model/locally': No scheme supplied. Perhaps you meant [/path/to/model/locally](/path/to/model/locally?
```
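For comparison, `Client` seems to expect the URL of an already-running TGI server rather than a filesystem path — something like this sketch (the URL below is a placeholder, assuming a server is listening locally):
```python
from text_generation import Client

# Sketch only: the client speaks HTTP to a running TGI server; it does not
# load model weights from a local path itself.
client = Client("http://127.0.0.1:8080")  # placeholder URL
response = client.generate("Why is the sky blue?", max_new_tokens=32)
print(response.generated_text)
```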
I also tried this with some models from the Hugging Face Hub as well as local models, and it doesn't work!
```
from text_generation import InferenceAPIClient
client = InferenceAPIClient("NousResearch/Meta-Llama-3.1-8B-Instruct")
text = client.generate("Why is the sky blue?").generated_text
print(text)
# ' Rayleigh scattering'
# Token Streaming
text = ""
for response in client.generate_stream("Why is the sky blue?"):
if not response.token.special:
text += response.token.text
print(text)
```
error:
```
NotSupportedError: Model `NousResearch/Meta-Llama-3.1-8B-Instruct` is not available for inference with this client.
Use `huggingface_hub.inference_api.InferenceApi` instead.
```
### Expected behavior
- I can load any model (local or from the HF Hub)
| https://github.com/huggingface/text-generation-inference/issues/2541 | open | [] | 2024-09-20T21:10:09Z | 2024-09-26T06:55:50Z | null | hahmad2008 |
huggingface/competitions | 41 | how to debug a script submission | Is there a way to see the logs or errors of a script-based submission? | https://github.com/huggingface/competitions/issues/41 | closed | [] | 2024-09-20T18:04:44Z | 2024-09-30T16:08:42Z | null | ktrapeznikov |
huggingface/diffusers | 9,485 | Can we allow making everything on gpu/cuda for scheduler? | **What API design would you like to have changed or added to the library? Why?**
Is it possible to allow setting every tensor attribute of the scheduler to a CUDA device?
In https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py
It looks like attributes such as `scheduler.alphas_cumprod` are tensors on the CPU, but scheduler.set_timesteps() allows setting `scheduler.timesteps` to a GPU/CUDA device. Isn't this causing a device mismatch when indexing scheduler.alphas_cumprod with scheduler.timesteps? Below is the code snippet where the pipeline indexes a CPU tensor (alphas_cumprod) with a GPU tensor (timestep):

I simply added the following lines to print the type and device of timestep and self.alphas_cumprod at the beginning of `scheduler.step()`:
```
print("Printing scheduler.step() timestep")
print(type(timestep))
print(isinstance(timestep, torch.Tensor))
print(timestep.device)
print("Printing scheduler.step() self.alphas_cumprod")
print(type(self.alphas_cumprod))
print(isinstance(self.alphas_cumprod, torch.Tensor))
print(self.alphas_cumprod.device)
```
Output when running text-to-image:
```
Printing scheduler.step() timestep
<class 'torch.Tensor'>
True
cuda:0
Printing scheduler.step() self.alphas_cumprod
<class 'torch.Tensor'>
True
cpu
```
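To make the request concrete, this is the manual workaround we apply today and would like the scheduler to handle for us (a sketch assuming a CUDA device is available; the stock `LCMScheduler` stands in for our modified scheduler):
```python
from diffusers import LCMScheduler

scheduler = LCMScheduler()
scheduler.set_timesteps(num_inference_steps=4, device="cuda")

# Move the lookup table next to the timesteps so indexing stays on-device.
scheduler.alphas_cumprod = scheduler.alphas_cumprod.to("cuda")

t = scheduler.timesteps[0]                    # CUDA tensor
alpha_prod_t = scheduler.alphas_cumprod[t]    # pure GPU indexing, no CPU round-trip
print(alpha_prod_t.device)
```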
**What use case would this enable or better enable? Can you give us a code example?**
We are using a modified LCMScheduler (99% the same as the original) for video generation; it generates frames repeatedly in a loop. Most of the time this step doesn't cause a performance issue, but we did see intermittent high CPU usage and latency for `alpha_prod_t = self.alphas_cumprod[timestep]`, and the torch.profiler tracing output also shows high latency for this specific step. We are wondering if this is the performance bottleneck.

| https://github.com/huggingface/diffusers/issues/9485 | open | [
"stale",
"scheduler",
"performance"
] | 2024-09-20T12:38:16Z | 2024-12-17T15:04:46Z | 14 | xiang9156 |
huggingface/optimum | 2,032 | ONNX support for decision transformers | ### Feature request
I am training an offline RL policy using a decision transformer and trying to convert it to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "sequence-classification"
# load config
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = model_onnx_config(model.config)
# export
onnx_inputs, onnx_outputs = transformers.onnx.export(
#preprocessor=tokenizer,
model=model,
config=onnx_config,
opset=13,
output=Path("trained_models/DT-model.onnx")
)
```
I get the error below:
```
KeyError: "decision-transformer is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet-v1', 'mobilenet-v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'poolformer', 'rembert', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support decision-transformer please propose a PR or open up an issue."
```
### Motivation
I want to use the trained models in Godot-RL-Agents. Currently, agents are trained using PPO or imitation learning, and both support the ONNX format. Supporting decision transformers could hugely help with training models that navigate complex scenarios.
### Your contribution
I would be interested in raising a PR, but at this time I have no idea how to go about it. With a little bit of guidance, I can try. | https://github.com/huggingface/optimum/issues/2032 | closed | [
"onnx"
] | 2024-09-20T08:45:28Z | 2024-11-25T13:00:02Z | 1 | ra9hur |
huggingface/setfit | 558 | How to improve the accuracy while classifying short text with less context | Hi, my use case is to classify job titles into functional areas. I fine-tuned `all-mpnet-base-v2` with the help of SetFit by providing 10+ examples for each class (functional area).
I got `82%` accuracy when running the evaluation on my test set. I observed that some simple and straightforward job titles are classified into the wrong label with a score around `0.6`.
For example:
```
Query: SDET
Predicted Label: Big Data / DWH / ETL
Confidence Scores:
Label: Accounting / Finance, Confidence: 0.0111
Label: Backend Development, Confidence: 0.0140
Label: Big Data / DWH / ETL, Confidence: 0.6092
```
Here **SDET** should have been labelled as `QA / SDET`, but it is classified as `Big Data / DWH / ETL` with a `0.62` score. The few-shot examples used for the two classes don't have anything in common that could confuse the model, except one example titled `Data Quality Engineer`, which is under `Big Data / DWH / ETL`.
**Few shot examples** (added only for 2 here)
```py
{ "QA / SDET": [
"Quality Assurance Engineer",
"Software Development Engineer in Test (SDET)",
"QA Automation Engineer",
"Test Engineer",
"QA Analyst",
"Manual Tester",
"Automation Tester",
"Performance Test Engineer",
"Security Test Engineer",
"Mobile QA Engineer",
"API Tester",
"Load & Stress Test Engineer",
"Senior QA Engineer",
"Test Automation Architect",
"QA Lead",
"QA Manager",
"End-to-End Tester",
"Game QA Tester",
"UI/UX Tester",
"Integration Test Engineer",
"Quality Control Engineer",
"Test Data Engineer",
"DevOps QA Engineer",
"Continuous Integration (CI) Tester",
"Software Test Consultant"
],
"Big Data / DWH / ETL": [
"Big Data Engineer",
"Data Warehouse Developer",
"ETL Developer",
"Hadoop Developer",
"Spark Developer",
"Data Engineer",
"Data Integration Specialist",
"Data Pipeline Engineer",
"Data Architect",
"Database Administrator",
"ETL Architect",
"Data Lake Engineer",
"Informatica Developer",
"DataOps Engineer",
"BI Developer",
"Data Migration Specialist",
"Data Warehouse Architect",
"ETL Tester",
"Big Data Platform Engineer",
"Apache Kafka Engineer",
"Snowflake Developer",
"Data Quality Engineer",
"Data Ingestion Engineer",
"Big Data Consultant",
"ETL Manager"
]
}
```
**TrainingArgs**
```py
args = TrainingArguments(
batch_size=16,
num_epochs=1,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
)
```
**Here is the complete set of functional areas.**
```py
functional_areas = [
"Accounting / Finance",
"Backend Development",
"Big Data / DWH / ETL",
"Brand Management",
"Content Writing",
"Customer Service",
"Data Analysis / Business Intelligence",
"Data Science / Machine Learning",
"Database Admin / Development",
"DevOps / Cloud",
"Embedded / Kernel Development",
"Event Management",
"Frontend Development",
"Full-Stack Development",
"Functional / Technical Consulting",
"General Management / Strategy",
"IT Management / IT Support",
"IT Security",
"Mobile Development",
"Network Administration",
"Online Marketing",
"Operations Management",
"PR / Communications",
"QA / SDET",
"SEO / SEM",
"Sales / Business Development"
]
```
My guess is that the accuracy is low because of the short text (just a job title). Please suggest a few things I can try to improve the accuracy of the model. | https://github.com/huggingface/setfit/issues/558 | open | [] | 2024-09-20T06:09:07Z | 2024-11-11T11:23:31Z | null | 29swastik |
huggingface/safetensors | 527 | [Question] Comparison with the zarr format? | Hi,
I know that safetensors are widely used nowadays in HF, and the comparisons made in this repo's README file make a lot of sense.
However, I am surprised to see that there is no comparison with zarr, which is probably the most widely used format for storing tensors in a universal, compressed and scalable way.
Is there any particular reason why safetensors was created instead of just using zarr, which has been around for longer (and has nice benefits such as good performance in object storage reads and writes)?
Thank you! | https://github.com/huggingface/safetensors/issues/527 | open | [] | 2024-09-19T13:32:17Z | 2025-01-13T17:56:46Z | 13 | julioasotodv |
huggingface/transformers | 33,584 | How to fine-tune with QLoRA using a custom Trainer | Full-model fine-tuning code is given below. How can I modify the code to train a QLoRA-based model?
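For context, the kind of QLoRA setup I have in mind looks roughly like the sketch below (the model id and target modules are placeholders, and it assumes bitsandbytes and peft are installed); my question is how to plug this into the custom trainer that follows.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantized base model (QLoRA-style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                  # placeholder base model
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on top of the frozen, quantized weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adjust to the base model's attention layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...then pass `model` to the Trainer below instead of the full-precision model.
```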
```python
import sys
import os
current_directory = os.path.dirname(os.path.abspath(__file__))
sys.path.append(current_directory)
from src.custom_dataset import RawFileDataset
import copy
import random
from dataclasses import dataclass, field
from typing import Optional, Dict, Sequence
import os
import torch
import torch.distributed
import transformers
from transformers import Trainer
IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
DEFAULT_EOS_TOKEN = "</s>"
DEFAULT_BOS_TOKEN = "</s>"
DEFAULT_UNK_TOKEN = "</s>"
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
train_file: str = field(default=None, metadata={"help": "train file name"})
val_file: str = field(default=None, metadata={"help": "val file name"})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):
"""Collects the state dict and dump to disk."""
state_dict = trainer.model.state_dict()
if trainer.args.should_save:
cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}
del state_dict
trainer._save(output_dir, state_dict=cpu_state_dict) # noqa
def smart_tokenizer_and_embedding_resize(
special_tokens_dict: Dict,
tokenizer: transformers.PreTrainedTokenizer,
model: transformers.PreTrainedModel,
):
"""Resize tokenizer and embedding.
Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
"""
num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
if num_new_tokens > 0:
input_embeddings = model.get_input_embeddings().weight.data
output_embeddings = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings[-num_new_tokens:] = input_embeddings_avg
output_embeddings[-num_new_tokens:] = output_embeddings_avg
def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict:
"""Tokenize a list of strings."""
tokenized_list = [
tokenizer(
text,
return_tensors="pt",
padding="longest",
max_length=tokenizer.model_max_length,
truncation=True,
)
for text in strings
]
input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list]
input_ids_lens = labels_lens = [
tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list
]
return dict(
input_ids=input_ids,
labels=labels,
input_ids_lens=input_ids_lens,
labels_lens=labels_lens,
)
def preprocess(
sources: Sequence[str],
targets: Sequence[str],
tokenizer: transformers.PreTrainedTokenizer,
) -> Dict:
"""Preprocess the data by tokenizing."""
examples = [s + t for s, t in zip(sources, targets)]
examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)]
input_ids = examples_tokenized["input_ids"]
labels = copy.deepcopy(input_ids)
for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]):
label[:source_len] = IGNORE_INDEX
return dict(input_ids=input_ids, labels=labels)
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
### one can customize here, since we set the T for joint loss as 2
batch_input_ids1, batch_input_ids2 = [], []
batch_attention_mask1, batch_attention_mask2 = [], []
batch_labels1, batch_labels2 = [], []
for instance in instances:
instance1, instance2 = instance["instance_1"], instance["instance_2"]
batch_input_ids1.append(instance1["input_ids"])
batch_input_ids2.append(instance2["input_ids"])
batch_attention_mask1.append(instance1["attention_mask"])
batch_attention_mask2.append(instan | https://github.com/huggingface/transformers/issues/33584 | closed | [
"trainer",
"Quantization"
] | 2024-09-19T09:40:00Z | 2024-10-28T08:05:06Z | null | ankitprezent |
huggingface/diffusers | 9,470 | Prompt scheduling in Diffusers like A1111 | Hi everyone, I have a question about how to implement the [prompt scheduling feature](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) from A1111 with the diffusers library.
**Example prompt:** Official portrait of a smiling world war ii general, `[male:female:0.99]`, cheerful, happy, detailed face, 20th century, highly detailed, cinematic lighting, digital art painting by Greg Rutkowski.

| https://github.com/huggingface/diffusers/issues/9470 | closed | [] | 2024-09-19T09:07:30Z | 2024-10-19T17:22:23Z | 5 | linhbeige |
huggingface/chat-ui | 1,476 | Update docs to explain how to use `tokenizer` field for chat prompt formats | ## Bug description
In README.md, it's stated that the prompts used in production for HuggingChat can be found in PROMPTS.md.
However, PROMPTS.md has not been updated for 7 months and there are several prompts missing for newer models.
| https://github.com/huggingface/chat-ui/issues/1476 | open | [
"bug",
"documentation"
] | 2024-09-18T22:49:53Z | 2024-09-20T18:05:05Z | null | horsten |
huggingface/transformers.js | 935 | Is converting a Gemma 2B quantized compatible with transformers.js/onnx? | ### Question
I'm new to development and wanted to know whether converting a Gemma 2B model using the Optimum converter would work here? | https://github.com/huggingface/transformers.js/issues/935 | open | [
"question"
] | 2024-09-18T15:57:55Z | 2024-09-24T20:26:53Z | null | iamhenry |
huggingface/dataset-viewer | 3,063 | Simplify test code where a dataset is set as gated | [huggingface_hub@0.25.0](https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0) provides an API to set a repository as gated.
We had included a custom version of `update_repo_settings` because it lacked a `gated` parameter. Now we can switch back to the `huggingface_hub` method
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/jobs/cache_maintenance/tests/utils.py#L41
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/admin/tests/fixtures/hub.py#L24
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/worker/tests/fixtures/hub.py#L35 | https://github.com/huggingface/dataset-viewer/issues/3063 | closed | [
"good first issue",
"tests",
"refactoring / architecture",
"dependencies"
] | 2024-09-18T09:08:14Z | 2025-07-17T15:00:40Z | null | severo |
huggingface/transformers.js | 934 | Repeating tokens in TextStreamer | ### Question
```
import {
AutoTokenizer,
AutoModelForCausalLM,
TextStreamer,
InterruptableStoppingCriteria,
} from "@huggingface/transformers";
class TextGenerationPipeline {
static model = null;
static tokenizer = null;
static streamer = null;
static async getInstance(
progress_callback = null,
model_id = "onnx-community/Phi-3.5-mini-instruct-onnx-web",
) {
this.tokenizer = AutoTokenizer.from_pretrained(model_id, {
progress_callback,
});
this.model = AutoModelForCausalLM.from_pretrained(model_id, {
// dtype: "q4",
dtype: "q4f16",
device: "webgpu",
use_external_data_format: true,
progress_callback,
});
return Promise.all([this.tokenizer, this.model]);
}
}
const stopping_criteria = new InterruptableStoppingCriteria();
let past_key_values_cache = null;
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
if (request.action === "initializeLlmModel") {
console.log("setting up llm");
const initialize = async () => {
const [tokenizer, model] = await TextGenerationPipeline.getInstance(
(x) => {
console.log(x);
},
request.model_id,
);
const inputs = tokenizer("a");
const generatedOutput = await model.generate({
...inputs,
max_new_tokens: 1,
});
console.log(generatedOutput);
sendResponse({ status: "success" });
};
initialize();
return true;
}
if (request.action === "generateText") {
console.log("generating text");
async function generateText() {
const [tokenizer, model] = await TextGenerationPipeline.getInstance();
const text_callback_function = (output) => {
console.log(output);
if (output) {
chrome.runtime.sendMessage({
action: "chatMessageChunk",
chunk: output,
});
}
};
const streamer = new TextStreamer(tokenizer, {
skip_prompt: true,
skip_special_tokens: true,
callback_function: text_callback_function,
});
const inputs = tokenizer.apply_chat_template(request.messages, {
add_generation_prompt: true,
return_dict: true,
});
const { past_key_values, sequences } = await model.generate({
...inputs,
past_key_values: past_key_values_cache,
// Sampling
// do_sample: true,
// top_k: 3,
// temperature: 0.2,
max_new_tokens: 1024,
stopping_criteria,
return_dict_in_generate: true,
streamer,
});
past_key_values_cache = past_key_values;
const decoded = tokenizer.batch_decode(sequences, {
skip_special_tokens: false,
});
console.log(decoded);
sendResponse({ generatedOutput: decoded, status: "success" });
}
generateText();
return true;
}
});
```
In the `text_callback_function`, the same token is sent multiple times. I am handling it on the frontend for the time being, but I was wondering what causes it. What am I doing wrong here?
Thank you so much for the help in advance! | https://github.com/huggingface/transformers.js/issues/934 | closed | [
"question"
] | 2024-09-18T02:53:36Z | 2025-10-13T04:50:11Z | null | chandeldivyam |
huggingface/transformers.js | 933 | Uncaught (in promise) TypeError: r.logits is not iterable | ### Question
Hey guys,
I have been trying to train a model for text classification and then convert it to an ONNX file for use in transformers.js, following this video:
https://www.youtube.com/watch?v=W_lUGPMW_Eg
I keep getting the error Uncaught (in promise) TypeError: r.logits is not iterable
Any ideas on where I might be going wrong or if something has changed since this was released?
Here is my basic code; I'm using Python to host the files locally:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TinyBERT Model in Vanilla JS</title>
</head>
<body>
<h1>TinyBERT Model Inference</h1>
<p>Enter text for classification:</p>
<input type="text" id="inputText" placeholder="Enter your text here" size="50"/>
<button id="runModel">Run Model</button>
<p><strong>Prediction:</strong> <span id="prediction"></span></p>
<script type="module">
import { pipeline, env } from "https://cdn.jsdelivr.net/npm/@xenova/transformers";
document.getElementById('runModel').addEventListener('click', async function () {
const inputText = document.getElementById('inputText').value;
// Load the TinyBERT model for sequence classification from local files
const classifier = await pipeline('text-classification', './finalModel/');
// Run the model to get the prediction
const result = await classifier(inputText);
// Display the result
document.getElementById('prediction').innerText = JSON.stringify(result);
});
</script>
</body>
</html>
``` | https://github.com/huggingface/transformers.js/issues/933 | open | [
"question"
] | 2024-09-16T20:26:02Z | 2024-09-17T19:35:26Z | null | Joseff-Evans |
huggingface/chat-ui | 1,472 | Mistral api configuration without Cloudflare | I'd like to set up a local deployment using **only the Mistral API**: https://docs.mistral.ai/api.
Can I use ChatUI without an HF deployment and a Cloudflare account?
I leave the .env unchanged and overwrite the .env.local with the following code:
```yml
AGENT_ID=<my_agent_id_from_mistral>
MISTRAL_API_KEY=<mytoken>
MODELS='[
{
"name": "mistral-large",
"displayName": "mistralai",
"description": "Mistral standard",
"websiteUrl": "https://docs.mistral.ai/",
"preprompt": "",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"top_k": 5,
"stream": true,
"agent_id": "{AGENT_ID}",
"tool_choice": "auto",
"max_new_tokens": 4096
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://api.mistral.ai/v1",
"defaultHeaders": {
"Authorization": "Bearer {MISTRAL_API_KEY}"
}
}
]
},
{
"name": "mistral-embed",
"displayName": "Mistral-embedbedings",
"description": "Mistral embedding model.",
"chunkCharLength": 1024,
"endpoints": [
{
"type": "openai",
"baseURL": "https://api.mistral.ai/v1",
"defaultHeaders": {
"Authorization": "Bearer {MISTRAL_API_KEY}"
}
}
]
}
]'
MONGODB_URL=mongodb://localhost:27017/
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_COLOR=blue
PUBLIC_APP_NAME="Mistral Local"
```
Not quite sure, though, whether the agent_id is overwritten by the "name". | https://github.com/huggingface/chat-ui/issues/1472 | open | [
"support"
] | 2024-09-16T18:51:09Z | 2024-09-17T08:43:40Z | 0 | JonasMedu |
huggingface/transformers.js | 932 | Best small model for text generation? | ### Question
I'm looking to build an AI journaling app that helps you reflect on your journal entries.
I'm looking for a model (like GPT or Claude) that will take the selected text and provide insights based on a prompt I provide.
In this case the prompt will provide suggestions based on psychology techniques like CBT and ACT to help you with your life.
Any ideas on which small model would be able to accomplish this? I've tried GPT-2 and t5-small, and I couldn't get Phi-3 to work. | https://github.com/huggingface/transformers.js/issues/932 | open | [
"question"
] | 2024-09-16T18:06:23Z | 2024-09-26T08:06:35Z | null | iamhenry |
huggingface/distil-whisper | 149 | How can the openai-whisper package be used to load the model? | How can the openai-whisper package be used to load the model? | https://github.com/huggingface/distil-whisper/issues/149 | open | [] | 2024-09-15T15:08:46Z | 2024-09-15T15:08:46Z | null | lucasjinreal |
huggingface/competitions | 40 | How to modify the competition | Hi! I created a new competition using the [tool given here](https://huggingface.co/spaces/competitions/create). All good up till here.
Then the Space was running automatically. To modify the competition, I cloned the Space's repository locally with the command given in the UI:
```
git clone https://huggingface.co/spaces/cmdgentest/commandgen
```
When I inspected the contents, it had only two files - `Dockerfile` and `README.md`. This was surprising, as I expected the files mentioned [here](https://huggingface.co/docs/competitions/en/competition_repo).
However, I still created these files myself and pushed the changes to the spaces repo. Once the space was restarted and running, I still wasn't able to see the changes I made.
At this point I am confused about where exactly I should put files like `conf.json` in my case. | https://github.com/huggingface/competitions/issues/40 | closed | [
"stale"
] | 2024-09-15T13:45:26Z | 2024-10-08T15:06:28Z | null | dakshvar22 |
huggingface/speech-to-speech | 101 | I am really really curious about how to set up this project on a server to serve multiple users. I have been trying for a long time but haven't come up with a very good solution. | https://github.com/huggingface/speech-to-speech/issues/101 | open | [] | 2024-09-15T13:42:18Z | 2025-02-04T15:44:31Z | null | demoBBB | |
huggingface/transformers | 33,489 | passing past_key_values as a tuple is deprecated, but unclear how to resolve | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.44.2
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NA
- Using GPU in script?: yes
- GPU type: NVIDIA A40
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer, SFTConfig
from accelerate import Accelerator
from peft import LoraConfig
import math, os, random
from datetime import datetime
# Select rows to train on
initial_rows = 50000
annealing_rows = 10000
eval_rows = 10000 # Only 10000 rows for evaluation
batch_size = 8
ga = 4
learning_rate=1e-3
def setup_environment():
os.environ['WANDB_DISABLED'] = 'true'
return Accelerator()
def load_model_and_tokenizer():
model_name = "Trelis/80M-0.0090-cosmopedia"
model_kwargs = {
"torch_dtype": torch.bfloat16,
}
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-360M-Instruct")
model = AutoModelForCausalLM.from_pretrained(model_name, **model_kwargs)
return model, tokenizer
def load_and_preprocess_train_dataset(start_idx, num_rows):
dataset = load_dataset("TIGER-Lab/WebInstructSub", split="train",
streaming=True
)
dataset = dataset.skip(start_idx).take(num_rows)
def format_instruction(example):
return {
"messages": [
{"role": "user", "content": example["question"]},
{"role": "assistant", "content": example["answer"]}
]
}
formatted_dataset = dataset.map(format_instruction)
return formatted_dataset
def format_instruction_for_trainer(example):
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-360M-Instruct")
return tokenizer.apply_chat_template(
example["messages"],
truncation=True,
padding="max_length",
max_length=2048,
tokenize=False,
)
def load_and_preprocess_eval_dataset():
dataset = load_dataset("TIGER-Lab/WebInstructSub", split="train")
# Get the total number of rows in the dataset
total_rows = len(dataset)
# Generate a list of random indices
random_indices = random.sample(range(total_rows), eval_rows)
# Select the random rows
dataset = dataset.select(random_indices)
def format_instruction(example):
return {
"messages": [
{"role": "user", "content": example["question"]},
{"role": "assistant", "content": example["answer"]}
]
}
formatted_dataset = dataset.map(format_instruction, remove_columns=dataset.column_names)
return formatted_dataset
def main():
accelerator = setup_environment()
model, tokenizer = load_model_and_tokenizer()
print(model.device)
# Combined training dataset (streaming)
total_rows = initial_rows + annealing_rows
train_dataset = load_and_preprocess_train_dataset(0, total_rows)
# Evaluation dataset (non-streaming, last 1000 rows)
eval_dataset = load_and_preprocess_eval_dataset()
# Calculate steps
num_epochs = 1
total_steps = (total_rows * num_epochs) // (batch_size * ga)
initial_steps = (initial_rows * num_epochs) // (batch_size * ga)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
run_name = f"SFT-{total_rows}rows-lr{learning_rate}-{timestamp}"
training_args = SFTConfig(
output_dir=f"./Trelis_local/80M-0.015-cosmopedia-SFT-{run_name}",
run_name=run_name,
logging_dir=f"./logs/{run_name}",
eval_strategy="steps",
save_strategy="steps",
report_to="tensorboard",
num_train_epochs=num_epochs,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
warmup_steps=20,
logging_steps=int(total_steps * 0.1),
eval_steps=int(total_steps * 0.1),
save_steps=int(total_steps * 0.1),
learning_rate=learning_rate,
bf16=True,
max_steps=total_steps,
gra | https://github.com/huggingface/transformers/issues/33489 | closed | [
"bug"
] | 2024-09-14T13:58:18Z | 2025-11-29T04:50:43Z | null | RonanKMcGovern |
huggingface/lerobot | 436 | Image storage format | I am quite interested in using `LeRobotDataset` for large-scale training. I would like more context on the options for storing images so I am aware of the implications they might have:
- Did you by chance study whether the mp4 video compression has any negative effect on image quality in terms of model performance (or are there any studies you based your decision on)?
- I see that at the moment lerobot supports storing images either in `.mp4` or `.pt`, but not in the `arrow` or `parquet` formats that many other HF datasets use. Is there any specific reason you didn't add support for `arrow`/`parquet`, which also provide memory mapping? Any idea how PyTorch would compare to `arrow`/`parquet` when using datasets with hundreds of millions of examples?
| https://github.com/huggingface/lerobot/issues/436 | closed | [
"question",
"dataset",
"stale"
] | 2024-09-12T16:38:21Z | 2025-10-23T02:29:14Z | null | nikonikolov |
huggingface/lerobot | 435 | Open-X datasets | Thanks for the great work! I am interested in converting more of the open-x datasets to `LeRobotDataset`.
- I was wondering if there was any particular reason the entire open-x wasn't added already, e.g. some difficulties you encountered with some specific datasets?
- Do you have any tips on where I should be extra careful when converting from RLDS to `LeRobotDataset`, or is it generally as easy as calling the conversion script? | https://github.com/huggingface/lerobot/issues/435 | open | [
"enhancement",
"question",
"dataset"
] | 2024-09-12T16:29:40Z | 2025-10-08T08:25:55Z | null | nikonikolov |
huggingface/lerobot | 432 | some questions about real world env | ### System Info
```Shell
all software cfg match author's project
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [X] My own task or dataset (give details below)
### Reproduction
I am planning to control my own robot left arm. I've almost figured out all the parts of the lerobot dataset, and now I want to make my own dataset modeled on aloha_sim_transfer_cube_human rather than the "Koch ALOHA teleop hardware system".
my questions are:
1) Must I keep such a high fps (like 50) when collecting data from the camera and arm actions?
2) Actions come from human control of the arm, and the state comes from a read operation, but how should I set the time gap between action and state?
### Expected behavior
answers from anyone | https://github.com/huggingface/lerobot/issues/432 | closed | [
"question"
] | 2024-09-12T09:53:23Z | 2025-10-08T08:27:48Z | null | NNsauce |
huggingface/chat-ui | 1,463 | Some bugs | ## Bug description
There are several issues I have with the site, such as slow performance on both mobile and PC. When trying to select specific parts of the text, it jumps back to the original message. Sometimes errors occur that force me to refresh the conversation. When I switch conversations, I have to switch all of my messages to the latest ones.
I feel it's not my internet connection that's causing the issue but something on the website.
## Steps to reproduce
The performance is quite mixed, but on mobile it is practically unusable. (Samsung A40)
Try to select any text, and it will direct you to the first message.
The last one I don't know how to replicate, except by being unlucky.
### Specs
- **Windows 11**:
- **Librewolf 124.0.1-1**:
| https://github.com/huggingface/chat-ui/issues/1463 | open | [
"bug"
] | 2024-09-12T08:13:35Z | 2024-09-12T09:03:58Z | 0 | Ruyeex |
huggingface/transformers.js | 929 | what is pipeline? | https://github.com/huggingface/transformers.js/issues/929 | closed | [
"question"
] | 2024-09-12T05:09:05Z | 2024-10-04T10:24:42Z | null | chakravarthi-vatala | |
huggingface/diffusers | 9,417 | Suggestion for speeding up `index_for_timestep` by removing sequential `nonzero()` calls in samplers | **Is your feature request related to a problem? Please describe.**
First off, thanks for the great codebase and providing so many resources! I just wanted to provide some insight into an improvement I made for myself, in case you'd like to include it for all samplers. I'm using the `FlowMatchEulerDiscreteScheduler` and after profiling, I've noticed that it's unexpectedly slowing down my training speeds. I'll describe the issue and proposed solution here rather than making a PR, since this would touch a lot of code and perhaps someone on the diffusers team would like to implement it.
**Describe the solution you'd like.**
This line in particular is very slow because it is a for loop, `step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timestep]`, and `self.index_for_timestep()` calls `nonzero()`, which is slow.
https://github.com/huggingface/diffusers/blob/b9e2f886cd6e9182f1bf1bf7421c6363956f94c5/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L149
**Describe alternatives you've considered.**
I've changed the code as follows:
```python
# huggingface code
def index_for_timestep(self, timestep, schedule_timesteps=None):
if schedule_timesteps is None:
schedule_timesteps = self.timesteps
indices = (schedule_timesteps == timestep).nonzero()
# The sigma index that is taken for the **very** first `step`
# is always the second index (or the last index if there is only 1)
# This way we can ensure we don't accidentally skip a sigma in
# case we start in the middle of the denoising schedule (e.g. for image-to-image)
pos = 1 if len(indices) > 1 else 0
return indices[pos].item()
```
changed to =>
```python
# my code
def index_for_timestep(self, timestep, schedule_timesteps=None):
if schedule_timesteps is None:
schedule_timesteps = self.timesteps
num_steps = len(schedule_timesteps)
start = schedule_timesteps[0].item()
end = schedule_timesteps[-1].item()
indices = torch.round(((timestep - start) / (end - start)) * (num_steps - 1)).long()
return indices
```
and
```python
# huggingface code
# self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index
if self.begin_index is None:
step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timestep]
```
changed to =>
```python
# my code
# self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index
if self.begin_index is None:
step_indices = self.index_for_timestep(timestep, schedule_timesteps)
```
**Additional context.**
Just wanted to bring this modification to your attention since it could be a training speedup for folks, especially when someone has a large batch size > 1 and this for loop runs with nonzero search operations. Some other small changes might be necessary to ensure compatibility of the function changes, but I suspect it could help everyone. Thanks for the consideration!
| https://github.com/huggingface/diffusers/issues/9417 | open | [
"help wanted",
"wip",
"contributions-welcome",
"performance"
] | 2024-09-11T14:54:37Z | 2025-02-08T10:26:47Z | 11 | ethanweber |
huggingface/cosmopedia | 29 | What is the best way to cite the work? | This is absolutely fantastic work. Thank you very much for making it public.
What is the best way to cite this dataset/project? Is there any paper I can cite or should I cite the blog-post? | https://github.com/huggingface/cosmopedia/issues/29 | closed | [] | 2024-09-11T14:34:54Z | 2024-09-11T14:36:15Z | null | vijetadeshpande |
huggingface/diffusers | 9,416 | [Schedulers] Add SGMUniform | Thanks to @rollingcookies, we can see in this [issue](https://github.com/huggingface/diffusers/issues/9397) that this scheduler works great with the Hyper and probably also the Lightning LoRAs/UNets.
It'd be fantastic if someone could contribute this scheduler to diffusers.
Please let me know if someone is willing to do this. | https://github.com/huggingface/diffusers/issues/9416 | closed | [
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-09-11T13:59:27Z | 2024-09-23T23:39:56Z | 12 | asomoza |
huggingface/transformers | 33,416 | The examples in the examples directory are mostly for fine-tuning pre-trained models; how can I train from scratch? | ### Model description
no
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/33416 | open | [
"New model"
] | 2024-09-11T03:32:53Z | 2024-10-03T23:28:42Z | null | zc-Chao |
huggingface/diffusers | 9,407 | callback / cannot yield intermediate images on the fly during inference | Hi,
Apologies in advance if this has been asked already, or if I'm just misusing the diffusers API.
Using `diffusers==0.30.2`
**What API design would you like to have changed or added to the library? Why?**
I will illustrate straight away the general issue with my use case: I need to call a (FLUX) diffusers pipeline from some endpoint of mine, passing a callback that decodes latents and saves on disk intermediate images obtained from them, at the end of each step. So far, so good: I do manage to get the intermediate images saved on disk. I do this using the pipeline argument `callback_on_step_end`
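For concreteness, the callback I'm describing is roughly of this shape (a minimal sketch assuming the standard `callback_on_step_end` contract — it receives the pipeline, the step index, the timestep and a dict of tensors, and must return that dict; the actual latent decoding is omitted):
```python
import torch

def save_intermediate(pipe, step_index, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    # Decode latents to an image and write it to disk here; for the sketch
    # we simply dump the raw latents.
    torch.save(latents.detach().cpu(), f"step_{step_index:03d}.pt")
    return callback_kwargs

# images = pipe(
#     prompt,
#     callback_on_step_end=save_intermediate,
#     callback_on_step_end_tensor_inputs=["latents"],
# ).images
```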
Now, I'd like to _**yield**_ (in the pythonic meaning) these intermediate images on the fly, as soon as they're available, ie at the end of each inference step. I need to do so from my endpoint. That's where my problem is.
I could not make this idea work with the diffusers callback mechanism.
I mean, I did manage that by subclassing the pipeline, copy-pasting the dunder call method code and overriding it, but this is not maintainable, especially since the FLUX code evolves rapidly nowadays.
Also, note that currently diffusers assigns the result of the call to the callback to a variable and expects it to implement the `.pop` method, which might add constraints (diffusers typically expects a kwarg dict, see [here](https://github.com/huggingface/diffusers/blob/v0.30.2/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L1026)).
Another approach I thought of is to monitor the disk contents in a parallel process during the call to the pipeline.
But is there an easier way?
**What use case would this enable or better enable? Can you give us a code example?**
This allows to manipulate the objects produced by the callback live, instead of having to wait for the whole reverse diffusion to finish.
Thank you
cc @sayakpaul @yiyixuxu
also tagging @asomoza since I saw he is the contributor to the official callback interface
| https://github.com/huggingface/diffusers/issues/9407 | closed | [] | 2024-09-10T16:32:04Z | 2024-09-25T12:28:20Z | 8 | Clement-Lelievre |
huggingface/transformers.js | 928 | The inference speed on the mobile end is a bit slow | ### Question
On a mobile device that does not support WebGPU, how can we improve the inference speed of the model? I have tried a Web Worker, but the results were not satisfactory. | https://github.com/huggingface/transformers.js/issues/928 | open | [
"question"
] | 2024-09-10T09:14:16Z | 2024-09-11T08:46:33Z | null | Gratifyyy |
huggingface/transformers.js | 927 | Error with Using require for ES Modules in @xenova/transformers Package | ### Question
I'm trying to use require to import the Pipeline class from the @xenova/transformers package, but I encounter the following error:
const { Pipeline } = require('@xenova/transformers');
^
Error [ERR_REQUIRE_ESM]: require() of ES Module D:\Z-charity\dating_app_backend\node_modules@xenova\transformers\src\transformers.js from D:\Z-charity\dating_app_backend\controllers\authController.js not supported.
Instead change the require of transformers.js in D:\Z-charity\dating_app_backend\controllers\authController.js to a dynamic import() which is available in all CommonJS modules.
at Object. (D:\Z-charity\dating_app_backend\controllers\authController.js:10:22) {
code: 'ERR_REQUIRE_ESM'
Issue with Dynamic Import
const getPipeline = async () => {
const { Pipeline } = await import('@xenova/transformers');
return new Pipeline('text-classification', 'xenova/bert-base-uncased');
};
{
"message": "Server error",
"error": "Must implement _call method in subclass"
}
| https://github.com/huggingface/transformers.js/issues/927 | closed | [
"question"
] | 2024-09-10T06:02:53Z | 2024-12-08T19:17:31Z | null | qamarali205 |
huggingface/transformers.js | 925 | V3 - WebGPU Whisper in a Chrome Extension | ### Question
Can [webGPU accelerated whisper](https://huggingface.co/spaces/Xenova/whisper-webgpu) run in a chrome extension?
I checked the space and found the dependency `"@xenova/transformers": "github:xenova/transformers.js#v3"` which I imported in a chrome extension. When I tried to import it, it didn't work.
```
Module not found: Error: Can't resolve '@xenova/transformers' in 'D:\projects\mosaic8\browser-extension\src\utils'
resolve '@xenova/transformers' in 'D:\projects\mosaic8\browser-extension\src\utils'
Parsed request is a module
using description file: D:\projects\mosaic8\browser-extension\package.json (relative path: ./src/utils)
Field 'browser' doesn't contain a valid alias configuration
resolve as module
D:\projects\mosaic8\browser-extension\src\utils\node_modules doesn't exist or is not a directory
D:\projects\mosaic8\browser-extension\src\node_modules doesn't exist or is not a directory
D:\projects\mosaic8\browser-extension\node_modules doesn't exist or is not a directory
looking for modules in D:\projects\mosaic8\node_modules
single file module
using description file: D:\projects\mosaic8\package.json (relative path: ./node_modules/@xenova/transformers)
no extension
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers is not a file
.ts
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.ts doesn't exist
.tsx
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.tsx doesn't exist
.js
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.js doesn't exist
.jsx
Field 'browser' doesn't contain a valid alias configuration
D:\projects\mosaic8\node_modules\@xenova\transformers.jsx doesn't exist
existing directory D:\projects\mosaic8\node_modules\@xenova\transformers
using description file: D:\projects\mosaic8\node_modules\@xenova\transformers\package.json (relative path: .)
using exports field: ./dist/transformers.js
using description file: D:\projects\mosaic8\node_modules\@xenova\transformers\package.json (relative path: ./dist/transformers.js)
no extension
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js doesn't exist
.ts
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.ts doesn't exist
.tsx
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.tsx doesn't exist
.js
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.js doesn't exist
.jsx
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js.jsx doesn't exist
as directory
D:\projects\mosaic8\node_modules\@xenova\transformers\dist\transformers.js doesn't exist
```
I might be doing something wrong that I'm not aware of. What could the issue be here?
What I can understand is that it is trying to look for a ts/tsx/js/jsx file (as specified in `webpack.config.js`) and is unable to find it. | https://github.com/huggingface/transformers.js/issues/925 | open | [
"question"
] | 2024-09-10T02:52:41Z | 2025-01-18T16:03:26Z | null | chandeldivyam |
huggingface/diffusers | 9,402 | [Flux ControlNet] Add img2img and inpaint pipelines | We recently added img2img and inpainting pipelines for Flux thanks to @Gothos contribution.
We also have controlnet support for Flux thanks to @wangqixun.
It'd be nice to have controlnet versions of these pipelines since there have been requests for them.
Basically, we need to create two new pipelines that add the controlnet support from this [pipeline](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py) to the corresponding pipelines:
- [X] [Image to image](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_img2img.py)
- [X] [Inpaint](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_inpaint.py)
Related issue: #9158
Let me know if someone is interested in contributing this. | https://github.com/huggingface/diffusers/issues/9402 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-10T02:08:32Z | 2024-10-25T02:22:19Z | 11 | asomoza |
huggingface/transformers.js | 924 | Steps for suppressing strings | ### Question
What is the syntax for suppressing strings from showing up in the output text? Should I be doing that in my code, or is there a config option for it? I'm trying to remove everything that isn't a word:
```
const suppressedStrings = [
"[BLANK_AUDIO]",
"[CLEARS THROAT]",
"[Coughing]",
"[inaudible]",
"[MUSIC]",
"[MUSIC PLAYING]",
"[Pause]",
"(keyboard clicking)",
];
``` | https://github.com/huggingface/transformers.js/issues/924 | open | [
"question"
] | 2024-09-09T21:44:16Z | 2025-01-24T17:53:47Z | null | stinoga |
huggingface/diffusers | 9,395 | [Q] Possibly unused `self.final_alpha_cumprod` | Hello team, quick question to make sure I understand the behavior of the `step` function in LCM Scheduler.
https://github.com/huggingface/diffusers/blob/a7361dccdc581147620bbd74a6d295cd92daf616/src/diffusers/schedulers/scheduling_lcm.py#L534-L543
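For context, a minimal standalone sketch of the indexing logic in the quoted lines (paraphrased for illustration only; the schedule values are hypothetical, not taken from the scheduler):
```python
import torch

# Hypothetical values, purely for illustration.
timesteps = torch.tensor([799, 599, 399, 199])
alphas_cumprod = torch.linspace(0.99, 0.01, 800)
final_alpha_cumprod = torch.tensor(1.0)

step_index = len(timesteps) - 1              # last denoising step
timestep = timesteps[step_index]

prev_step_index = step_index + 1
if prev_step_index < len(timesteps):
    prev_timestep = timesteps[prev_step_index]   # an entry of `timesteps`, so never negative
else:
    prev_timestep = timestep                     # the current timestep, also never negative

# Mirrors the guard in the quoted code: with the logic above, the fallback never triggers.
alpha_prod_t_prev = alphas_cumprod[prev_timestep] if prev_timestep >= 0 else final_alpha_cumprod
print(prev_timestep, alpha_prod_t_prev)
```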
Here, it seems that the condition `prev_timestep >= 0` is always `True`, because `timestep` and `self.timesteps[prev_step_index]` cannot be negative. This would mean that `self.final_alpha_cumprod` is never used. Is there a way in which `prev_timestep` can be negative? | https://github.com/huggingface/diffusers/issues/9395 | open | [
"stale"
] | 2024-09-09T17:35:08Z | 2024-11-09T15:03:23Z | 7 | fdtomasi |
huggingface/chat-ui | 1,458 | Chat ui sends message prompt 404 | ```
MONGODB_URL='mongodb://localhost:27017'
PLAYWRIGHT_ADBLOCKER='false'
MODELS=`[
{
"name": "Local minicpm",
"tokenizer": "minicpm",
"preprompt": "",
"chatPromptTemplate": "<s>{{preprompt}}{{#each messages}}{{#ifUser}}<|user|>\n{{content}}<|end|>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}<|end|>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"stop": ["<|end|>", "<|endoftext|>", "<|assistant|>"],
"temperature": 0.7,
"max_new_tokens": 1024,
"truncate": 3071
},
"endpoints": [{
"type" : "openai",
"baseURL": "***/v1/chat/completions",
"defaultHeaders": {
"x-portkey-config": '{ "Authorization": "Bearer apikey" }'
}
}],
},
]`
```
Sending a prompt produces the following error:
```
ERROR (15839): 404 status code (no body)
err: {
"type": "NotFoundError",
"message": "404 status code (no body)",
"stack":
Error: 404 status code (no body)
at APIError.generate (file:///Users/user/Desktop/chat-ui/node_modules/openai/error.mjs:50:20)
at OpenAI.makeStatusError (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:268:25)
at OpenAI.makeRequest (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:311:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Users/user/Desktop/chat-ui/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generateFromDefaultEndpoint (/Users/user/Desktop/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:11:23)
at async generateTitle (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/title.ts:53:10)
at async Module.generateTitleForConversation (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/title.ts:16:19)
"status": 404,
"headers": {
"connection": "keep-alive",
"content-encoding": "gzip",
"content-type": "text/plain; charset=utf-8",
"date": "Mon, 09 Sep 2024 13:29:16 GMT",
"transfer-encoding": "chunked",
"vary": "Accept-Encoding"
}
}
[21:29:16.156] ERROR (15839): 404 status code (no body)
err: {
"type": "NotFoundError",
"message": "404 status code (no body)",
"stack":
Error: 404 status code (no body)
at APIError.generate (file:///Users/user/Desktop/chat-ui/node_modules/openai/error.mjs:50:20)
at OpenAI.makeStatusError (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:268:25)
at OpenAI.makeRequest (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:311:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Users/user/Desktop/chat-ui/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generate (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/generate.ts:8:30)
at async textGenerationWithoutTitle (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/index.ts:62:3)
"status": 404,
"headers": {
"connection": "keep-alive",
"content-encoding": "gzip",
"content-type": "text/plain; charset=utf-8",
"date": "Mon, 09 Sep 2024 13:29:16 GMT",
"transfer-encoding": "chunked",
"vary": "Accept-Encoding"
}
}
```
Accessing the same endpoint directly through Postman works fine. | https://github.com/huggingface/chat-ui/issues/1458 | open | [
"support"
] | 2024-09-09T13:31:56Z | 2024-09-13T09:32:24Z | 2 | nextdoorUncleLiu |
huggingface/chat-ui | 1,456 | could you provide an easy way to force output as json? | Currently I use
preprompt:'only output json. Do not output anything that is not json. Do not use markdown format. Must begin with {.'
But Llama is not smart enough to output JSON reliably. It always begins with "Here is the JSON answer" or with ``` (markdown format), giving me an invalid JSON string.
It seems the preprompt is not enough to force JSON format. Could you provide an easy way to output just JSON? Or maybe the method is in tools. | https://github.com/huggingface/chat-ui/issues/1456 | open | [
"enhancement"
] | 2024-09-09T11:34:17Z | 2024-10-06T18:35:29Z | 1 | ghost |
huggingface/diffusers | 9,392 | [Scheduler] Add SNR shift following SD3, would the rest of the code need to be modified? | **What API design would you like to have changed or added to the library? Why?**
With the increasing resolution of image or video generation, we need to introduce more noise at smaller T, such as SNR shift following SD3. I have observed that CogVideoX's schedule has already implemented [this](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L214). If I add this line to the DDPM schedule, would the rest of the code (e.g., noise addition, sampling, etc.) need to be modified? I assume it wouldn't, but I seek a precise response.
**What use case would this enable or better enable? Can you give us a code example?**
```
class DDPMScheduler(SchedulerMixin, ConfigMixin):
    def __init__(self, snr_shift_scale, **kwargs):
        # predefine beta and alpha as before
        ...
        # apply the SNR shift (the only added line)
        self.alphas_cumprod = self.alphas_cumprod / (snr_shift_scale + (1 - snr_shift_scale) * self.alphas_cumprod)
        # other code

    # Other functions are the same as before
```
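For what it's worth, a quick standalone illustration of what that single line does to `alphas_cumprod` (the value `snr_shift_scale = 3.0` is just an example for this sketch, not a claim about any scheduler's default):
```python
import torch

alphas_cumprod = torch.tensor([0.99, 0.9, 0.5, 0.1])
snr_shift_scale = 3.0  # example value; values > 1 lower alphas_cumprod, i.e. more noise at a given timestep

shifted = alphas_cumprod / (snr_shift_scale + (1 - snr_shift_scale) * alphas_cumprod)
print(shifted)  # approximately [0.9706, 0.7500, 0.2500, 0.0357]
```
Since the shift only remaps `alphas_cumprod` once at construction time, the noise-addition and sampling code would simply read the shifted values, which is the crux of my question above.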
| https://github.com/huggingface/diffusers/issues/9392 | open | [
"stale"
] | 2024-09-09T09:19:37Z | 2025-01-05T15:05:04Z | 7 | LinB203 |
huggingface/speech-to-speech | 96 | How to designate Melo TTS model to use my trained model? | Hi,
I am using Melo as the TTS, and I trained it with my own datasets. How do I point Melo (here in speech-to-speech) to my model?
Thanks! | https://github.com/huggingface/speech-to-speech/issues/96 | closed | [] | 2024-09-08T20:36:23Z | 2024-09-10T14:42:58Z | null | insufficient-will |
huggingface/huggingface_hub | 2,526 | How can I rename folders in given repo? I need to rename folders | ### Describe the bug
I am trying to rename like below, but it fails :/
```
from huggingface_hub import HfApi
import os

# Initialize the Hugging Face API
api = HfApi()

# Set the repository name
repo_name = "MonsterMMORPG/3D-Cartoon-Style-FLUX"

# Define the folder renaming mappings
folder_renames = {
    "Training-Checkpoints-NO-Captions": "Training-Checkpoints-Inconsistent-DATASET-NO-Captions",
    "Training-Checkpoints-With-Captions": "Training-Checkpoints-Inconsistent-DATASET-With-Captions"
}

# Function to rename folders
def rename_folder(repo_name, old_name, new_name):
    try:
        api.move_folder(
            repo_id=repo_name,
            path_in_repo=old_name,
            new_path=new_name,
            commit_message=f"Rename folder '{old_name}' to '{new_name}'"
        )
        print(f"Successfully renamed '{old_name}' to '{new_name}'")
    except Exception as e:
        print(f"Error renaming '{old_name}' to '{new_name}': {str(e)}")

# Iterate through the folder renaming mappings and rename each folder
for old_name, new_name in folder_renames.items():
    rename_folder(repo_name, old_name, new_name)

print("Folder renaming process completed.")
```
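For reference, one possible workaround sketched below (an assumption on my side, not a confirmed API for folder renames): copy every file under the old prefix to the new prefix and delete the originals in a single commit. Copy operations may have limitations for some file types, so this would need verification.
```python
from huggingface_hub import HfApi, CommitOperationCopy, CommitOperationDelete

api = HfApi()
repo_id = "MonsterMMORPG/3D-Cartoon-Style-FLUX"
old_prefix = "Training-Checkpoints-NO-Captions/"
new_prefix = "Training-Checkpoints-Inconsistent-DATASET-NO-Captions/"

operations = []
for path in api.list_repo_files(repo_id):
    if path.startswith(old_prefix):
        # Copy the file to the new location, then delete the original.
        operations.append(CommitOperationCopy(
            src_path_in_repo=path,
            path_in_repo=new_prefix + path[len(old_prefix):],
        ))
        operations.append(CommitOperationDelete(path_in_repo=path))

api.create_commit(
    repo_id=repo_id,
    operations=operations,
    commit_message=f"Rename '{old_prefix.rstrip('/')}' to '{new_prefix.rstrip('/')}'",
)
```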
### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
latest
```
| https://github.com/huggingface/huggingface_hub/issues/2526 | closed | [
"bug"
] | 2024-09-07T17:23:54Z | 2024-09-09T10:49:26Z | null | FurkanGozukara |
huggingface/transformers | 33,359 | [Docs] How to build offline HTML or Docset files for other documentation viewers? | ### Feature request
How can I build the docs into HTML files for use with other documentation viewers like [Dash](https://www.kapeli.com/dash) , [Dash-User-Contributions](https://github.com/Kapeli/Dash-User-Contributions)?
I successfully built the PyTorch docs for Dash by working directly in their `docs/` directory. Iโm wondering if a similar process exists for Hugging Face libraries.
### Motivation
The Dash docset viewer is very useful for viewing multiple documentation sets in one place, even offline. It would be great to support it and include all Hugging Face libraries.
### Your contribution
Iโve built the PyTorch docs for Dash, so Iโm familiar with incorporating and generating docsets. | https://github.com/huggingface/transformers/issues/33359 | closed | [
"Documentation",
"Feature request"
] | 2024-09-06T15:51:35Z | 2024-09-10T23:43:57Z | null | ueoo |
huggingface/transformers | 33,343 | How to install transformers==4.45? Two or three days ago I could install it successfully, but today I cannot. | ### System Info
torch2.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install git+https://github.com/huggingface/transformers.git
### Expected behavior
How can I install the latest transformers? | https://github.com/huggingface/transformers/issues/33343 | closed | [
"Installation",
"bug"
] | 2024-09-06T08:23:00Z | 2024-10-16T08:04:10Z | null | HyacinthJingjing |
huggingface/optimum-nvidia | 149 | How to use TensorRT model converter | Referring to [src/optimum/nvidia/export/converter.py] -> class `TensorRTModelConverter`, which can 'Take a local model and create the TRTLLM checkpoint and engine'.
Questions:
- What are the applicable local model formats? e.g. JAX, HuggingFace, DeepSpeed
- How can this converter be used on its own to generate the TRTLLM checkpoint/engine? Could you please share a tutorial, if one exists?
Thank you.
| https://github.com/huggingface/optimum-nvidia/issues/149 | open | [] | 2024-09-05T18:55:15Z | 2024-09-05T18:55:15Z | null | FortunaZhang |
huggingface/datasets | 7,139 | Use load_dataset to load imagenet-1K but find an empty dataset | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
    traindir = os.path.join(data_path, train_folder)
    valdir = os.path.join(data_path, val_folder)

    def transform_val_examples(examples):
        transform = Compose([
            Resize(256),
            CenterCrop(224),
            ToTensor(),
        ])
        examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
        return examples

    def transform_train_examples(examples):
        transform = Compose([
            RandomResizedCrop(224),
            RandomHorizontalFlip(),
            ToTensor(),
        ])
        examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
        return examples

    # @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)
    # train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
    # test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)

    train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
    test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)

    print(train_set["label"])

    train_set.set_transform(transform_train_examples)
    test_set.set_transform(transform_val_examples)

    return train_set, test_set
```
Above is the code, but the output of the print is a list of None:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
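A narrower check that might help localize the problem (hypothetical on my side, not something shown above): inspect the declared feature type and a single example instead of materializing the whole column.
```python
from datasets import load_dataset

train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
print(train_set.features["label"])   # expected to be a ClassLabel for this dataset
print(train_set[0]["label"])         # a single example's label
```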
### Steps to reproduce the bug
1. just ran the code
2. see the print
### Expected behavior
I do not know how to fix this; can anyone provide help? It is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | https://github.com/huggingface/datasets/issues/7139 | open | [] | 2024-09-05T15:12:22Z | 2024-10-09T04:02:41Z | 2 | fscdc |
huggingface/datasets | 7,138 | Cache only changed columns? | ### Feature request
Cache only the actual changes to the dataset i.e. changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again.
### Your contribution
Is this even viable in the current architecture of the package?
I quickly looked into it and it seems it would require significant changes.
I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it? | https://github.com/huggingface/datasets/issues/7138 | open | [
"enhancement"
] | 2024-09-05T12:56:47Z | 2024-09-20T13:27:20Z | 2 | Modexus |
huggingface/lerobot | 413 | Compatible off-the-shelf robots? | Huge thanks for making all of this available!
Can you recommend any (low-cost) off-the-shelf robots to work with? | https://github.com/huggingface/lerobot/issues/413 | closed | [
"question"
] | 2024-09-05T10:21:24Z | 2025-10-08T08:27:56Z | null | danielfriis |
huggingface/diffusers | 9,362 | IndexError: index 29 is out of bounds for dimension 0 with size 29 | ### Describe the bug
I have three problems because of the same reason.
1) TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'
# upon completion increase step index by one
self._step_index += 1 <---Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L303)
2) IndexError: index 29 is out of bounds for dimension 0 with size 29
sigma_next = self.sigmas[self.step_index + 1] <--- Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L295)
3) RuntimeError: Already borrowed
if _truncation is not None:
self._tokenizer.no_truncation() <--- Error here
Example: https://github.com/huggingface/tokenizers/issues/537
The reason, as I understand it, is threads. Do you know how I can solve this problem?
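One possible mitigation, sketched below as an assumption on my side (not an official fix): serialize access to the shared pipeline so two threads never advance the scheduler's step index or call the tokenizer at the same time.
```python
import threading

# `pipeline` is the shared FluxPipeline instance from the reproduction below.
_pipe_lock = threading.Lock()

def generate_locked(prompt: str, **kwargs):
    # Only one thread may run the pipeline (and mutate scheduler state) at a time.
    with _pipe_lock:
        return pipeline(prompt=prompt, **kwargs).images[0]
```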
### Reproduction
```
from diffusers import (
    FluxPipeline,
    FlowMatchEulerDiscreteScheduler,
)
import torch

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

seed = 42
height = 720
width = 1280
generator = torch.Generator(device="cuda").manual_seed(seed)

pipeline(
    prompt=prompt + ", highly detailed, all is depicted as silhouettes, without words",
    guidance_scale=0.,
    # num_inference_steps=10,
    height=height,
    width=width,
    generator=generator,
    max_sequence_length=256,
).images[0]
```
### Logs
```shell
For example:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/app/main.py", line 29, in generate_image
image = imagegen.run(**data)
File "/app/image_generator.py", line 102, in run
return generate_image()
File "/app/image_generator.py", line 89, in generate_image
return self.pipeline(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 734, in __call__
latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
File "/opt/conda/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 295, in step
sigma_next = self.sigmas[self.step_index + 1]
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```
### System Info
- ๐ค Diffusers version: 0.31.0.dev0
- Platform: Linux-5.4.0-171-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.2.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.24.6
- Transformers version: 4.44.2
- Accelerate version: 0.34.0
- PEFT version: 0.12.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.4
- xFormers version: not installed
- Accelerator: NVIDIA RTX A6000, 46068 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@yiyixuxu @sayakpaul @DN6 | https://github.com/huggingface/diffusers/issues/9362 | open | [
"bug",
"stale"
] | 2024-09-04T11:02:49Z | 2024-11-25T15:04:22Z | 8 | Anvarka |
huggingface/tokenizers | 1,627 | Rust: How to handle models with `precompiled_charsmap = null` | Hi guys,
I'm currently working on https://github.com/supabase/edge-runtime/pull/368, which intends to add a Rust implementation of `pipeline()`.
While coding the `translation` task, I found that I can't load a `Tokenizer` instance for the [Xenova/opus-mt-en-fr](https://huggingface.co/Xenova/opus-mt-en-fr) `onnx` model and the other `opus-mt-*` variants.
<details>
<summary>I got the following:</summary>
```rust
let tokenizer_path = Path::new("opus-mt-en-fr/tokenizer.json");
let tokenizer = Tokenizer::from_file(tokenizer_path).unwrap();
```
```
thread 'main' panicked at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/normalizers/mod.rs:143:26:
Precompiled: Error("invalid type: null, expected a borrowed string", line: 1, column: 28)
stack backtrace:
0: rust_begin_unwind
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/panicking.rs:74:14
2: core::result::unwrap_failed
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/result.rs:1679:5
3: core::result::Result<T,E>::expect
at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/result.rs:1059:23
4: <tokenizers::normalizers::NormalizerWrapper as serde::de::Deserialize>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/normalizers/mod.rs:139:25
5: <serde::de::impls::OptionVisitor<T> as serde::de::Visitor>::visit_some
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/impls.rs:916:9
6: <&mut serde_json::de::Deserializer<R> as serde::de::Deserializer>::deserialize_option
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1672:18
7: serde::de::impls::<impl serde::de::Deserialize for core::option::Option<T>>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/impls.rs:935:9
8: <core::marker::PhantomData<T> as serde::de::DeserializeSeed>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/mod.rs:801:9
9: <serde_json::de::MapAccess<R> as serde::de::MapAccess>::next_value_seed
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2008:9
10: serde::de::MapAccess::next_value
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/mod.rs:1874:9
11: <tokenizers::tokenizer::serialization::TokenizerVisitor<M,N,PT,PP,D> as serde::de::Visitor>::visit_map
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/serialization.rs:132:55
12: <&mut serde_json::de::Deserializer<R> as serde::de::Deserializer>::deserialize_struct
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1840:31
13: tokenizers::tokenizer::serialization::<impl serde::de::Deserialize for tokenizers::tokenizer::TokenizerImpl<M,N,PT,PP,D>>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/serialization.rs:62:9
14: <tokenizers::tokenizer::_::<impl serde::de::Deserialize for tokenizers::tokenizer::Tokenizer>::deserialize::__Visitor as serde::de::Visitor>::visit_newtype_struct
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:408:21
15: <&mut serde_json::de::Deserializer<R> as serde::de::Deserializer>::deserialize_newtype_struct
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1723:9
16: tokenizers::tokenizer::_::<impl serde::de::Deserialize for tokenizers::tokenizer::Tokenizer>::deserialize
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:408:21
17: serde_json::de::from_trait
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2478:22
18: serde_json::de::from_str
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2679:5
19: tokenizers::tokenizer::Tokenizer::from_file
at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:439:25
20: transformers_rs::pipeline::tasks::seq_to_seq::seq_to_seq
at ./src/pipeline/tasks/seq_to_seq.rs:51:21
21: app::main
at ./examples/app/src/main.rs:78:5
22: core::ops::function::FnOnce::call_on | https://github.com/huggingface/tokenizers/issues/1627 | open | [
"Feature Request"
] | 2024-09-04T08:33:06Z | 2024-10-06T15:34:06Z | null | kallebysantos |
huggingface/optimum | 2,013 | Is it possible to convert decoder_model_merged.onnx to TensorRT via the trtexec command? | First, I converted whisper-tiny to ONNX via optimum-cli:
`optimum-cli export onnx --model openai/whisper-tiny --task automatic-speech-recognition-with-past whisper-tiny-onnx`
I got the config files, the encoder, and the decoder_merged model.
Then I converted the encoder and decoder_merged to TensorRT via NGC version 23.09-py3; the encoder converted without problems, but decoder_merged failed during conversion.
`trtexec --onnx=/workspace/models/whisper-tiny-onnx/decoder_model_merged.onnx --saveEngine=/workspace/models/whisper-tiny-onnx/decoder_model_merged.plan`
The following error happens:
`[5] Assertion failed: (node.output().size() <= static_cast<int32_t>(outputs.size())) && "Node has more output tensors than TRT expected."`

Can someone help me with this, or is there another approach that is better practice? Please . . . | https://github.com/huggingface/optimum/issues/2013 | closed | [] | 2024-09-03T17:52:40Z | 2024-09-15T10:16:34Z | 3 | ccyrene |
huggingface/lerobot | 407 | Multi-Image support for VQ-BeT | Hello, I wanted to ask if there is a possibility to have VQ-BeT running on multiple cameras for environments that have different views, like Robomimic? If so, can someone give me pointers on what exactly I need to change? I would be happy to submit a PR once I get it working on my side and finish the ICLR deadline!
Currently, if I understand correctly we need to change the `VQBeTRgbEncoder`, it seems like it supports multiple camera views but there is an [assert statement](https://github.com/huggingface/lerobot/blob/27ba2951d128a3db2497d1337031e01fb995ccfe/lerobot/common/policies/vqbet/modeling_vqbet.py#L745) that checks the length of the image views to be 1. Is there a specific reason for this assert statement, i.e., I need to change something else? | https://github.com/huggingface/lerobot/issues/407 | closed | [
"question",
"policies"
] | 2024-09-03T17:00:23Z | 2025-10-08T08:27:39Z | null | bkpcoding |
huggingface/optimum | 2,009 | [Feature request] Add kwargs or additional options for torch.onnx.export | ### Feature request
In `optimum.exporters.onnx.convert`, the `export_pytorch` function could take additional keyword arguments that are passed through to `torch.onnx.export`.
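A hypothetical sketch of what such a pass-through could look like (illustrative only; this is not the current `optimum` signature):
```python
import torch

def export_pytorch(model, dummy_inputs, output_path, **onnx_export_kwargs):
    # Forward any extra keyword arguments straight to torch.onnx.export,
    # e.g. opset_version, dynamic_axes, do_constant_folding, verbose.
    torch.onnx.export(
        model,
        dummy_inputs,
        output_path,
        **onnx_export_kwargs,
    )
```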
### Motivation
Is such an option possible, or would it break any other features? Is there a reason why no such option is available yet?
### Your contribution
I could contribute this if it doesn't break any other features or the current behavior. | https://github.com/huggingface/optimum/issues/2009 | open | [
"onnx"
] | 2024-09-03T13:52:50Z | 2024-10-08T15:27:26Z | 0 | martinkorelic |
huggingface/speech-to-speech | 74 | How to integrate it with frontend | Hi, What steps should I follow to create a web app UI and integrate it?
Many thanks for considering my request. | https://github.com/huggingface/speech-to-speech/issues/74 | open | [] | 2024-09-03T12:18:52Z | 2024-09-03T13:52:08Z | null | shrinivasait |
huggingface/diffusers | 9,356 | pipeline_stable_diffusion_xl_adapter | ### Describe the bug
I want to rewrite the call function of the pipeline_stable_diffusion_xl_adapter. When I want to use the function prepare_ip_adapter_image_embeds, there is an error called "AttributeError: 'NoneType' object has no attribute 'image_projection_layers'". The error tells me that the attribute self.unet.encoder_hid_proj is 'NoneType'. The pre-trained model is 'stabilityai/stable-diffusion-xl-base-1.0'. Is there anything wrong with how I use it? Thank you.
### Reproduction
```python
model_path = 'stabilityai/stable-diffusion-xl-base-1.0'
adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-openpose-sdxl-1.0",)
scheduler = DDPMScheduler.from_pretrained(model_path, subfolder="scheduler")
pipe = AdapterPosePipeline.from_pretrained(model_path, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler).to(device)

image_embeds = self.prepare_ip_adapter_image_embeds(
    image,
    ip_adapter_image_embeds,
    device,
    batch_size * num_images_per_prompt,
    self.do_classifier_free_guidance,
)
```
### Logs
```shell
root@autodl-container-9d8d46936f-161f523c:~/autodl-tmp/COMP5704_Pose_Driven/src# python run.py
/root/miniconda3/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_fwd")
/root/miniconda3/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_bwd")
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/mediapipe_face/mediapipe_face_common.py:7: UserWarning: The module 'mediapipe' is not installed. The package will have limited functionality. Please install it using the command: pip install 'mediapipe'
warnings.warn(
Loading pipeline components...: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7/7 [00:01<00:00, 4.87it/s]
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/body.py:34: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model_dict = util.transfer(self.model, torch.load(model_path))
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/hand.py:14: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model_dict = util.transfer(self.model, torch.load(model_path))
/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/face.py:325: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. | https://github.com/huggingface/diffusers/issues/9356 | open | [
"bug",
"stale"
] | 2024-09-03T10:25:57Z | 2024-10-28T15:03:18Z | 6 | Yuhan291 |
huggingface/diffusers | 9,352 | Text generation? | Hi thanks for this great library!
There seems to be some diffusion models that generate text, instead of images. (For example, these two surveys: https://arxiv.org/abs/2303.06574, https://www.semanticscholar.org/paper/Diffusion-models-in-text-generation%3A-a-survey-Yi-Chen/41941f072db18972b610de9979e755afba35f11e). Therefore, it would be great if Diffusers could support this.
| https://github.com/huggingface/diffusers/issues/9352 | open | [
"wip"
] | 2024-09-03T06:54:38Z | 2024-11-23T04:57:37Z | 13 | fzyzcjy |
huggingface/speech-to-speech | 71 | How to run on Ubuntu | I am trying to run it locally on my Ubuntu machine. I have an NVIDIA GPU and have already set up CUDA.
```
python s2s_pipeline.py \
--recv_host 0.0.0.0 \
--send_host 0.0.0.0 \
--lm_model_name microsoft/Phi-3-mini-4k-instruct \
--init_chat_role system \
--stt_compile_mode reduce-overhead \
--tts_compile_mode default
```
This is the command I ran in the terminal, but I am getting an error like this:
```
(venv) basal-desktop@basal-desktop:/media/basal-desktop/E/speech-to-speech$ python s2s_pipeline.py --recv_host 0.0.0.0 --send_host 0.0.0.0 --lm_model_name microsoft/Phi-3-mini-4k-instruct --init_chat_role system --stt_compile_mode reduce-overhead --tts_compile_mode default
[nltk_data] Downloading package averaged_perceptron_tagger_eng to
[nltk_data] /home/basal-desktop/nltk_data...
[nltk_data] Package averaged_perceptron_tagger_eng is already up-to-
[nltk_data] date!
Using cache found in /home/basal-desktop/.cache/torch/hub/snakers4_silero-vad_master
2024-09-03 11:20:08,495 - STT.whisper_stt_handler - INFO - Warming up WhisperSTTHandler
You have passed task=transcribe, but also have set `forced_decoder_ids` to [[1, None], [2, 50360]] which creates a conflict. `forced_decoder_ids` will be ignored in favor of task=transcribe.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token.As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
/tmp/tmp1sx5flzq/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp7dgszafh/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpgutcpzdq/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpxya7vifd/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpoxfa0b57/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp9sd15wgk/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpuimau_4j/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp2hzix58m/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmppnjhbdhp/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp2dvfaztp/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpaofqmu2k/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpcnc1scdn/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpnsf4b2jl/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpf_5rg_m_/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpnf8nvq6n/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp2f8iezjt/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp_om2_15p/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpc0t1q8vd/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpdsdc_2ef/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp7h6fpvoc/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmp4qfy9i7j/main.c:5:10: fatal error: Python.h: No such file or directory
5 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
/tmp/tmpsjvhjzmz/main.c:5:10: fatal error: Py | https://github.com/huggingface/speech-to-speech/issues/71 | closed | [] | 2024-09-03T06:02:45Z | 2024-10-01T07:45:20Z | null | Basal-Analytics |
huggingface/optimum | 2,006 | Support for gemma2-2b-it(gemma 2nd version) Model Export in Optimum for OpenVINO | ### Feature request
Please provide support for gemma2 model export in Optimum for OpenVINO.
optimum version: 1.21.4
transformers version: 4.43.4
### Motivation
I encountered an issue while trying to export a gemma2 model using the optimum library for ONNX export. The error message suggests that the gemma2 model is either a custom or unsupported architecture, and I need to provide a custom export configuration.
error:raise ValueError(
ValueError: Trying to export a gemma2 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type gemma2 to be supported natively in the OpenVINO export
### Your contribution
It would be great if support for the gemma2 model could be added natively in the optimum library for OpenVINO export. Alternatively, detailed guidance on how to create a custom export configuration for this model would be appreciated. | https://github.com/huggingface/optimum/issues/2006 | open | [
"onnx"
] | 2024-09-03T05:54:51Z | 2025-01-22T15:40:04Z | 2 | chakka12345677 |
huggingface/transformers | 33,270 | Static KV cache status: How to use it? Does it work for all models? | I see that there are many PRs about [StaticCache](https://github.com/huggingface/transformers/pulls?q=is%3Apr+StaticCache), but I couldn't find a clear documentation on how to use it.
#### What I want
* To not have Transformers allocate memory dynamically for the KV cache when using `model.generate()`, as that leads to increased memory usage (due to garbage collection not happening fast/often enough) and worse performance.
* To use that by default always, for every model, for every supported quantization backend (AutoAWQ, AutoGPTQ, AQLM, bitsandbytes, etc).
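For concreteness, a minimal sketch of the `cache_implementation="static"` entry point (assuming that is the intended API; whether it covers every model and quantization backend is exactly the question):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example Llama-architecture checkpoint; static cache support is per-model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Static KV cache test:", return_tensors="pt")
# Ask generate() to pre-allocate a fixed-size ("static") KV cache instead of growing it dynamically.
output = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```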
#### Who can help?
Maybe @gante | https://github.com/huggingface/transformers/issues/33270 | closed | [] | 2024-09-03T02:17:54Z | 2024-11-25T16:17:25Z | null | oobabooga |
huggingface/transformers.js | 917 | Where should I get `decoder_model_merged` file from? | ### Question
Hey,
I'm trying to use `whisper-web` demo with my finetuned model.
After I managed to connect my model to the demo application, I'm getting errors related to this:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/src/models.js#L771
Basically, when `transformers.js` tries to load a whisper model, it looks for files called `decoder_model_merged.onnx` / `decoder_model_merged_quantized.onnx` / `decoder_model_merged_fp16.onnx`.
The thing is, that the conversion script didn't create any of these files.
That's how the conversion script output looks like:

Please help me figure out what I am missing here.
P.S. Once I get it to work, I'll be happy to open a PR on the `whisper-web` repository that enables using local models together with remote (HF Hub) models.
Thanks ! | https://github.com/huggingface/transformers.js/issues/917 | closed | [
"question"
] | 2024-09-02T07:30:57Z | 2025-02-26T12:05:05Z | null | abuchnick-aiola |
huggingface/diffusers | 9,339 | SD3 inpainting | I found the StableDiffusion3InpaintPipeline; where can I find the weights for SD3 inpainting? | https://github.com/huggingface/diffusers/issues/9339 | closed | [
"stale"
] | 2024-09-02T05:00:19Z | 2024-10-02T15:43:24Z | 5 | ucasyjz |