| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 1,188 | It seems like the Xenova/swin2SR-classical-sr-x2-64 model only works with image URLs? How to implement partial output with it? | ### Question
I was playing with the React demo and the Xenova/swin2SR-classical-sr-x2-64 model.
https://huggingface.co/Xenova/swin2SR-classical-sr-x2-64
I tried to give an object URL to the upscaler function but it doesn't work; I wonder if it only accepts image URLs.
Also, I want to know how to do partial output like the translate React demo.
I tried to convert the output data to base64 for rendering but it doesn't work.


Does it output raw PNG data only? | https://github.com/huggingface/transformers.js/issues/1188 | open | [
"question"
] | 2025-02-10T02:18:32Z | 2025-02-16T00:50:36Z | null | codenoobforreal |
huggingface/transformers.js | 1,186 | Which undocumented transformersJS Generator parameters are supported? crapCheck ran fine. | ### Question
Sorry to bug you again, Josh @xenova. I was trying a set of generator parameters and things were working fine without errors, so I tried the parameter "crapCheck" and it also ran without errors; now I am worried whether anything works. In the docs it seems that these are supported:
Supported Parameters (Confirmed in Docs)
max_new_tokens: ✅ Yes (Controls the number of new tokens to generate)
do_sample: ✅ Yes (Enables sampling)
top_p: ✅ Yes (Nucleus sampling)
temperature: ✅ Yes (Controls randomness)
top_k: ✅ Yes (Top-k filtering)
num_return_sequences: ✅ Yes (Number of sequences to return)
Demo code is [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/deepseek-r1-webgpu/deepseek-r1-webgpu-00.html), but without all of the parameters below, just some of them.
Any suggestions on what may work and what to ignore?
```
const output = await generator(messages, {
  max_new_tokens: myMaxT, // 512
  do_sample: myDo_sample, // true
  top_p: myTop_p, // 0.9
  temperature: myTemperature, // 0.7
  top_k: myTop_k, // testing if it does top_k 50
  num_return_sequences: 1, // 1
  streamer, // calls the function TextStreamer
  min_length: myMin_length, // Ensures at least 20 tokens are generated
  repetition_penalty: myRepetition_penalty, // 1.2
  length_penalty: myLength_penalty, // 1.5
  early_stopping: myEarly_stopping, // end testing true false
  chain_of_thought: myChain_of_thought, // true
  stopping_criteria: stoppingCriteria, // Use stopping criteria for clean stopping
  crapCheck: 65, // fairly sure this is not supported
});
``` | https://github.com/huggingface/transformers.js/issues/1186 | open | [
"question"
] | 2025-02-09T05:35:57Z | 2025-02-09T05:35:57Z | null | hpssjellis |
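Regarding the crapCheck experiment above: the generator evidently accepts unknown keys without complaint, so a caller-side allowlist is one way to catch typos early. A pure-Python sketch of the idea (the helper name is made up; the allowlist is just the documented set quoted above, and the same pattern applies in JS):

```python
# Parameters confirmed in the docs excerpt above; anything else is dropped.
SUPPORTED_GENERATION_KEYS = {
    "max_new_tokens", "do_sample", "top_p",
    "temperature", "top_k", "num_return_sequences",
}

def filter_generation_options(options: dict) -> dict:
    """Return only the documented keys, warning about the rest."""
    unknown = sorted(set(options) - SUPPORTED_GENERATION_KEYS)
    if unknown:
        print(f"Dropping undocumented generation options: {unknown}")
    return {k: v for k, v in options.items() if k in SUPPORTED_GENERATION_KEYS}

opts = {"max_new_tokens": 512, "temperature": 0.7, "crapCheck": 65}
clean = filter_generation_options(opts)
```

Silent acceptance of unknown keys means a typo like `crapCheck` never fails loudly; filtering up front makes the failure visible at the call site instead.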
huggingface/lighteval | 545 | couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512, how to set the model path? | How to set the eval model path?
## Eval
When I use the script to evaluate a model with MATH-500:
```shell
NUM_GPUS=8 # Set to 8 for 32B and 70B models
MODEL=Deepseek_R1_distill/Qwen2.5-32B-Open-R1-Distill/
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS"
OUTPUT_DIR=data/evals/Qwen2.5-32B-Open-R1-Distill

lighteval vllm $MODEL_ARGS "custom|math_500|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir $OUTPUT_DIR
```
## Error
Error: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512 is not the path to a directory containing a file named
config.json.
Where to set the eval model path in the script? | https://github.com/huggingface/lighteval/issues/545 | closed | [] | 2025-02-08T07:26:28Z | 2025-05-15T15:27:30Z | null | bannima |
huggingface/open-r1 | 240 | How to do knowledge distillation training | In the DeepSeek R1 technical report, there is a section at the end on small models built by distillation: DeepSeek R1 acts as the teacher model, and Qwen and Llama act as the student models, which are SFT-ed on distilled data. However, it seems that the actual knowledge-distillation process is not involved here (open-r1) — that is, the R1 teacher model correcting the student model's outputs — but simply SFT on distilled data. | https://github.com/huggingface/open-r1/issues/240 | open | [] | 2025-02-08T06:50:20Z | 2025-02-27T08:16:02Z | null | RyanOvO |
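For context on the distinction drawn above: logit-level distillation adds a soft-target term comparing the student's token distribution to the teacher's, rather than only doing SFT on teacher-sampled text. A minimal numeric sketch of that KL term (pure Python, purely illustrative — real training would use the frameworks' logits):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [2.0, 1.0, 0.1]
student_logits = [1.5, 1.2, 0.3]

# Soft-label distillation loss for one token position.
distill_loss = kl_divergence(softmax(teacher_logits), softmax(student_logits))

# KL is zero when the student matches the teacher exactly.
zero_loss = kl_divergence(softmax(teacher_logits), softmax(teacher_logits))
```

SFT on distilled data only ever sees the teacher's sampled tokens; the KL term above also penalizes the student for mis-ranking the tokens the teacher did not sample.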
huggingface/transformers.js-examples | 42 | How to stop the transformerJS webGPU models when they chat for too long. | @xenova Hi Josh.
I am making several very capable TransformerJS single page applications and I really like what they are doing. My demo index page is [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/index.html), but I can't seem to stop any of my examples if they are taking too long and then be able to do another request. I have tried several methods with the streamer, a stopFlag or an AbortController but nothing seems to be error free.
Any suggestions I have included my single page application of deepseekR1 for reference.
(Note: Single page applications are great for beginners and can be easily downloaded and run locally after the model is cached)
```
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <script type="module">
      import { pipeline, TextStreamer } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.3.2';

      // Needed for buttons to call module functions
      window.myLoadModel = myLoadModel
      window.myAskQuestion = myAskQuestion

      let generator
      //let myStopFlag = false; // Global stop flag
      let streamer = null; // Keep track of streamer instance
      let myModel
      let abortController; // Add this global variable
      abortController = new AbortController(); // Create the controller

      let myContent = document.getElementById('myArea01').value
      console.log(myContent)

      // Create a text generation pipeline
      async function myLoadModel() {
        myModel = document.getElementById('myModelInput').value
        const progressCallback = (progress) => {
          // console.log(progress);
          const myProg = parseInt(progress.progress);
          document.getElementById('progress').textContent = `Loading: ${progress.file} at ${myProg}%`; //(progress * 100).toFixed(2)
        };
        generator = await pipeline("text-generation", myModel, { dtype: "q4f16", device: "webgpu", progress_callback: progressCallback });
        document.getElementById('myLoadButton').disabled = true
        document.getElementById('myAskButton').disabled = false
        document.getElementById('progress').textContent = `Loading: Done!`;
      }

      async function myAskQuestion() {
        document.getElementById('myTextarea01').value = '';
        myContent = document.getElementById('myArea01').value;
        const messages = [{ role: "user", content: myContent }];
        // myStopFlag = false; // Reset stop flag before starting
        // document.getElementById('myStopButton').disabled = false; // Enable stop button

        // Clear any existing streamer instance before starting a new one
        streamer = new TextStreamer(generator.tokenizer, {
          skip_prompt: true,
          callback_function: (text) => {
            // if (myStopFlag) return; // Stop updating if stop flag is set
            if (!window.startTime) {
              window.startTime = performance.now();
            }
            const currentTime = performance.now();
            const elapsedTime = (currentTime - window.startTime) / 1000;
            document.getElementById('myTextarea01').value += text;
            const generatedTokens = document.getElementById('myTextarea01').value.length;
            const tokensPerSecond = generatedTokens / elapsedTime;
            const progress = parseInt((generatedTokens * 100) / (myMaxT * 10));
            document.getElementById('progress').textContent = `Answer progress: ~${progress}%, Tokens per second: ${tokensPerSecond.toFixed(2)}`;
            if (progress >= 100) {
              window.startTime = null;
            }
          },
        });

        const myMaxT = document.getElementById('myMaxTokens').value;
        const myDo_sample = document.getElementById('myDo_sample').value;
        const myTop_p = document.getElementById('myTop_p').value;
        const myTemperature = document.getElementById('myTemperature').value;
        const myChain_of_thought = document.getElementById('myChain_of_thought').value;
        console.log(` maxT:${myMaxT}, do-sample:${myDo_sample}, top_p:${myTop_p}, temp:${myTemperature}, chain-of-thought:${myChain_of_thought}, `)

        try {
          const output = await generator(messages, {
            max_new_tokens: myMaxT,
            do_sample: myDo_sample,
            top_p: myTop_p, // 0.9
            temperature: myTemperature, // 0.7
            streamer,
            chain_of_thought: myChain_of_thought,
          });
          // if (!myStopFlag) {
          let fullReply = output[0].generated_text.at(-1).content;
          let myReply = fullReply.replace(/<think>/g, "").replace(/<\/think>/g, "\r\n\r\nResponse: ").replace(/```/g, "");
          document.getElementById('myTextarea01').value = `Asking: ${myContent}\r\n\r\nAnswer: ${myReply}`;
          // }
        } catch (error) {
          console.error('Error:', error);
        }
      }
    </script>
  </head>
  <body>
    <h1>DeepSeek-R1-webgpu in the browser</h1>
    Open the console. shift-ctrl-i <br><br>
    Fully javascript activated. If you don't want to completely download
    <a href="https://huggingface.co/onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX">
      onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX </a> then you should probably close this page.<br><br>
    It will load from cache if downloaded once.<br><br>
    Uses the Web-gpu model or other models: <input id="myModelInput" type=text size=60 value="onnx-communit | https://github.com/huggingface/transformers.js-examples/issues/42 | closed | [
huggingface/lerobot | 692 | How to evaluate policy on real robot and sim environment | I am working on evaluating a trained policy on a real robot and in a simulated environment (Isaac Gym). However, I am uncertain about the process and communication mechanisms involved.
My questions are:
- Evaluating on a real robot:
> How do I retrieve real-time observations from the real robot with Lerobot?
- Evaluating in simulation (Isaac Gym):
> Can I directly evaluate my trained policy in Isaac Gym?
| https://github.com/huggingface/lerobot/issues/692 | closed | [
"question",
"simulation"
] | 2025-02-07T13:40:27Z | 2025-10-17T11:20:29Z | null | ShiyaoExtendQA |
huggingface/diffusers | 10,743 | Support zero-3 for FLUX training | ### Describe the bug
Due to memory limitations, I am attempting to use Zero-3 for Flux training on 8 GPUs with 32GB each. I encountered a bug similar to the one reported in this issue: https://github.com/huggingface/diffusers/issues/1865. I made modifications based on the solution proposed in this pull request: https://github.com/huggingface/diffusers/pull/3076. However, the same error persists. In my opinion, the fix does not work as expected, at least not entirely. Could you advise on how to modify it further?
The relevant code from https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py#L1157 has been updated as follows:
```python
def deepspeed_zero_init_disabled_context_manager():
    """
    returns either a context list that includes one that will disable zero.Init or an empty context list
    """
    deepspeed_plugin = AcceleratorState().deepspeed_plugin if accelerate.state.is_initialized() else None
    print(f"deepspeed_plugin: {deepspeed_plugin}")
    if deepspeed_plugin is None:
        return []

    return [deepspeed_plugin.zero3_init_context_manager(enable=False)]

with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
    text_encoder_one, text_encoder_two = load_text_encoders(text_encoder_cls_one, text_encoder_cls_two)
    vae = AutoencoderKL.from_pretrained(
        args.pretrained_model_name_or_path,
        subfolder="vae",
        revision=args.revision,
        variant=args.variant,
    )
```
### Reproduction
deepspeed config:
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps":"auto",
"zero_optimization": {
"stage": 3,
"offload_optimizer": {"device": "cpu"},
"stage3_gather_16bit_weights_on_model_save": false,
"overlap_comm": false
},
"bf16": {
"enabled": true
},
"fp16": {
"enabled": false
}
}
```
accelerate config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: "config/ds_config.json"
distributed_type: DEEPSPEED
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 8
```
training shell:
```
#!/bin/bash
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux"
export DS_SKIP_CUDA_CHECK=1
export ACCELERATE_CONFIG_FILE="config/accelerate_config.yaml"
ACCELERATE_CONFIG_FILE_PATH=${1:-$ACCELERATE_CONFIG_FILE}
FLUXOUTPUT_DIR=flux_lora_output
mkdir -p $FLUXOUTPUT_DIR
accelerate launch --config_file $ACCELERATE_CONFIG_FILE_PATH train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--mixed_precision="bf16" \
--instance_prompt="a photo of sks dog" \
--resolution=1024 \
--train_batch_size=4 \
--guidance_scale=1 \
--gradient_accumulation_steps=1 \
--learning_rate=1e-4 \
--report_to="tensorboard" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=100 \
--gradient_checkpointing \
--seed="0"
```
### Logs
```shell
RuntimeError: 'weight' must be 2-D
```
### System Info
pytorch: 2.1.0
deepspeed: 0.14.0
accelerate: 1.3.0
diffusers: develop
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/10743 | closed | [
"bug"
] | 2025-02-07T12:50:44Z | 2025-10-27T09:33:59Z | 9 | xiaoyewww |
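A side note on the `'weight' must be 2-D` error reported above: it is the embedding lookup complaining that it received a non-2-D weight tensor, which is what happens when ZeRO-3 has partitioned a parameter away and a module uses it without gathering it first. A torch-only reproduction of the underlying check (illustrative; DeepSpeed itself is not involved here):

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 4)      # a normal embedding table: 2-D
ids = torch.tensor([1, 2, 3])

ok = F.embedding(ids, weight)    # works: result has shape (3, 4)

# ZeRO-3 leaves unmaterialized parameters as flat placeholders, so the
# same lookup then sees a non-2-D tensor and raises at this exact check.
error_message = ""
try:
    F.embedding(ids, weight.flatten())
except RuntimeError as exc:
    error_message = str(exc)
```

This is why the usual remedies are either disabling `zero.Init` for the frozen submodules (as the referenced PR attempts) or gathering the parameters before they are used.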
huggingface/alignment-handbook | 210 | Problem with multi-epoch training | Hi, I ran the ORPO code with 1 epoch and there was no issue. But when I tried to run it with 5 epochs, I got the following error right at the start of the second epoch:
```
RuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32 notwithstanding
```
Any idea of what could be wrong and how to fix it? Thank you! | https://github.com/huggingface/alignment-handbook/issues/210 | open | [] | 2025-02-07T04:50:41Z | 2025-02-07T04:50:41Z | 0 | sowmaster |
huggingface/smolagents | 521 | authenticated sessions with smolagents (how to be logged in during browser use) | **Is your feature request related to a problem? Please describe.**
I would like smolagents to be able to use websites with my login credentials.
**Describe the solution you'd like**
Either a way to give Helium credentials, or a way to use my actual browser, like: https://github.com/browser-use/browser-use/blob/main/examples/browser/real_browser.py
**Is this not possible with the current options?**
I'm fairly certain this is not possible with the current implementation. (If it is, could you share demo code?)
**Describe alternatives you've considered**
I can use https://github.com/browser-use/browser-use/ instead
**Additional context**
https://github.com/browser-use/browser-use/ does a really good job of providing multiple options for this. | https://github.com/huggingface/smolagents/issues/521 | open | [
"enhancement"
] | 2025-02-06T15:51:53Z | 2025-02-06T15:51:53Z | null | rawwerks |
huggingface/open-r1 | 210 | How to push own dataset to hub with train and test dataset? | How do I push my own dataset to the hub along with the training and test datasets?
```python
train_distiset = pipeline.run(dataset=train_dataset)
test_distiset = pipeline.run(dataset=test_dataset)
```
There is a problem with the code above. | https://github.com/huggingface/open-r1/issues/210 | closed | [] | 2025-02-06T15:28:15Z | 2025-02-08T05:59:13Z | null | JACKYLUO1991 |
huggingface/peft | 2,364 | docs: broken links to boft | ### System Info
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
Snippet:
Take a look at the following step-by-step guides on how to finetune a model with BOFT:
[Dreambooth finetuning with BOFT](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_dreambooth)
[Controllable generation finetuning with BOFT (ControlNet)](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_controlnet)
### Expected behavior
perhaps the links should lead to
https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md
https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md | https://github.com/huggingface/peft/issues/2364 | closed | [] | 2025-02-06T14:48:16Z | 2025-02-07T10:14:44Z | 1 | makelinux |
huggingface/open-r1 | 207 | DeepSeek RL-Zero: How to clone DeepSeek RL-Zero? | How to clone DeepSeek RL-Zero? | https://github.com/huggingface/open-r1/issues/207 | open | [] | 2025-02-06T13:45:33Z | 2025-02-06T13:45:33Z | null | win10ogod |
huggingface/smolagents | 501 | How to run open_deep_research? | How to run open_deep_research? | https://github.com/huggingface/smolagents/issues/501 | closed | [
"bug"
] | 2025-02-05T13:35:52Z | 2025-03-19T07:28:22Z | null | win4r |
huggingface/trl | 2,768 | How to log more metrics with wandb when using GRPO trainer and accelerate | ### Reproduction
```python
def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
    responses = [completion[0]["content"] for completion in completions]
    q = prompts[0][-1]["content"]
    extracted_responses = [extract_xml_answer(r) for r in responses]

    # Get current step from trainer's state
    current_step = trainer.state.global_step if hasattr(trainer, "state") else 0

    # Initialize logger if not already done
    global example_logger
    if not hasattr(correctness_reward_func, "example_logger"):
        example_logger = LocalExampleLogger()
        correctness_reward_func.example_logger = example_logger

    # Log each example
    for i in range(len(responses)):
        example_dict = {
            "step": current_step,
            "question": q,
            "true_answer": answer[i],
            "response": responses[i],
            "extracted_response": extracted_responses[i],
            "correct": extracted_responses[i] == answer[i],
            "generation_idx": i,  # Which generation attempt this was
        }
        example_logger.log_example(example_dict)

    # Calculate marker counts and correctness for all responses
    is_correct = [r == a for r, a in zip(extracted_responses, answer)]
    uncertainty_counts = [count_uncertainty_markers(r) for r in responses]
    internal_dialogue_counts = [count_internal_dialogue_markers(r) for r in responses]
    reflective_counts = [count_reflective_markers(r) for r in responses]

    # Separate counts for correct and incorrect responses
    correct_indices = [i for i, correct in enumerate(is_correct) if correct]
    incorrect_indices = [i for i, correct in enumerate(is_correct) if not correct]

    # Log metrics using trainer's accelerator
    if hasattr(trainer, "accelerator"):
        ### NONE OF THE BELOW ARE LOGGED ON WANDB
        metrics = {
            "correctness/correct_count": len(correct_indices),
            "correctness/total_examples": len(responses),
            "correctness/accuracy": len(correct_indices) / len(responses),
            # Total markers across all responses
            "markers/total/uncertainty": sum(uncertainty_counts),
            "markers/total/internal_dialogue": sum(internal_dialogue_counts),
            "markers/total/reflective": sum(reflective_counts),
            # Markers in correct responses
            "markers/correct/uncertainty": sum(uncertainty_counts[i] for i in correct_indices) if correct_indices else 0,
            "markers/correct/internal_dialogue": sum(internal_dialogue_counts[i] for i in correct_indices) if correct_indices else 0,
            "markers/correct/reflective": sum(reflective_counts[i] for i in correct_indices) if correct_indices else 0,
            # Markers in incorrect responses
            "markers/incorrect/uncertainty": sum(uncertainty_counts[i] for i in incorrect_indices) if incorrect_indices else 0,
            "markers/incorrect/internal_dialogue": sum(internal_dialogue_counts[i] for i in incorrect_indices) if incorrect_indices else 0,
            "markers/incorrect/reflective": sum(reflective_counts[i] for i in incorrect_indices) if incorrect_indices else 0,
        }
        trainer.accelerator.log(metrics, step=current_step)

    return [2.0 if r == a else 0.0 for r, a in zip(extracted_responses, answer)]

.......

model_name = config["model"]["name"]
output_dir = config["training"]["output_dir"]
run_name = config["training"]["run_name"]

training_args = GRPOConfig(
    output_dir=output_dir,
    run_name=run_name,
    learning_rate=config["training"]["learning_rate"],
    adam_beta1=config["training"]["adam_beta1"],
    adam_beta2=config["training"]["adam_beta2"],
    weight_decay=config["training"]["weight_decay"],
    warmup_ratio=config["training"]["warmup_ratio"],
    lr_scheduler_type=config["training"]["lr_scheduler_type"],
    logging_steps=config["training"]["logging_steps"],
    bf16=config["training"]["bf16"],
    per_device_train_batch_size=config["training"]["per_device_train_batch_size"],
    gradient_accumulation_steps=config["training"]["gradient_accumulation_steps"],
    num_generations=config["training"]["num_generations"],
    max_prompt_length=config["training"]["max_prompt_length"],
    max_completion_length=config["training"]["max_completion_length"],
    num_train_epochs=config["training"]["num_train_epochs"],
    save_steps=config["training"]["save_steps"],
    max_grad_norm=config["training"]["max_grad_norm"],
    report_to=["wandb"] if (not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0) else [],
    log_on_each_node=False,  # Only log on main node
    use_vllm | https://github.com/huggingface/trl/issues/2768 | open | [
"✨ enhancement",
"⚡accelerate",
"🏋 GRPO"
] | 2025-02-05T03:59:10Z | 2025-02-05T03:59:54Z | null | andrewsiah |
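Whatever logging call ends up working for the issue above, the correct/incorrect bucketing itself can be factored into a small pure function, which also makes it testable outside the trainer (the function name is made up):

```python
def bucket_marker_counts(is_correct, marker_counts):
    """Split per-response marker counts into totals for correct vs incorrect."""
    correct = sum(c for ok, c in zip(is_correct, marker_counts) if ok)
    incorrect = sum(c for ok, c in zip(is_correct, marker_counts) if not ok)
    return {"total": correct + incorrect, "correct": correct, "incorrect": incorrect}

is_correct = [True, False, True]
uncertainty = [2, 5, 1]
metrics = bucket_marker_counts(is_correct, uncertainty)
```

Keeping the aggregation separate from the reward function also avoids recomputing the `correct_indices` / `incorrect_indices` lists once per marker type.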
huggingface/open-r1 | 183 | How to directly input embeddings into the model? | My data are embeddings of the tokens (i.e., already after tokenization), is there a way of directly inputting the embeddings into the DeepSeek open-r1 model?
For example, when I use the BERT model via Hugging Face, I can simply input the embeddings using the "inputs_embeds" parameter:
```
from transformers import BertModel
bert = BertModel.from_pretrained('bert-base-uncased')
outputs = bert(inputs_embeds = ...)
```
Is there a similar way of doing so with the DeepSeek open-r1 model?
Thank you! | https://github.com/huggingface/open-r1/issues/183 | open | [] | 2025-02-04T21:10:13Z | 2025-02-04T21:10:13Z | null | CCCC1800 |
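On the question above: decoder-only models in `transformers` (including the Qwen and Llama architectures used by the distilled R1 checkpoints) also accept the same `inputs_embeds` keyword as BERT; it simply bypasses the model's embedding lookup. A torch-only sketch of what that keyword means:

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=100, embedding_dim=8)
input_ids = torch.tensor([[4, 7, 42]])

# What the model does internally when you pass input_ids ...
from_ids = embedding(input_ids)

# ... versus handing it the same precomputed vectors via inputs_embeds.
inputs_embeds = embedding.weight[input_ids]
```

So `model(inputs_embeds=...)` should work here too, with the caveat that generation from embeddings alone is more limited than from token ids.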
huggingface/open-r1 | 180 | How to launch GRPO with vLLM on multi-node slurm? | How to write an sbatch script to run GRPO with vLLM on multiple nodes? What should `--num_processes` be? Is [GRPOTrainer](https://github.com/huggingface/trl/blob/1f344c9377d87cd348d92b78f27afea8e66563d7/trl/trainer/grpo_trainer.py#L288-L298) compatible with multi-node training? | https://github.com/huggingface/open-r1/issues/180 | open | [] | 2025-02-04T16:58:50Z | 2025-03-14T15:55:18Z | null | pbelevich |
huggingface/lerobot | 678 | The inverse kinematic solution code of so-100 | Is there any inverse kinematics code for the SO-100 that just needs the x, y position on my desk as input, so it can move to the target coordinate?
Thanks for any response. | https://github.com/huggingface/lerobot/issues/678 | open | [
"question",
"robots"
] | 2025-02-04T03:58:17Z | 2025-10-15T16:55:01Z | null | gxy-1111 |
huggingface/diffusers | 10,710 | Is DDUF format supported? | I checked this PR, https://github.com/huggingface/diffusers/pull/10037 and it is merged
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", torch_dtype=torch.bfloat16
)
image = pipe(
"photo a cat holding a sign that says Diffusers", num_inference_steps=50, guidance_scale=3.5
).images[0]
image.save("cat.png")
```
```
(venv) C:\aiOWN\diffuser_webui>python FLUX_DDUF.py
Fetching 1 files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Loading state_dict: 100%|███████████████████████████████████████████| 2/2 [00:32<00:00, 16.05s/it]
Loading pipeline components...: 29%|████████▊ | 2/7 [00:34<01:10, 14.12s/it]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 57%|█████████████████▋ | 4/7 [00:34<00:26, 8.73s/it]
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\FLUX_DDUF.py", line 4, in <module>
pipe = DiffusionPipeline.from_pretrained(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 951, in from_pretrained
loaded_sub_model = load_sub_model(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\pipelines\pipeline_loading_utils.py", line 742, in load_sub_model
loaded_sub_model = load_method(name, **loading_kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 931, in from_pretrained
model_file = _merge_sharded_checkpoints(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\models\model_loading_utils.py", line 365, in _merge_sharded_checkpoints
raise FileNotFoundError(f"Part file {file_name} not found.")
FileNotFoundError: Part file diffusion_pytorch_model-00003-of-00003.safetensors not found.
```
```
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.1
- Accelerate version: 1.4.0.dev0
- PEFT version: 0.14.1.dev0
- Bitsandbytes version: 0.45.1
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4060 Laptop GPU, 8188 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
-
``` | https://github.com/huggingface/diffusers/issues/10710 | closed | [] | 2025-02-03T17:42:37Z | 2025-02-23T17:56:26Z | 4 | nitinmukesh |
huggingface/trl | 2,754 | How to do multi-node training for GRPO with DeepSpeed + vLLM? | ### Multi-Node Request
I am interested in doing multi-node (4 x 8 GPUs) reinforcement fine-tuning of 8B (or 14B) models using GRPO. However, given that at least 1 GPU needs to be assigned to vLLM, I am not sure how exactly to run the multi-node setup. Would it be possible for you to share a simple set of scripts (config files and main .py file) with which I can test locally?
### Possible to give more GPUs to vLLM?
Also, in case of multi-node training, would it be better to assign more GPUs to vLLM for faster (distributed) generation? Currently if I pass “cuda:6,7”, it throws an error saying it expected a base-10 single-digit number.
| https://github.com/huggingface/trl/issues/2754 | closed | [
"🚀 deepspeed",
"🏋 GRPO"
] | 2025-02-03T16:03:23Z | 2025-03-22T12:51:19Z | null | nikhilchandak |
huggingface/lerobot | 673 | configure_motor.py says it's increasing the max acceleration of feetech motors, but is decreasing it | I built my SO ARM 100s before reading the huggingface instructions, so I am trying to retroactively set up the servos properly. I looked into configure_motor.py to see what it was doing so I could configure it manually, and I noticed that for Feetech motors it sets Maximum_Acceleration to 254 to "speedup acceleration and deceleration of the motors". I read that value from all of the servos in both arms, and the setting I was shipped with is 306, which, I assume, means faster acceleration and deceleration than 254. | https://github.com/huggingface/lerobot/issues/673 | closed | [
"question",
"robots"
] | 2025-02-01T18:46:30Z | 2025-04-07T15:52:20Z | null | jbrownkramer |
huggingface/lerobot | 672 | Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm | # Issue: Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm
## Description
In my build of the SO-100 arm, the follower arm exhibits an issue where the motor labeled **'elbow flex'** is restricted to a movement range of approximately **90 degrees from the rest position**.
## Steps Taken to Troubleshoot
I have attempted the following troubleshooting steps:
- **Checked the servo separately**: The servo itself functions correctly and can move the full 360-degree range without issues.
- **Tested manual movement**: Manually tested the servo under normal teleoperation conditions with the weight of the arm.
- **Re-calibrated multiple times**: Repeated calibration to see if the issue persists.
- **Modified calibration JSON manually**: Editing the JSON file generated after calibration had no effect. The **homing_offset** field is the only one that causes any noticeable changes, but it only shifts the relative position of the follower to the leader, which is not a viable solution.
- **Swapped servos**: Replaced the servo with a new one to rule out hardware failure, but the issue remains.
## Expected Behavior
The **'elbow flex'** motor should be able to move the full intended range, similar to the leader arm, without being restricted to 90 degrees.
## Actual Behavior
The motor is constrained to only about **90 degrees of movement** from its rest position, despite the servo itself being capable of full rotation.
## Additional Notes
- The issue seems to persist despite changes in hardware and re-calibration.
- There may be an issue with how the calibration data is applied or interpreted.
- Any insights into possible firmware, software, or mechanical constraints would be appreciated.
---
Would appreciate any help or guidance on resolving this issue!
| https://github.com/huggingface/lerobot/issues/672 | closed | [
"question",
"robots",
"stale"
] | 2025-02-01T15:01:59Z | 2025-10-20T02:31:48Z | null | ParzivalExtrimis |
huggingface/sentence-transformers | 3,207 | How to increase batch size by using multiple gpus? | Hello! My fine-tuned model needs a large batch size to get the best performance. I have multiple GPUs with 40 GB VRAM each. How can I use them together to enlarge the batch size? Currently I can only set the batch size to 3 per GPU, and it seems they don't share the data. How can I make the total batch size 24? | https://github.com/huggingface/sentence-transformers/issues/3207 | open | [] | 2025-01-31T18:00:08Z | 2025-02-19T10:36:28Z | null | 13918763630 |
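Two notes on the question above: launching the v3 trainer with `torchrun` gives data-parallel training (effective batch = per-device batch × number of GPUs), but with in-batch-negatives losses each GPU still only contrasts against its own examples — which is also why plain gradient accumulation does not emulate a large contrastive batch, and why `CachedMultipleNegativesRankingLoss` (GradCache) is the usual route instead. The torch sketch below shows why splitting the batch changes an in-batch-negatives loss (a toy similarity matrix stands in for the real model):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
scores = torch.randn(4, 4)   # toy query-passage similarity matrix
labels = torch.arange(4)     # the matching passage sits on the diagonal

# Loss with all 4 in-batch negatives visible at once.
full_loss = F.cross_entropy(scores, labels, reduction="sum")

# "Accumulated" loss over two half-batches: cross-chunk negatives vanish,
# so this is a strictly easier (smaller) objective, not the same one.
split_loss = sum(
    F.cross_entropy(scores[i:i + 2, i:i + 2], torch.arange(2), reduction="sum")
    for i in (0, 2)
)
```

The cached loss avoids this by replaying the full batch's similarity matrix in memory-sized chunks, so the gradient matches the large-batch objective exactly.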
huggingface/optimum | 2,174 | Support for ONNX export of SeamlessM4TModel | ### Feature request
Add SeamlessM4Tv2 Model support to onnx_export_from_model.
### Motivation
Being able to deploy SeamlessM4Tv2 models to production using onnx.
### Your contribution
I got the speech-to-text model to ONNX, but I'm not able to generate the audio as expected, even though I'm trying to give the tgt_lang_token_ids as decoder_input_ids. I could help by submitting a PR, but I might start creating the model_config/model_patcher first if it is needed.
EDIT: I got the speech-to-text model, not the speech-to-speech model. I'd like to export the t2u_model and the vocoder to ONNX, but it seems that is giving problems; any advice on how to do it? | https://github.com/huggingface/optimum/issues/2174 | closed | [
"Stale"
] | 2025-01-30T15:10:31Z | 2025-03-18T02:07:02Z | 3 | AlArgente |
huggingface/diffusers | 10,683 | Would anyone consider a diffusers export_to_frames utility function? | **Is your feature request related to a problem? Please describe.**
The current `export_to_video` function in Hugging Face's Diffusers library exports a compressed video, but it's not straightforward for users to obtain raw, lossless PNG frames from a list of frames. This can be a problem for users who need to work with individual frames or want to export them in a specific format as part of a workflow.
**Describe the solution you'd like.**
I propose introducing a new function, `export_to_frames`, in `huggingface/diffusers/utils/export_utils.py`. This function would take a list of frames (either NumPy arrays or PIL Image objects) and export each frame as a separate PNG file in a specified output directory. The function would also allow users to specify the frame rate and output directory.
**Describe alternatives you've considered.**
While users can currently solve this problem on their own by using other libraries or writing custom code, it would be beneficial to provide a simple and standard method for exporting raw, uncompressed PNG frames. This would save users time and effort, and make the Diffusers library more user-friendly.
**Additional context.**
I've included a very rough example implementation of the proposed `export_to_frames` function below:
```python
import os
from typing import List, Union

import imageio
import numpy as np
import PIL.Image


def export_to_frames(
    video_frames: Union[List[np.ndarray], List[PIL.Image.Image]],
    output_dir: str = "frames",
    fps: int = 10,  # currently unused; kept for symmetry with export_to_video
) -> str:
    """
    Export each frame in a list of frames to a directory as lossless PNGs.

    Args:
        video_frames (Union[List[np.ndarray], List[PIL.Image.Image]]): A list of frames.
        output_dir (str, optional): The directory where the frames will be saved. Defaults to "frames".
        fps (int, optional): The frame rate. Defaults to 10.

    Returns:
        str: The path to the output directory.
    """
    # Note: unlike export_to_video, writing PNGs with imageio does not require
    # an ffmpeg backend, so no imageio-ffmpeg check is needed here.
    os.makedirs(output_dir, exist_ok=True)

    if isinstance(video_frames[0], np.ndarray):
        # Assume float frames in [0, 1]; convert to uint8 in [0, 255].
        video_frames = [(frame * 255).astype(np.uint8) for frame in video_frames]

    for i, frame in enumerate(video_frames):
        filename = os.path.join(output_dir, f"frame_{i:04d}.png")
        if isinstance(frame, np.ndarray):
            imageio.imwrite(filename, frame)
        else:  # PIL.Image.Image
            frame.save(filename)

    return output_dir
```
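For illustration, here is a condensed, self-contained variant of the same idea together with a usage example on dummy frames. It assumes only `numpy` and `Pillow` are available, and the directory layout is hypothetical:

```python
import os
import tempfile

import numpy as np
from PIL import Image


def export_to_frames(video_frames, output_dir):
    # Condensed variant of the proposal above: one lossless PNG per frame.
    os.makedirs(output_dir, exist_ok=True)
    for i, frame in enumerate(video_frames):
        if isinstance(frame, np.ndarray):
            frame = Image.fromarray((frame * 255).astype(np.uint8))
        frame.save(os.path.join(output_dir, f"frame_{i:04d}.png"))
    return output_dir


# Dummy float frames in [0, 1], the shape a pipeline typically returns:
frames = [np.random.rand(16, 16, 3) for _ in range(3)]
out_dir = export_to_frames(frames, os.path.join(tempfile.mkdtemp(), "frames"))
print(sorted(os.listdir(out_dir)))
# ['frame_0000.png', 'frame_0001.png', 'frame_0002.png']
```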
This rough function was tested briefly but should be rewritten; I'm just using it for illustrative purposes since it worked. Please let me know if this idea is worth considering further, and if we could proceed with something like this in the standard utilities in future? | https://github.com/huggingface/diffusers/issues/10683 | open | [
"stale"
] | 2025-01-29T17:30:21Z | 2025-03-26T15:04:10Z | 4 | lovetillion |
huggingface/transformers.js | 1,174 | How to create a new onnx TTS model like mms-tts-eng | ### Question
First of all, congratulations on such a great library!
I would like to ask for your guidance and assistance in creating a new onnx model similar to the following one:
https://huggingface.co/Xenova/mms-tts-eng/tree/main
…but for the Malagasy language:
https://huggingface.co/facebook/mms-tts-mlg
Could you provide me with some advice on how to create that model?
Thank you so much. | https://github.com/huggingface/transformers.js/issues/1174 | closed | [
"question"
] | 2025-01-29T16:02:13Z | 2025-02-05T12:48:57Z | null | elloza |
huggingface/open-r1 | 113 | What is the GPU resource required to run Open-R1 (Deepseek-R1) locally? | I am trying to run it using Ollama with Open WebUI in a Docker container; does it require a dedicated GPU with high VRAM, or will an integrated GPU do?
Which model (8 billion, 9 billion, or 12 billion parameters) can be run with how much GPU VRAM? | https://github.com/huggingface/open-r1/issues/113 | open | [] | 2025-01-29T14:08:47Z | 2025-01-29T21:17:17Z | null | ruidazeng |
huggingface/open-r1 | 100 | What is the compute needed for GRPO for 7B R1-Distill model? | Anybody who has tried GRPO over any of the R1-Distill models: what is the minimum GPU compute requirement to run the training?
Let's say for R1-Distill-Qwen-7B ?
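For a rough sense of scale, a common back-of-the-envelope estimate for full-parameter training memory (weights + gradients + Adam states) is sketched below. It ignores activations, the GRPO reference model, and framework overhead, so treat it as a lower bound rather than a measured number:

```python
# Rule-of-thumb memory for full fine-tuning with Adam in mixed precision.
# This ignores activations and the GRPO reference model, so it is a lower bound.
n_params = 7e9  # 7B model

bytes_per_param = (
    2    # bf16 weights
    + 2  # bf16 gradients
    + 4  # fp32 master copy of weights
    + 8  # Adam first and second moments in fp32
)

total_gb = n_params * bytes_per_param / 1e9
print(f"~{total_gb:.0f} GB before activations")  # ~112 GB
```

This is why sharding across GPUs (as in the `configs/zero3.yaml` used by the command from the README) or parameter-efficient methods like LoRA are typically needed to fit a 7B model on common accelerators.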
I am talking about this from the README:
### GRPO
```
accelerate launch --config_file configs/zero3.yaml src/open_r1/grpo.py \
--output_dir DeepSeek-R1-Distill-Qwen-7B-GRPO \
--model_name_or_path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
--dataset_name AI-MO/NuminaMath-TIR \
--max_prompt_length 256 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--logging_steps 10 \
--bf16
``` | https://github.com/huggingface/open-r1/issues/100 | open | [] | 2025-01-29T03:01:03Z | 2025-02-10T09:17:47Z | null | iamansinha |
huggingface/diffusers | 10,677 | Support for training with Grayscale images? | I am trying to train an unconditional diffusion model on grayscale images using your [pipeline](https://huggingface.co/docs/diffusers/training/unconditional_training). When running training with the default parameters I discovered inferred images that contained colour (specifically green). Where it learnt such colours from I do not know but I would predict the issue lies within the initial processing of the image set:
`images = [augmentations(image.convert("RGB")) for image in examples["image"]]`
as such, I created a fork of this [repo](https://github.com/DavidGill159/diffusers/tree/main/examples/unconditional_image_generation) and changed this line to:
`images = [augmentations(image.convert("L")) for image in examples["image"]]`
I also updated the model configuration (UNet2DModel) to work with single-channel inputs and outputs by setting `in_channels=1` and `out_channels=1` when initialising the model.
Am I on the right track, or does the solution lie elsewhere? I also noticed the resolution of the inferred images is very poor, not on par with the training set. What parameters can I adjust to improve this?
**Ultimately I am interested in a diffusion model that focuses more on the textural composition of images, rather than the colour.** | https://github.com/huggingface/diffusers/issues/10677 | open | [
"stale"
] | 2025-01-28T22:25:19Z | 2025-02-28T15:02:57Z | 1 | DavidGill159 |
huggingface/diffusers | 10,675 | Difference in Flux scheduler configuration max_shift | ### Describe the bug
Could you please check if the value of 1.16 here...
https://github.com/huggingface/diffusers/blob/658e24e86c4c52ee14244ab7a7113f5bf353186e/src/diffusers/pipelines/flux/pipeline_flux.py#L78
...is intentional or maybe a typo?
`max_shift` is 1.15 both in the model configuration...
https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/scheduler/scheduler_config.json
...and in the original inference code by BFL:
https://github.com/black-forest-labs/flux/blob/d06f82803f5727a91b0cf93fcbb09d920761fba1/src/flux/sampling.py#L214
### Reproduction
-
### Logs
```shell
```
### System Info
-
### Who can help?
@yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/10675 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-28T20:35:58Z | 2025-02-18T06:54:58Z | 2 | dxqb |
huggingface/transformers.js | 1,171 | Does the image generation model support using LoRA? | ### Question
I would like to add an image generation feature to my website using an image generation model and a LoRA. Is LoRA supported in transformers.js? | https://github.com/huggingface/transformers.js/issues/1171 | open | [
"question"
] | 2025-01-28T19:48:38Z | 2025-02-11T23:11:27Z | null | hunkim98 |
huggingface/diffusers | 10,672 | Please support callback_on_step_end for following pipelines | **Is your feature request related to a problem? Please describe.**
Missing callback_on_step_end in these pipelines takes away the capability to show progress in the UI
**Describe the solution you'd like.**
Please support callback_on_step_end
**Describe alternatives you've considered.**
N.A.
**Additional context.**
1. AuraFlowPipeline
TypeError: AuraFlowPipeline.__call__() got an unexpected keyword argument 'callback_on_step_end'
2. LuminaText2ImgPipeline | https://github.com/huggingface/diffusers/issues/10672 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-28T16:26:56Z | 2025-02-16T17:28:58Z | 2 | nitinmukesh |
huggingface/transformers.js | 1,170 | Processing in image encoding for Florence 2 | ### Question
Hi,
while having a look at the code for generation with the Florence 2 model, I've noticed something weird. The original code for inference uses the [_encode_image](https://huggingface.co/microsoft/Florence-2-base-ft/blob/main/modeling_florence2.py#L2599) method for creating image features. However, looking at the [encode_image](https://github.com/huggingface/transformers.js/blob/main/src/models.js#L1861C1-L1874C6) used in `transformers.js`, I've noticed the postprocessing after the model forward pass is missing. Here's a minimal reproducible example:
```python
import onnxruntime as ort
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image
# The vision encoder was downloaded from:
# https://huggingface.co/onnx-community/Florence-2-base-ft/resolve/main/onnx/vision_encoder.onnx
ONNX_MODEL_PATH = "models/onnx/original/vision_encoder.onnx"
MODEL_NAME = "microsoft/Florence-2-base-ft"
# Image download link:
# https://upload.wikimedia.org/wikipedia/en/7/7d/Lenna_%28test_image%29.png
IMG_PATH = "lena.png"
PROMPT = "<MORE_DETAILED_CAPTION>"
processor = AutoProcessor.from_pretrained(
MODEL_NAME, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME, trust_remote_code=True)
image = Image.open(IMG_PATH)
inputs = processor(text=PROMPT, images=image, return_tensors="pt")
hf_out = model._encode_image(inputs["pixel_values"])
ort_vision_tower = ort.InferenceSession(ONNX_MODEL_PATH)
ort_out = ort_vision_tower.run(
None, {"pixel_values": inputs["pixel_values"].numpy()})[0]
print(hf_out.cpu().detach().numpy())
print()
print(ort_out)
```
The feature differences are pretty big:
```
[[[-0.4047455 0.51958734 -0.23121671 ... 1.0019573 -0.46846968
0.5289913 ]
[-0.08135182 -2.0622678 -0.50597775 ... 0.38061845 -0.7858853
-1.247189 ]
[ 0.69417834 -1.926735 -0.691345 ... -0.17574754 -0.98472327
-1.2420652 ]
...
[ 0.018062 1.2185848 -0.04483193 ... 0.61767036 -0.1832848
0.9324351 ]
[-0.13765828 0.7120823 0.12478658 ... -0.44853052 -0.6390534
0.37095645]
[ 0.58084226 1.6617624 -0.43527135 ... -0.92560166 -0.47037867
-0.81996024]]]
[[[-0.52661824 0.508744 -0.24130312 ... 0.91191643 -0.39472336
1.1632534 ]
[-0.18091503 -2.2187433 -0.7923498 ... 0.6103708 -0.49637306
-0.9830185 ]
[ 0.3002218 -1.9726763 -1.1151179 ... -0.11572987 -0.6870862
-0.96058726]
...
[-0.08202907 0.8105656 -0.1748765 ... 1.0833437 -0.41167092
1.2495995 ]
[-0.01531404 0.6044417 -0.06392197 ... -0.30775025 -0.5735508
0.6775356 ]
[ 0.74322057 1.4011574 -0.5277405 ... -0.61488384 -0.40253094
-0.8440974 ]]]
```
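One self-contained way to quantify differences like these is to compare max absolute difference and cosine similarity. Small stand-in arrays are used here rather than the real model outputs:

```python
import numpy as np

# Stand-ins for the two feature tensors being compared.
rng = np.random.default_rng(0)
hf_out = rng.standard_normal((1, 4, 8)).astype(np.float32)
ort_out = hf_out + 0.1 * rng.standard_normal((1, 4, 8)).astype(np.float32)

max_abs_diff = float(np.abs(hf_out - ort_out).max())

a, b = hf_out.ravel(), ort_out.ravel()
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"max |diff| = {max_abs_diff:.4f}, cosine = {cosine:.4f}")
```

For genuinely equivalent exports one would expect a tiny max absolute difference and cosine very close to 1.0; magnitudes like those printed in the issue suggest a missing processing step rather than numerical noise.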
Am I missing something here or is this a potential bug? | https://github.com/huggingface/transformers.js/issues/1170 | closed | [
"question"
] | 2025-01-27T16:13:28Z | 2025-03-02T14:37:52Z | null | ir2718 |
huggingface/text-generation-inference | 2,956 | How to give custom model code for TGI to run. | Is there a way to give custom model inference code for TGI to run during invocation? | https://github.com/huggingface/text-generation-inference/issues/2956 | open | [] | 2025-01-27T10:37:55Z | 2025-01-27T10:37:55Z | null | ashwani-bhat |
huggingface/diffusers | 10,662 | Feature Request: Image-to-Image Fine-Tuning Example | Hello, and thank you for maintaining this amazing repository!
While working with the Diffusers library, I noticed there is a folder containing fine-tuning examples for text-to-image models but not for image-to-image fine-tuning.
Since image-to-image models have many use cases (e.g., style transfer, image restoration, or domain-specific adaptation), a fine-tuning example for this task would greatly benefit the community and improve accessibility for users looking to customize such models.
Questions:
* Is there any existing implementation or documentation for fine-tuning image-to-image models that I might have missed?
* If not, is there a specific reason this example hasn't been provided yet (e.g., complexity, low demand)?
I'd be happy to contribute or collaborate on this feature if it's considered valuable.
Thank you in advance for your time and response! | https://github.com/huggingface/diffusers/issues/10662 | closed | [] | 2025-01-27T08:33:39Z | 2025-02-07T08:27:44Z | 6 | YanivDorGalron |
huggingface/finetrainers | 248 | How to load full finetune for inference? | ### Feature request / 功能建议

### Motivation / 动机
It seems there is only a LoRA inference example in README.md.
### Your contribution / 您的贡献
Test the full finetune (LTX-Video, CogVideoX) | https://github.com/huggingface/finetrainers/issues/248 | closed | [] | 2025-01-27T03:49:57Z | 2025-01-27T06:27:18Z | null | BlackTea-c |
huggingface/Google-Cloud-Containers | 143 | Route to /generate and /metrics | Hello team, thanks for supporting :)
Inside the https://github.com/huggingface/text-generation-inference/blob/main/router/src/server.rs file,
there is a route definition for Google Cloud, as below.
```rust
#[cfg(feature = "google")]
{
    tracing::info!("Built with `google` feature");
    tracing::info!(
        "Environment variables `AIP_PREDICT_ROUTE` and `AIP_HEALTH_ROUTE` will be respected."
    );
    if let Ok(env_predict_route) = std::env::var("AIP_PREDICT_ROUTE") {
        app = app.route(&env_predict_route, post(vertex_compatibility));
    }
    if let Ok(env_health_route) = std::env::var("AIP_HEALTH_ROUTE") {
        app = app.route(&env_health_route, get(health));
    }
}
```
Currently, there is no way to access /generate through Vertex AI, because if we define AIP_PREDICT_ROUTE outside the container, it creates a new path for prediction.
The problem is that new features like JSON generation (https://huggingface.co/docs/text-generation-inference/en/guidance) are only supported through the /generate path.
Can we change this pattern so that if AIP_PREDICT_ROUTE or AIP_HEALTH_ROUTE points to an existing path, nothing is done?
Then we can route the default VAI predict path to /generate and also expose the /metrics path through the VAI health path. | https://github.com/huggingface/Google-Cloud-Containers/issues/143 | closed | [
"question"
] | 2025-01-27T02:02:28Z | 2025-01-31T11:44:05Z | null | jk1333 |
huggingface/optimum | 2,171 | Adding Phi3 support in BetterTransformer (to use the microsoft/phi-4 model) | ### Feature request
Hello,
Is it possible to add the phi3 architecture to BetterTransformer supported models?
### Motivation
Nan
### Your contribution
Nan | https://github.com/huggingface/optimum/issues/2171 | closed | [
"Stale"
] | 2025-01-26T19:10:34Z | 2025-03-04T02:05:22Z | 2 | majdabd |
huggingface/transformers.js | 1,167 | How to create and use a customized voice in a tts pipeline? | ### Question
Hi transformers.js community!
I am new here and I'd like to ask how to create a new voice and use it inside the current TTS pipeline. I just created a Next.js project and I can run the text-to-speech model from the tutorial, like the following code:
```
const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false });
const speaker_embeddings = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speaker_embeddings.bin';
const out = await synthesizer('Hello, my dog is cute', { speaker_embeddings });
```
Now I want to create a new voice and use it in the pipeline; how should I do this? Can I do it in the same environment? (The speaker creation and speech generation are both processed in the Next.js web app.) I have searched the web but there are no tutorials or demos on this. Looking forward to the answers!
Best! | https://github.com/huggingface/transformers.js/issues/1167 | open | [
"question"
] | 2025-01-26T17:44:57Z | 2025-02-11T02:55:40Z | null | gonggqing |
huggingface/open-r1 | 56 | How to supervise non-math data? | I see that the accuracy reward can only check numerical equality. But what if my question is an MCQ asking for an option?
I did a quick check and found it's not working.
```
from math_verify import parse, verify
# Parse the gold and answer
# If you know that gold will only contain latex or expr (no latex env), use
# parse(gold, extraction_config=[LatexExtractionConfig()]) or parse(gold, extraction_config=[ExprExtractionConfig()])
gold = parse("So the answer is B")
answer = parse("B")
print(gold)
print(answer)
# Order here is important!
print(verify(gold, answer))
[]
[]
False
``` | https://github.com/huggingface/open-r1/issues/56 | open | [] | 2025-01-26T14:30:13Z | 2025-01-26T17:52:58Z | null | Luodian |
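On the MCQ check above: since `math_verify` targets mathematical expressions, a plain string/regex reward is a common workaround for option-letter answers. The sketch below is illustrative only and not part of open-r1's API:

```python
import re


def mcq_reward(completion: str, gold: str) -> float:
    """Reward 1.0 if the last standalone option letter matches the gold option."""
    letters = re.findall(r"\b([A-E])\b", completion.upper())
    return 1.0 if letters and letters[-1] == gold.strip().upper() else 0.0


print(mcq_reward("So the answer is B", "B"))  # 1.0
print(mcq_reward("I would pick C.", "B"))     # 0.0
```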
huggingface/diffusers | 10,655 | How to use custom dataset in train_dreambooth_flux.py. | Hi. What if I want to train two images with two different prompts? Something like m1.jpeg, m1.txt; m2.jpeg, m2.txt.
The default example only shows all images sharing one instance prompt. Thanks for the help! | https://github.com/huggingface/diffusers/issues/10655 | closed | [] | 2025-01-26T11:53:01Z | 2025-01-27T19:43:55Z | null | rooooc |
huggingface/open-r1 | 46 | how to train on MultiNode MultiGPU | https://github.com/huggingface/open-r1/issues/46 | open | [] | 2025-01-26T04:57:11Z | 2025-02-19T14:00:44Z | null | yuepengs | |
huggingface/transformers.js | 1,166 | Why isn't transformers using filesystem API instead of Cache API? | ### Question
I find the cache API quite limiting when it comes to user experience. I am curious why transformers.js is not utilizing the filesystem API. Is there any practical difficulty with it?
| https://github.com/huggingface/transformers.js/issues/1166 | open | [
"question"
] | 2025-01-25T14:12:38Z | 2025-02-08T12:09:16Z | null | Nithur-M |
huggingface/open-r1 | 23 | How to contribute | Hello there 👋!
Replicating all parts of DeepSeek's R1 pipeline is going to take a community effort, especially with dataset curation and creation. If you would like to contribute, please explore the issues linked below. | https://github.com/huggingface/open-r1/issues/23 | open | [] | 2025-01-25T13:55:31Z | 2025-05-06T13:32:10Z | null | lewtun |
huggingface/trl | 2,642 | How to stop `SFTTrainer` from auto-tokenizing my messages? | I want to tokenize my text in a custom way in a custom data collator, but for some reason I don't know, the data keeps being auto-tokenized.
I passed `processing_class=None` to stop this but nothing changed; how can I stop the auto-tokenization process? | https://github.com/huggingface/trl/issues/2642 | closed | [
"❓ question",
"🏋 SFT"
] | 2025-01-24T02:58:26Z | 2025-02-18T18:59:42Z | null | MohamedAliRashad |
huggingface/diffusers | 10,637 | Issues with FlowMatchEulerDiscreteScheduler.set_timesteps() | ### Describe the bug
Why does `num_inference_steps` have the default `None`? It's not an `Optional`. It cannot be `None`. This leads to weird error messages if you skip this parameter.
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L239
`sigmas` is undocumented:
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L241
`mu` is undocumented, even though it can be a required parameter (depending on configuration):
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L242
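The typing complaint can be illustrated in isolation. This is a generic sketch, not diffusers' actual code:

```python
from typing import Optional


# Misleading: the annotation promises `int`, but the default is `None`,
# so skipping the argument produces a confusing downstream error.
def set_timesteps_bad(num_inference_steps: int = None):
    return num_inference_steps + 1  # TypeError if the caller omitted it


# Clearer alternatives: make it truly optional with an explicit check ...
def set_timesteps_ok(num_inference_steps: Optional[int] = None):
    if num_inference_steps is None:
        raise ValueError("`num_inference_steps` must be provided")
    return num_inference_steps + 1


# ... or simply make it required.
def set_timesteps_required(num_inference_steps: int):
    return num_inference_steps + 1


print(set_timesteps_ok(5))  # 6
```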
### Reproduction
see above
### Logs
```shell
```
### System Info
HEAD
### Who can help?
@yiyixuxu | https://github.com/huggingface/diffusers/issues/10637 | closed | [
"bug"
] | 2025-01-23T20:22:51Z | 2025-02-16T15:29:08Z | 4 | dxqb |
huggingface/transformers.js | 1,165 | Releasing the Florence 2 ONNX conversion script? | ### Question
Hi,
This might not be the correct place to raise this issue, but I have not found a better option. There have been many requests from people trying to use their tuned Florence 2 models here and in other repos (https://github.com/huggingface/transformers.js/issues/815#issuecomment-2217220254, https://github.com/microsoft/onnxruntime-genai/issues/619, https://github.com/microsoft/onnxruntime/issues/21118, https://huggingface.co/onnx-community/Florence-2-base-ft/discussions/4). @xenova, since you've managed to export these models into ONNX, could you please share the conversion script, even if it's just something experimental?
"question"
] | 2025-01-23T11:35:05Z | 2025-03-31T10:02:53Z | null | ir2718 |
huggingface/transformers | 35,853 | How to load a model directly into the GPU memory? | I have enough GPU memory, but not enough CPU memory. When I use the
"from_pretrained" function, the program gets killed due to insufficient memory. | https://github.com/huggingface/transformers/issues/35853 | closed | [] | 2025-01-23T09:47:04Z | 2025-01-23T15:19:01Z | null | LiBai531 |
huggingface/nanotron | 273 | What is the purpose of "task" | What is the purpose of the "tasks" argument in this line?
https://github.com/huggingface/nanotron/blob/9055c664c28a3b430b4e53bfcb5a074068c90f2a/tools/preprocess_data.py#L102C9-L102C28
Thanks | https://github.com/huggingface/nanotron/issues/273 | open | [] | 2025-01-23T09:44:35Z | 2025-02-07T17:09:12Z | null | laiviet |
huggingface/transformers.js | 1,164 | `onnxruntime-node` uncompressed too large for NextJS 15 API routes | ### Question
Hello! I'm trying to deploy `xenova/bge-small-en-v1.5` locally to embed text in a Next 15 API route, but I'm encountering this error with the route's unzipped max size exceeding 250 MB. Wanted to check in to see if there's some error on my side? Doesn't seem like `onnxruntime-node` should be ~720 MB uncompressed by itself? Thanks!

`generateEmbeddingV2()` below is called within the API route.
```typescript
import {
FeatureExtractionPipeline,
layer_norm,
pipeline,
PreTrainedTokenizer,
env,
} from '@huggingface/transformers'
const MAX_TOKENS = 512
const MATRYOSHKA_DIM = 768
let cachedExtractor: FeatureExtractionPipeline | null = null
const getExtractor = async () => {
if (!cachedExtractor) {
cachedExtractor = await pipeline(
'feature-extraction',
'xenova/bge-small-en-v1.5',
{ dtype: 'fp16' }
)
}
return cachedExtractor
}
const chunkText = (text: string, tokenizer: PreTrainedTokenizer) => {
const tokens = tokenizer.encode(text)
const chunks = []
for (let i = 0; i < tokens.length; i += MAX_TOKENS) {
const chunk = tokens.slice(i, i + MAX_TOKENS)
chunks.push(chunk)
}
return chunks.map((chunk) => tokenizer.decode(chunk))
}
export const generateEmbeddingV2 = async (value: string) => {
const extractor = await getExtractor()
const chunks = chunkText(value, extractor.tokenizer)
  let embedding = await extractor(chunks[0], { pooling: 'mean' })
embedding = layer_norm(embedding, [embedding.dims[1]])
.slice(null, [0, MATRYOSHKA_DIM])
.normalize(2, -1)
return embedding.tolist()[0]
}
```
I also tried downloading the model file locally, but that didn't work in deployment either. | https://github.com/huggingface/transformers.js/issues/1164 | open | [
"question"
] | 2025-01-23T03:28:16Z | 2025-10-22T20:42:41Z | null | raymondhechen |
huggingface/smolagents | 322 | How to capture CodeAgent's full thinking including the code, not just the final response into a variable | When we run a CodeAgent in a notebook, it prints the question/task, the LLM model used, the code (Executing this code, Execution logs) and the Final answer.
The return value from agent.run contains only the final response.
I'm working on some demos for which I wanted to run a number of tasks, capture all the output (not just the final answer) and write it to an md or html file, so that I can show everything including the code generated by the agent without running the agents live in the demo.
I tried logging, stdout, from contextlib import redirect_stdout, etc but couldn't capture the full output to a variable.
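For reference, the standard `redirect_stdout` pattern does capture anything printed via `sys.stdout`, shown here with a stand-in function instead of a real agent. If this pattern fails with smolagents, the agent is likely writing through a different channel (for example, a rich console bound directly to the terminal); that last part is an assumption worth verifying.

```python
import io
from contextlib import redirect_stdout


def noisy_task():
    # Stand-in for agent.run(...): prints intermediate steps, returns the answer.
    print("Executing this code ...")
    print("Final answer: 42")
    return "42"


buf = io.StringIO()
with redirect_stdout(buf):
    result = noisy_task()

captured = buf.getvalue()
print(repr(result))          # '42'
print(captured.splitlines())  # ['Executing this code ...', 'Final answer: 42']
```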
Thanks,
| https://github.com/huggingface/smolagents/issues/322 | open | [] | 2025-01-23T02:50:34Z | 2025-01-23T13:17:49Z | null | KannamSridharKumar |
huggingface/smolagents | 312 | how to exec a bin and use the output as agent arg ? | hi
A simple exec tool such as exec(path, [args]) should be in the examples.
Then an agent call such as "use exec(/bin/ls, /bin), put the result in a SQL DB (as bin-name) for later use, and tell me how many of them are scripts, while using sbx -z on each non-script"
as a short example | https://github.com/huggingface/smolagents/issues/312 | open | [] | 2025-01-22T12:55:22Z | 2025-01-22T12:55:22Z | null | malv-c |
huggingface/datatrove | 326 | How to choose the best timeout value in extractors? | Hi,
I do not know how to choose the best timeout threshold for running an extractor. Shouldn't this threshold be hardware-aware? | https://github.com/huggingface/datatrove/issues/326 | open | [] | 2025-01-22T03:14:58Z | 2025-02-10T09:53:03Z | null | jordane95 |
huggingface/datasets | 7,377 | Support for sparse arrays with the Arrow Sparse Tensor format? | ### Feature request
AI in biology is becoming a big thing. One thing that Hugging Face Datasets doesn't currently have, and that would be a huge benefit to the field, is native support for **sparse arrays**.
Arrow has support for sparse tensors.
https://arrow.apache.org/docs/format/Other.html#sparse-tensor
It would be a big deal if Hugging Face Datasets supported sparse tensors as a feature type, natively.
### Motivation
This is important, for example, in the field of transcriptomics (modeling and understanding gene expression), because a large fraction of the genes are not expressed (zero). More generally, in science, sparse arrays are very common, so adding support for them would be very beneficial, as it would make using Hugging Face Dataset objects a lot more straightforward and clean.
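For intuition, the structure Arrow's sparse tensor formats store is essentially coordinates plus values. Below is a tiny self-contained COO round-trip with NumPy; it is illustrative only and not the datasets API:

```python
import numpy as np

# A mostly-zero "gene expression" matrix: 4 cells x 5 genes.
dense = np.zeros((4, 5), dtype=np.float32)
dense[0, 1] = 2.5
dense[3, 4] = 7.0

# COO representation: indices of the nonzeros plus their values.
indices = np.argwhere(dense != 0)   # shape (nnz, 2)
values = dense[dense != 0]          # shape (nnz,)

# Round-trip back to dense.
rebuilt = np.zeros_like(dense)
rebuilt[tuple(indices.T)] = values

print(indices.tolist(), values.tolist())  # [[0, 1], [3, 4]] [2.5, 7.0]
print(np.array_equal(dense, rebuilt))     # True
```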
### Your contribution
We can discuss this further once the team comments on what they think about the feature, whether there were previous attempts at making it work, and their evaluation of how hard it would be. My intuition is that it should be fairly straightforward, as the Arrow backend already supports it.
"enhancement"
] | 2025-01-21T20:14:35Z | 2025-01-30T14:06:45Z | 1 | JulesGM |
huggingface/peft | 2,339 | Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named \u0027peft.utils.config\u0027" error | ### System Info
Hello,
I'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.
This new version does not support the 0.4.0 version of peft, so we have upgraded to 0.14.0 and moved to a compatible diffusers version. The sagemaker endpoint deploys correctly with these new versions, but once it's run, we receive the following error:
`No module named \u0027peft.utils.config\u0027`
I dug around and found that there's no usage of peft.utils.config in our inference code. The only usage I could find is here, in the peft code itself: https://github.com/huggingface/peft/blob/main/src/peft/config.py. However, in this code, it looks like utils.config does not exist at all.
Here's what I'm currently using:
diffusers==0.32.2
peft==0.14.0
Is the peft library somehow breaking itself by looking for a peft.utils.config that doesn't exist? Have I missed a step that would create the utils.config file? Or is there another hidden dependency using peft.utils.config?
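As a quick diagnostic, one can check whether a dotted import path resolves in the active environment without importing any user code. This is demonstrated with a stdlib package below; in the failing environment one would pass "peft.utils.config" instead:

```python
import importlib.util


def module_exists(dotted_path: str) -> bool:
    """True if `dotted_path` resolves to an importable module, else False."""
    try:
        return importlib.util.find_spec(dotted_path) is not None
    except ModuleNotFoundError:
        # Raised when a *parent* package in the path is missing entirely.
        return False


print(module_exists("json"))          # True
print(module_exists("json.missing"))  # False
```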
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Create a sagemaker endpoint using the new `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` huggingface DLC image.
Use a requirements.txt that looks like the following:
diffusers==0.32.2
peft==0.14.0
Observe that all requests to the sagemaker endpoint respond with 500 errors.
### Expected behavior
The Sagemaker endpoint should continue to process requests as it did before the version upgrade (using peft 0.4.0) | https://github.com/huggingface/peft/issues/2339 | closed | [] | 2025-01-21T20:00:07Z | 2025-03-02T15:03:46Z | 2 | incchar |
huggingface/smolagents | 298 | How to pass images as input to CodeAgent? | Hello,
I want to pass an input image along with the prompt to `CodeAgent.run`. I see that there is an `additional_args` argument but when I pass the image as `{"image": "path/to/image.png"}`, the agent ends up loading the image via pytesseract to read the contents of the image instead of passing it to OpenAI/Anthropic directly. Is there any way that I can ensure that the image is passed along with the prompt so that the model can infer information from it instead of using external libraries to load the image when using the LiteLLM integration?
My code for reference:
```
agent = CodeAgent(
tools=[],
model=LiteLLMModel(
model_id="openai/gpt-4o",
api_key=os.environ.get('OPENAI_API_KEY'),
temperature=1,
top_p=0.95,
),
add_base_tools=True,
additional_authorized_imports=["sqlite3", "csv", "json", "os", "datetime", "requests", "pandas", "numpy", "sys"],
max_steps=10,
)
agent.run(prompt, additional_args={"image": "path/to/image.png"})
``` | https://github.com/huggingface/smolagents/issues/298 | closed | [] | 2025-01-21T17:14:27Z | 2025-02-18T18:41:27Z | null | DarshanDeshpande |
huggingface/lerobot | 650 | use a camera | can I use a camera to collect and train? | https://github.com/huggingface/lerobot/issues/650 | closed | [
"question"
] | 2025-01-21T10:35:02Z | 2025-04-07T15:53:26Z | null | lwx2024 |
huggingface/transformers | 35,807 | How to change data | https://huggingface.co/facebook/rag-token-nq
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")

generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
# should give michael phelps => sounds reasonable
```
My attempts: https://github.com/kim90000/Attempts-with-facebook-rag-token-nq/blob/main/README.md | https://github.com/huggingface/transformers/issues/35807 | closed | [] | 2025-01-21T06:17:09Z | 2025-02-28T08:03:38Z | null | kim90000 |
huggingface/accelerate | 3,356 | how to config accelerate on 2 mac machines | https://huggingface.co/docs/accelerate/usage_guides/distributed_inference
I use `accelerate config`, and when I run the model, it blocks and then raises an error saying it cannot connect to the IP and port.
Who can help me? | https://github.com/huggingface/accelerate/issues/3356 | closed | [] | 2025-01-20T11:35:35Z | 2025-02-25T02:20:41Z | null | hsoftxl |
huggingface/transformers.js | 1,160 | How to use sentence-transformers/static-similarity-mrl-multilingual-v1 model? | ### Question
If I try to use `sentence-transformers/static-similarity-mrl-multilingual-v1` it fails on `tokenizer.json` not found. Is it possible to somehow convert the model to use it? The ONNX runtime is already there. | https://github.com/huggingface/transformers.js/issues/1160 | open | [
"question"
] | 2025-01-19T15:09:18Z | 2025-01-19T17:27:49Z | null | michalkvasnicak |
huggingface/diffusers | 10,606 | pred_original_sample in FlowMatchEulerDiscreteScheduler | Will pred_original_sample be supported in FlowMatchEulerDiscreteScheduler? How can I get the predicted x_0? | https://github.com/huggingface/diffusers/issues/10606 | closed | [] | 2025-01-19T10:02:22Z | 2025-02-14T12:21:33Z | 2 | haofanwang |
huggingface/transformers.js | 1,157 | When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress? | ### Question
When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress?
```bash
npm i kokoro-js
```
```typescript
import { KokoroTTS } from "kokoro-js";

const model_id = "onnx-community/Kokoro-82M-ONNX";
const tts = await KokoroTTS.from_pretrained(model_id, {
dtype: "q8", // Options: "fp32", "fp16", "q8", "q4", "q4f16"
});
const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text, {
// Use `tts.list_voices()` to list all available voices
voice: "af_bella",
});
audio.save("audio.wav");
```
| https://github.com/huggingface/transformers.js/issues/1157 | closed | [
"question"
] | 2025-01-18T03:36:28Z | 2025-10-13T04:46:59Z | null | emojiiii |
huggingface/transformers.js | 1,154 | Text generation pipeline memory spike | ### Question
## Description
The text generation pipeline has a memory spike at the start of every generation request from the instance, which settles down after a few seconds. We tested this in an environment with less VRAM and system memory, and it failed to generate anything because of this issue. It also generates a nonsensical bunch of tokens if we pass a long context.
### Screenshots

- Input messages
```
[{
role: "system",
content: "You are a highly skilled meeting summarizer. Your role is to create comprehensive, well-organized summaries
of meetings that capture all essential information while maintaining clarity and accessibility. Follow these
guidelines to generate thorough meeting summaries:
STRUCTURE AND ORGANIZATION:
1. Meeting Metadata
- Date and time of the meeting
- Duration
- Meeting type/purpose
- Attendees (with roles if specified)
- Location/platform used
2. Executive Summary
- Brief 2-3 sentence overview capturing the meeting's main purpose and key outcomes
- Highlight critical decisions or major announcements
3. Detailed Discussion Points
- Organize by agenda items or natural topic transitions
- Maintain chronological order within each topic
- Include for each discussion point:
* Context and background information
* Key arguments or perspectives shared
* Questions raised and answers provided
* Concerns or challenges mentioned
* Solutions proposed
* Related sub-topics that emerged
4. Decisions and Action Items
- Document all decisions made, including:
* The final decision
* Key factors that influenced the decision
* Any dissenting opinions or concerns noted
- For each action item, specify:
* The assigned owner/responsible party
* Specific deliverables or expected outcomes
* Deadlines or timeframes
* Dependencies or prerequisites
* Resources needed or allocated
5. Follow-up Items
- Topics deferred to future meetings
- Scheduled follow-up discussions
- Required approvals or reviews
- Outstanding questions requiring research
IMPORTANT GUIDELINES:
Language and Tone:
- Use clear, professional language
- Maintain objectivity in describing discussions
- Avoid editorializing or interpreting beyond stated information
- Use active voice for clarity and direct attribution
- Include relevant direct quotes when they capture important points precisely
Detail Preservation:
- Capture nuanced discussions, not just high-level points
- Document both majority and minority viewpoints
- Include context for technical terms or project-specific references
- Note any significant non-verbal elements (demonstrations, whiteboard sessions, etc.)
- Preserve the rationale behind decisions, not just the outcomes
Organization Principles:
- Use consistent formatting for similar types of information
- Create clear hierarchical relationships between main topics and subtopics
- Use bullet points and subpoints for complex items
- Include cross-references when topics are interrelated
- Maintain clear distinction between facts, opinions, and decisions
Quality Checks:
- Ensure all agenda items are addressed
- Verify all action items have clear owners and deadlines
- Confirm all decisions are documented with their context
- Check that all participant contributions are fairly represented
- Validate that no discussion points are orphaned or incomplete
FORMAT SPECIFICATIONS:
# Meeting Summary: Meeting Title
## Meeting Details
- Date: Date
- Time: Start Time - End Time
- Location: Location/Platform
- Duration: Duration
- Meeting Type: Type/Purpose
### Attendees
- Name (Role) - Meeting Lead
- Names and roles of other attendees
## Executive Summary
2-3 sentences capturing key outcomes and major decisions
## Key Decisions
1. Decision 1
- Context: Brief context
- Outcome: Final decision
- Rationale: Key factors
2. Decision 2
Same structure as above
## Discussion Topics
### 1. Topic 1
#### Background
Context and background information
#### Key Points Discussed
- Main point 1
* Supporting detail
* Supporting detail
- Main point 2
* Supporting detail
* Supporting detail
#### Outcomes
- Specific outcome or conclusion
- Any decisions made
### 2. Topic 2
Same structure as Topic 1
## Action Items
1. Action Item 1
- Owner: Name
- Deadline: Date
- Deliverable: Specific expected outcome
- Dependencies: Any prerequisites
2. Action Item 2
Same structure as above
## Follow-up Items
- Deferred topic 1
- Scheduled follow-up 1
- Outstanding question 1
## Additional Notes
Any important information that doesn't fit in the above categories
FINAL VERIFICATION CHECKLIST:
1. All agenda items addressed
2. All decisions documented with context
3. All action items have owners and deadlines
4. All participant contributions included
5. All technical terms explained
6. All follow-up items clearly spe | https://github.com/huggingface/transformers.js/issues/1154 | open | [
"question"
] | 2025-01-17T06:30:06Z | 2025-02-07T03:18:49Z | null | ashen007 |
huggingface/datasets | 7,372 | Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets | ### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_dataset("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 1350 samples.
- `test` has 150 samples.
#### Code 2: Using `load_from_disk`
```python
from datasets import Dataset, load_from_disk
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_from_disk("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 450 samples.
- `test` has 50 samples.
### Expected Behavior
I expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:
- `load_dataset` seems to merge all shards, resulting in a combined dataset.
- `load_from_disk` only loads the last saved dataset, ignoring previous shards.
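A toy stdlib-only model of one plausible explanation (this is an assumption about internals, for intuition only: the second `save_to_disk` overwrites the `state.json` manifest but leaves the first save's differently-named shard files on disk, so a manifest-based loader and a directory-globbing loader then disagree):

```python
import glob
import json
import os
import tempfile

def save(dirname, shard_names):
    """Toy stand-in for save_to_disk: write shard files plus a state.json manifest."""
    os.makedirs(dirname, exist_ok=True)
    for name in shard_names:
        open(os.path.join(dirname, name), "w").close()
    with open(os.path.join(dirname, "state.json"), "w") as f:
        json.dump({"files": shard_names}, f)

def load_from_manifest(dirname):
    """Toy stand-in for load_from_disk: trust only the manifest."""
    with open(os.path.join(dirname, "state.json")) as f:
        return json.load(f)["files"]

def load_by_glob(dirname):
    """Toy stand-in for load_dataset: pick up every data file in the folder."""
    return sorted(os.path.basename(p) for p in glob.glob(os.path.join(dirname, "*.arrow")))

d = tempfile.mkdtemp()
save(d, ["data-00000-of-00002.arrow", "data-00001-of-00002.arrow"])  # first save
save(d, ["data-00000-of-00001.arrow"])                               # second save
print(load_from_manifest(d))  # only the second save's shard
print(load_by_glob(d))        # all three shard files, i.e. a "merged" dataset
```

Under that assumption, the merged counts (900 + 450 train, 100 + 50 test) match what `load_dataset` reports.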
### Questions
1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?
2. If this is not intentional, could this be considered a bug?
3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?
Thank you for your time and effort in maintaining this great library! I look forward to your feedback. | https://github.com/huggingface/datasets/issues/7372 | open | [] | 2025-01-16T05:47:20Z | 2025-01-16T05:47:20Z | 0 | gaohongkui |
huggingface/safetensors | 561 | Feature Request: Support for Ellipsis (...) in Indexing | ### Feature request
Thank you very much for your effort in maintaining this great project!
I’m writing to request support for ellipsis (...) in `safetensors.safe_open` indexing. This would enhance usability and align safetensors’ API more closely with the standard Python indexing conventions used in NumPy and PyTorch.
### Motivation
## What Does Ellipsis (...) Do?
The ellipsis (...) is a shorthand in Python indexing that simplifies working with multi-dimensional arrays. It allows users to skip explicitly specifying a subset of dimensions, particularly when dealing with high-dimensional data. For example:
```python
tensor[..., 0:100, 0:100]
```
This indicates that all dimensions up to the last two should be included in their entirety. The `...` is dynamically replaced by as many colons (: or slice(None)) as needed to account for the unspecified dimensions.
### Your contribution
I can do a PR if it is considered relevant.
## Workaround
A class that deals with the key can be used to transform the key into a slice object, which is supported by safetensors.
```python
from typing import Union, Tuple, Any
from itertools import islice
class SliceTransformer:
__slots__ = ('ndim',) # Optimize memory usage
def __init__(self, ndim: int):
if not isinstance(ndim, int) or ndim < 1:
raise ValueError("ndim must be a positive integer")
self.ndim = ndim
def transform(self, key: Union[slice, int, Tuple[Any, ...], Any]) -> Tuple[slice, ...]:
# Handle single key case without tuple conversion
if isinstance(key, (slice, int)):
result = [slice(key, key + 1) if isinstance(key, int) else key]
result.extend(slice(None) for _ in range(self.ndim - 1))
return tuple(result)
if not isinstance(key, tuple):
raise TypeError(f"Unsupported key type: {type(key)}")
# Pre-allocate result list with known size
result = []
result_append = result.append # Local reference for faster access
# Fast path for common case (no ellipsis)
if Ellipsis not in key:
for item in islice(key, self.ndim):
result_append(slice(item, item + 1) if isinstance(item, int) else item)
result.extend(slice(None) for _ in range(self.ndim - len(result)))
return tuple(result[:self.ndim])
# Handle ellipsis case
ellipsis_idx = key.index(Ellipsis)
remaining_dims = self.ndim - (len(key) - 1)
# Pre-ellipsis items
for item in islice(key, ellipsis_idx):
result_append(slice(item, item + 1) if isinstance(item, int) else item)
# Fill ellipsis slots
result.extend(slice(None) for _ in range(remaining_dims))
# Post-ellipsis items
for item in islice(key, ellipsis_idx + 1, None):
if item is Ellipsis:
raise ValueError("Multiple ellipsis found in key")
result_append(slice(item, item + 1) if isinstance(item, int) else item)
if len(result) != self.ndim:
raise ValueError(f"Key length {len(result)} does not match ndim {self.ndim}")
return tuple(result)
def __getitem__(self, key):
return self.transform(key)
import numpy as np
import safetensors
import safetensors.numpy
toy_data = np.random.rand(3, 5, 7, 128, 128)
safetensors.numpy.save_file({"data": toy_data}, "model.safetensors")
# Will not work
with safetensors.safe_open("model.safetensors", "np") as tensor:
tensor.get_slice("data")[..., 0:100, 0:200]
# Will work
with safetensors.safe_open("model.safetensors", "np") as tensor:
tensor_slice = tensor.get_slice("data")
tensor_shape = tensor_slice.get_shape()
new_keys = SliceTransformer(ndim=len(tensor_shape))[..., 0:100, 0:100]
tensor_slice[new_keys]
``` | https://github.com/huggingface/safetensors/issues/561 | open | [] | 2025-01-14T05:13:54Z | 2025-01-14T05:13:54Z | 0 | csaybar |
huggingface/diffusers | 10,566 | Unnecessary operations in `CogVideoXTransformer3DModel.forward()`? | ### Describe the bug
Here are few rows of codes in `CogVideoXTransformer3DModel.forward()` :
```py
# 3. Transformer blocks
...
if not self.config.use_rotary_positional_embeddings:
# CogVideoX-2B
hidden_states = self.norm_final(hidden_states)
else:
# CogVideoX-5B
hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)
hidden_states = self.norm_final(hidden_states)
hidden_states = hidden_states[:, text_seq_length:]
# 4. Final block
...
```
where `self.norm_final` is a `LayerNorm` defined by:
```py
self.norm_final = nn.LayerNorm(inner_dim, norm_eps, norm_elementwise_affine)
```
Since the `normalized_shape` of `self.norm_final` is 1-dimension which means only the last dimension will be normalized, it seems that **the "cat -> layernorm -> slice" logic on the 2nd dimension in CogVideoX-5B branch is unnecessary because it does the same thing with**
```py
hidden_states = self.norm_final(hidden_states)
```
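A quick stdlib-only sanity check of this equivalence, with toy nested lists standing in for tensors (it ignores the optional elementwise affine, which is also applied per last-dimension vector and so doesn't change the argument):

```python
# Sanity check: with LayerNorm over the last dimension only,
# "concat -> norm -> slice" along the sequence dimension equals
# normalizing the slice directly.
def layer_norm_last(rows, eps=1e-6):
    out = []
    for row in rows:
        mean = sum(row) / len(row)
        var = sum((v - mean) ** 2 for v in row) / len(row)
        out.append([(v - mean) / (var + eps) ** 0.5 for v in row])
    return out

enc = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # "encoder_hidden_states" tokens
hid = [[7.0, 8.5, 9.0]]                   # "hidden_states" tokens

via_cat = layer_norm_last(enc + hid)[len(enc):]  # CogVideoX-5B branch
direct  = layer_norm_last(hid)                   # CogVideoX-2B branch
assert via_cat == direct
```

Both branches produce identical values for the sliced rows because each last-dimension vector is normalized independently.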
This code was introduced in [PR#9203](https://github.com/huggingface/diffusers/pull/9203/files#diff-6e4d5c6638b71b7a0e7de21357c5b55ffd5ff6373dd1ced70070650855830173R469). @zRzRzRzRzRzRzR @yiyixuxu, could you possibly walk me through why these changes were necessary? Thanks a lot for your help!
### Reproduction
.
### Logs
```shell
```
### System Info
.
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/10566 | closed | [
"bug",
"stale"
] | 2025-01-14T04:01:20Z | 2025-02-13T22:11:26Z | 2 | townwish4git |
huggingface/diffusers | 10,565 | Different generation with `Diffusers` in I2V tasks for LTX-video | ### Describe the bug
Hello, I encountered an issue with the generation when attempting the I2V task using `Diffusers`. Is there any difference between the `diffusers` implementation and the `LTX-video-inference scripts` in the I2V task?
- The above is the result from `inference.py`, and the following are the results generated with `diffusers`.
- Prompts: `a person`
https://github.com/user-attachments/assets/6e2aeeaf-c52b-402c-ae92-aff2d325464b
https://github.com/user-attachments/assets/59f815ad-1746-4ec5-ae1c-a47dcfa0fd02
https://github.com/user-attachments/assets/8ca3c79b-8003-4fa2-82b1-8ae17beccb9c
- test img

Besides, it seems that the text prompt has a significant impact on the I2V generation with 'diffusers'. Could I be missing any important arguments?
https://huggingface.co/docs/diffusers/api/pipelines/ltx_video
- results
https://github.com/user-attachments/assets/c062c21f-5611-4860-ba17-441dd26a8913
https://github.com/user-attachments/assets/991ec853-ee26-43a7-914b-622d115a9b7f
https://github.com/user-attachments/assets/ff3e7f04-c17d-4f0a-9aba-2db68aae792d
https://github.com/user-attachments/assets/f2699759-c36e-4839-bddd-37b84a85e2c7
### Reproduction
- for LTX-video generation
https://github.com/Lightricks/LTX-Video/blob/main/inference.py
```
python inference.py \
--ckpt_path ./pretrained_models/LTX-Video \
--output_path './samples' \
--prompt "A person." \
--input_image_path ./samples/test_cases.png \
--height 512 \
--width 512 \
--num_frames 49 \
--seed 42
```
- for diffusers generation: it seems that the negative prompts are causing the issues. However, even when I remove them, the results are still not satisfactory.
```
import argparse
import torch
from diffusers import LTXVideoTransformer3DModel
from diffusers import LTXImageToVideoPipeline
from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKLLTXVideo
from diffusers.utils import export_to_video, load_image, load_video
from moviepy import VideoFileClip, AudioFileClip
import numpy as np
from pathlib import Path
import os
import imageio
from einops import rearrange
from PIL import Image
import random
def seed_everething(seed: int):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
def generate_video(args):
pipe = LTXImageToVideoPipeline.from_pretrained(args.ltx_model_path, torch_dtype=torch.bfloat16)
pipe.to("cuda")
    image = load_image(args.validation_image)
    prompt = "A person."
    negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
generator = torch.Generator(
device="cuda" if torch.cuda.is_available() else "cpu"
).manual_seed(42)
video = pipe(
image=image,
prompt=prompt,
guidance_scale=3,
# stg_scale=1,
generator=generator,
callback_on_step_end=None,
negative_prompt=negative_prompt,
width=512,
height=512,
num_frames=49,
num_inference_steps=50,
decode_timestep=0.05,
decode_noise_scale=0.025,
).frames[0]
export_to_video(video, args.output_file, fps=24)
```
- for demo images with difference text prompts
https://huggingface.co/docs/diffusers/api/pipelines/ltx_video
```
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
pipe = LTXImageToVideoPipeline.from_pretrained("./pretrained_models/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = load_image("samples/image.png")
prompt = "A young girl stands."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
video = pipe(
image=image,
prompt=prompt,
negative_prompt=negative_prompt,
width=704,
height=480,
num_frames=161,
num_inference_steps=50,
).frames[0]
modified_prompt = "-".join(prompt.split()[:14])
export_to_video(video, f"samples/test_out/demo-{modified_prompt}.mp4", fps=24)
```
### Logs
```shell
```
### System Info
torch 2.4.1
torchao 0.7.0
torchvision 0.19.1
diffusers 0.32.1
python 3.10
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/10565 | open | [
"bug",
"stale"
] | 2025-01-14T03:24:06Z | 2025-09-09T07:21:31Z | 11 | Kaihui-Cheng |
huggingface/transformers.js | 1,146 | Why does the local models keep downloading everyday? | ### Question
Every day when I come back to chat with the local models via transformers.js, it downloads the models again. Can't I persist the downloaded models so that I can chat with them instantly anytime?
Thank you. | https://github.com/huggingface/transformers.js/issues/1146 | closed | [
"question"
] | 2025-01-14T02:56:34Z | 2025-01-18T15:11:09Z | null | Nithur-M |
huggingface/chat-ui | 1,646 | Inline audio/video in the output | If a model returns a markdown content with an image (``), the chat-ui will display the image inline.
Is there something similar for audio and video? How can a model return audio or video content to the user?
I don't know if this is currently supported or not.
(I'm using the OpenAI endpoint)
btw, thanks a lot for the project, it's very nice!
| https://github.com/huggingface/chat-ui/issues/1646 | open | [
"enhancement"
] | 2025-01-14T01:20:54Z | 2025-02-28T11:32:48Z | 1 | laurentlb |
huggingface/lerobot | 633 | [Question] How to set training to a local dataset? | Is there a way to train on a local dataset without manually adding the `local_files_only` arg to the `make_dataset` function of the train script?
I have set the `LEROBOT_HOME` env variable. | https://github.com/huggingface/lerobot/issues/633 | closed | [
"question",
"dataset"
] | 2025-01-13T15:27:00Z | 2025-10-08T08:37:55Z | null | tlpss |
huggingface/lerobot | 630 | Removing episodes from LeRobotDataset | Hi, thanks for building this. It's great.
Is there a way to easily remove episodes from a dataset? I had a decent amount of diversity in my episodes and wanted to reduce it, so I had to remove ~1/2 of the episodes. Rather than re-recording them, I wanted to remove specified episodes (let's say all even episodes). Is there an easy way to do this? I'd tried just removing them from the `episodes.jsonl` file, but it seemed to load all of the episodes, and deleting unwanted episode videos/data and renaming the files also ran into some issues when loading the datasets. Is there a better way to do this? | https://github.com/huggingface/lerobot/issues/630 | closed | [
"question",
"dataset",
"stale"
] | 2025-01-13T01:22:32Z | 2025-10-17T12:07:56Z | null | andlyu |
huggingface/safetensors | 559 | serialize & deserialize does not work as the documentation specify. | ### System Info
- `transformers` version: 4.42.3
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.5.2
- Accelerate version: 0.27.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3050 Laptop GPU
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Reproduction
Hi,
I’m unsure if this is expected behavior or a bug since it does not align with what the documentation for these functions describes. Below is the code to reproduce the issue:
### Expected behavior
```python
import numpy as np
import safetensors
from safetensors.numpy import save_file, load
# Save as a safetensors file
data_ran_uint16 = np.random.randint(0, 255, (2, 2, 2)).astype(np.uint16)
save_file({"toy": data_ran_uint16}, "toy.safetensors")
# Deserialize the file
with open("toy.safetensors", "rb") as f:
fbytes = safetensors.deserialize(f.read())
# Expected to work
serialized = safetensors.serialize({"toy": fbytes[0][1]})
# Workaround
fbytes[0][1]["data"] = bytes(fbytes[0][1]["data"]) # I had to convert the bytearray to bytes
fbytes[0][1]["dtype"] = "uint16" # I had to change the dtype to uint16
fbytes[0][1]["shape"]
serialized = safetensors.serialize({"toy": fbytes[0][1]})
load(serialized)
``` | https://github.com/huggingface/safetensors/issues/559 | open | [] | 2025-01-12T20:22:57Z | 2025-01-12T20:23:18Z | 0 | csaybar |
huggingface/transformers.js | 1,142 | Make in-browser WebGPU as seamless as in WebLLM | ### Question
Hi there! 👋
I've noticed something interesting about WebGPU support in browsers:
✅ [WebLLM's demo](https://chat.webllm.ai/) detects and uses my GPU automatically
❌ [transformers.js examples](https://huggingface.co/spaces/Xenova/nanollava-1.5-webgpu) fail with:
```Error: no available backend found. ERR: [webgpu] TypeError: e.requestAdapterInfo is not a function```
This ease-of-use difference matters a lot for adoption. I believe reducing friction in GPU setup is crucial for adoption of in-browser ML models - when users need to modify browser settings or follow additional configuration steps, it can significantly impact their willingness to try new applications. WebLLM shows that seamless GPU detection is possible for in-browser ML models.
Environment:
- Chrome 131.0.6778.205
- macOS
Could transformers.js adopt a similar approach to WebLLM for automatic GPU detection? Happy to provide more details if needed!
Best regards | https://github.com/huggingface/transformers.js/issues/1142 | closed | [
"question"
] | 2025-01-12T15:06:17Z | 2025-01-27T11:45:03Z | null | Anna-iroro |
huggingface/peft | 2,322 | model merge and unload feature for AdaLora | ### Feature request
Unlike the LoRA or IA3 adapter types, AdaLoRA does not provide a method to merge adapter weights into the original weights so that the result can be used as a standalone model. I built that feature for a personal use case and want to make a PR so it is accessible to everyone.
### Motivation
This feature lets people easily merge AdaLoRA adapter weights into the original weights, which makes further finetuning possible (e.g. when one wants to resume AdaLoRA training from a checkpoint that was already trained with AdaLoRA, resuming is not possible with unmerged weights).
### Your contribution
I'll submit a PR. I followed the example of IA3 `merge_and_unload`
Following is the overview of change :
```
def _unload_and_optionally_merge(
self,
merge: bool = True,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
eps: float = 1e-5
) -> torch.nn.Module:
"""
This method unloads the AdaLoRA adapter modules and optionally merges them into the base model weights.
Args:
merge (`bool`, defaults to `True`):
If True, merges the adapter weights into base model weights.
If False, it will only unload the adapters without merging.
safe_merge (`bool`, defaults to `False`):
If True, performs the merge operation with extra safety checks.
adapter_names (`List[str]`, *optional*):
The list of adapter names to merge. If None, all active adapters will be merged.
eps (`float`, defaults to 1e-5):
Small constant for numerical stability when dividing by ranknum.
Returns:
model (`torch.nn.Module`):
The resulting PyTorch model.
"""
if getattr(self.model, "is_loaded_in_8bit", False):
raise ValueError("Cannot merge adalora layers when the model is loaded in 8-bit mode")
if getattr(self.model, "is_loaded_in_4bit", False):
raise ValueError("Cannot merge adalora layers when the model is loaded in 4-bit mode")
if adapter_names is not None:
        raise ValueError(f"AdaLoRA does not support merging specific adapters. Got adapter_names={adapter_names}")
# Create a copy of the base model state dict to modify
original_state_dict = self.model.state_dict()
if merge:
for name, module in self.model.named_modules():
if hasattr(module, "base_layer") and hasattr(module, "lora_A"):
# Extract base layer weight name
layer_name = name.replace(".lora_A", "")
layer_name = layer_name.replace("base_model.model.", "")
base_weight_name = f"{layer_name}.weight"
# Get SVD parameters
lora_A = module.lora_A["default"] # [r x d_in]
lora_B = module.lora_B["default"] # [d_out x r]
lora_E = module.lora_E["default"] # [r x 1]
# Calculate active ranks
ranknum = (lora_E != 0).sum()
scaling = module.scaling["default"] if hasattr(module, "scaling") else 16
# Safety check if requested
if safe_merge and (torch.isnan(lora_A).any() or torch.isnan(lora_B).any() or torch.isnan(lora_E).any()):
raise ValueError(f"NaN detected in adapter weights for layer {name}")
# Scale A with E: A' = AE
scaled_A = lora_A * lora_E # [r x d_in]
# Compute update: ΔW = BA'
if ranknum > 0:
update = (lora_B @ scaled_A) * scaling / (ranknum + eps)
else:
update = torch.zeros_like(original_state_dict[base_weight_name])
# Update base weights
if base_weight_name in original_state_dict:
original_state_dict[base_weight_name] += update
# Load the merged state dict back into a clean version of the model
self.model.load_state_dict(original_state_dict)
return self.model
def merge_and_unload(
self,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
eps: float = 1e-5
) -> torch.nn.Module:
"""
Merge the active adapters into the base model and unload the adapters.
Args:
safe_merge (`bool`, defaults to `False`):
If True, performs the merge operation with extra safety checks.
adapter_names (`List[str]`, *optional*):
List of adapter names to merge. If None, merges all active adapters.
eps (`floa | https://github.com/huggingface/peft/issues/2322 | closed | [] | 2025-01-12T09:20:01Z | 2025-01-14T12:47:35Z | 6 | DaehanKim |
huggingface/sentence-transformers | 3,166 | How to report a security issue responsibly? | I have just found a potential security issue in the repo and want to know how I can report it to your team privately, thanks! | https://github.com/huggingface/sentence-transformers/issues/3166 | closed | [] | 2025-01-12T04:24:15Z | 2025-01-12T08:52:43Z | null | zpbrent |
huggingface/datasets | 7,365 | A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas() | ### Describe the bug
I am interested in creating train, test, and eval splits from a pandas DataFrame, so I was looking at the options available. I noticed the `split` parameter and hoped to use it to generate all three at once. However, while trying to understand the code, I noticed that it has no effect (correct me if I am wrong or misunderstood the code).
from_pandas function code :
```python
if info is not None and features is not None and info.features != features:
raise ValueError(
f"Features specified in `features` and `info.features` can't be different:\n{features}\n{info.features}"
)
features = features if features is not None else info.features if info is not None else None
if info is None:
info = DatasetInfo()
info.features = features
table = InMemoryTable.from_pandas(
df=df,
preserve_index=preserve_index,
)
if features is not None:
# more expensive cast than InMemoryTable.from_pandas(..., schema=features.arrow_schema)
# needed to support the str to Audio conversion for instance
table = table.cast(features.arrow_schema)
return cls(table, info=info, split=split)
```
### Steps to reproduce the bug
```python
from datasets import Dataset
# Filling the split parameter with whatever causes no harm at all
data = Dataset.from_pandas(df, split='egiojegoierjgoiejgrefiergiuorenvuirgurthgi')  # df is any pandas DataFrame
```
### Expected behavior
Would be great if there is no split parameter (if it isn't working), or to add a concrete example of how it can be used.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.27.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | https://github.com/huggingface/datasets/issues/7365 | open | [] | 2025-01-10T13:39:33Z | 2025-01-10T13:39:33Z | 0 | NourOM02 |
huggingface/peft | 2,319 | Import error , is it a version issue? | ### System Info
When I execute the finetune.py file, an error occurs: cannot import name 'prepare_model_for_int8_training'. Is it a version issue? My peft version is 0.14.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
cannot import name 'prepare_model_for_int8_training' from 'peft' (/path/python3.10/site-packages/peft/__init__.py)
### Expected behavior
Who can help me answer this question? Thanks. | https://github.com/huggingface/peft/issues/2319 | closed | [] | 2025-01-10T02:34:52Z | 2025-01-13T10:13:18Z | 3 | zhangyangniubi
huggingface/Google-Cloud-Containers | 138 | entrypoint.sh for TGI does not implemented requirements.txt installation process | Hello team,
Like this sample, https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/pytorch/inference/gpu/2.3.1/transformers/4.46.1/py311/entrypoint.sh,
the entrypoint needs a `requirements.txt` provisioning step.
But this TGI sample does not contain that procedure.
https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/tgi/gpu/3.0.1/entrypoint.sh
Is it missing, or is it handled internally by the `text_generation_launcher` process? | https://github.com/huggingface/Google-Cloud-Containers/issues/138 | closed | [
"question"
] | 2025-01-09T08:09:14Z | 2025-01-21T07:44:52Z | null | jk1333 |
huggingface/lerobot | 623 | Why different dimensionality state tensor with n_obs_steps vs not? | Curious about a design decision: why not have ACT take a [batch, n_obs_steps, state_dim] tensor and assert that n_obs_steps == 1, instead of a [batch, state_dim] tensor?
Currently, we have to detect the different dimensionality and handle it when writing policy-agnostic code. | https://github.com/huggingface/lerobot/issues/623 | closed | [
"question",
"policies",
"stale"
] | 2025-01-08T18:16:51Z | 2025-10-19T02:32:27Z | null | genemerewether |
huggingface/diffusers | 10,496 | NF4 quantized flux models with loras | Is there any update here? With NF4-quantized Flux models, I could not use any LoRA.
> **Update**: NF4 serialization and loading are working fine. @DN6 let's brainstorm how we can support it more easily? This would help us unlock doing LoRAs on the quantized weights, too (cc: @BenjaminBossan for PEFT). I think this will become evidently critical for larger models.
>
> `transformers` has a nice reference for us to follow. Additionally, `accelerate` has: https://huggingface.co/docs/accelerate/en/usage_guides/quantization, but it doesn't support NF4 serialization yet.
>
> Cc: @SunMarc for jamming on this together.
>
> _Originally posted by @sayakpaul in https://github.com/huggingface/diffusers/issues/9165#issuecomment-2287694518_
> | https://github.com/huggingface/diffusers/issues/10496 | closed | [] | 2025-01-08T11:41:01Z | 2025-01-13T19:42:03Z | 12 | hamzaakyildiz |
huggingface/diffusers | 10,489 | Bug in SanaPipeline example? | ### Describe the bug
I think there might be something wrong with the `SanaPipeline` example code at https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline
It results in a shape mismatch (see detailed logs below): `mat1 and mat2 shapes cannot be multiplied (600x256000 and 2304x1152)`
I've noticed that the `text_encoder` model looks different depending on the way it is loaded.
* If I **load it with the official example code** (=code in `Reproduction`), `pipeline.text_encoder` looks like this:
```
Gemma2ForCausalLM(
(model): Gemma2Model(
(embed_tokens): Embedding(256000, 2304, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x Gemma2DecoderLayer(
(self_attn): Gemma2Attention(
(q_proj): Linear(in_features=2304, out_features=2048, bias=False)
(k_proj): Linear(in_features=2304, out_features=1024, bias=False)
(v_proj): Linear(in_features=2304, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=2304, bias=False)
(rotary_emb): Gemma2RotaryEmbedding()
)
(mlp): Gemma2MLP(
(gate_proj): Linear(in_features=2304, out_features=9216, bias=False)
(up_proj): Linear(in_features=2304, out_features=9216, bias=False)
(down_proj): Linear(in_features=9216, out_features=2304, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(pre_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
)
)
(norm): Gemma2RMSNorm((2304,), eps=1e-06)
)
(lm_head): Linear(in_features=2304, out_features=256000, bias=False)
)
```
If, however, I **don't load the components separately** but use the code provided by @lawrence-cj [here](https://github.com/huggingface/diffusers/issues/10334#issuecomment-2558359268), it 1) works and 2) the `text_encoder` looks different:
```
Gemma2Model(
(embed_tokens): Embedding(256000, 2304, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x Gemma2DecoderLayer(
(self_attn): Gemma2Attention(
(q_proj): Linear(in_features=2304, out_features=2048, bias=False)
(k_proj): Linear(in_features=2304, out_features=1024, bias=False)
(v_proj): Linear(in_features=2304, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=2304, bias=False)
(rotary_emb): Gemma2RotaryEmbedding()
)
(mlp): Gemma2MLP(
(gate_proj): Linear(in_features=2304, out_features=9216, bias=False)
(up_proj): Linear(in_features=2304, out_features=9216, bias=False)
(down_proj): Linear(in_features=9216, out_features=2304, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(pre_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
)
)
(norm): Gemma2RMSNorm((2304,), eps=1e-06)
)
```
-> the language modeling head `lm_head` is gone. I guess that's all expected(?), but I haven't found any documentation of this behaviour, or of where in the pipeline code this happens.
### Reproduction
```python
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModelForCausalLM
quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModelForCausalLM.from_pretrained(
"Efficient-Large-Model/Sana_600M_1024px_diffusers",
subfolder="text_encoder",
# quantization_config=quant_config,
torch_dtype=torch.float16,
)
quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SanaTransformer2DModel.from_pretrained(
"Efficient-Large-Model/Sana_600M_1024px_diffusers",
subfolder="transformer",
# quantization_config=quant_config,
torch_dtype=torch.float16,
)
pipeline = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_600M_1024px_diffusers",
text_encoder=text_encoder_8bit,
transformer=transformer_8bit,
torch_dtype=torch.float16,
device_map="balanced",
)
prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("sana.png")
```
Loading without `quantization_config` because, for some reason, it does not work on my Mac, but I tried the same code on a 4090 and it fails there too.
### Logs
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[5], line 30
```
| https://github.com/huggingface/diffusers/issues/10489 | closed | [
"bug"
] | 2025-01-07T17:14:27Z | 2025-01-08T05:18:05Z | 2 | geronimi73 |
huggingface/distil-whisper | 164 | How to finetune distil-whisper/distil-large-v2 model? | How to finetune distil-whisper/distil-large-v2 model? | https://github.com/huggingface/distil-whisper/issues/164 | open | [] | 2025-01-07T12:59:42Z | 2025-01-07T13:00:59Z | null | dhattareddy |
huggingface/doc-builder | 539 | How to Deploy huggingface/doc-builder Artifacts to GitHub Pages? | Hi,
I am currently working with the `huggingface/doc-builder` and I'm looking to deploy the generated documentation artifacts to GitHub Pages. Could you provide guidance or best practices on how to achieve this?
Specifically, I am interested in understanding:
1. The steps required to configure the deployment process.
2. Any necessary settings or configurations within GitHub Pages.
3. Common pitfalls or issues to be aware of during deployment.
Thank you for your assistance! | https://github.com/huggingface/doc-builder/issues/539 | open | [] | 2025-01-07T08:37:05Z | 2025-01-07T08:37:05Z | null | shunk031 |
huggingface/peft | 2,310 | Comparison of Different Fine-Tuning Techniques for Conversational AI | ### Feature request
It would be incredibly helpful to have a clear comparison or support for various fine-tuning techniques specifically for conversational AI. This feature could include insights into their strengths, limitations, and ideal use cases, helping practitioners choose the right approach for their needs.
Here’s a list of techniques to consider:
LoRa
AdaLoRa
BONE
VeRa
XLora
LN Tuning
VbLora
HRA (Householder Reflection Adaptation)
IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations)
Llama Adapter
CPT (Context-aware Prompt Tuning), etc.
### Motivation
With the growing number of fine-tuning techniques for conversational AI, it can be challenging to identify the most suitable approach for specific use cases. A comprehensive comparison of these techniques—highlighting their strengths, limitations, and ideal scenarios—would save time, reduce trial-and-error, and empower users to make informed decisions. This feature would bridge the gap between research and practical application, enabling more effective model customization and deployment.
### Your contribution
I’d be happy to collaborate on this! While I might not have a complete solution right now, I’m willing to contribute by gathering resources, reviewing papers, or helping organize comparisons. If others are interested in teaming up, we could work together on a PR to make this feature happen. Let’s connect and brainstorm how we can tackle this effectively! | https://github.com/huggingface/peft/issues/2310 | open | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-07T07:07:50Z | 2025-12-15T09:58:10Z | 44 | ImamaDev |
huggingface/smolagents | 83 | How to save/extract executed code | Is it possible to save the executed code? It's already in the log, so exposing it would be very useful.
ex.
```
╭─ Executing this code: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ 1 attractions_list = [ │
│ 2 ["Attraction", "Description"], │
│ 3 ["Sensoji Temple", "The oldest temple in Tokyo, offering beautiful architecture and a rich history."], │
│ 4 ["Nakamise Shopping Street", "A historic shopping street with souvenirs and traditional snacks."], │
│ 5 ["Kibi Dango", "A traditional rice cake snack available at Nakamise Street."], │
│ 6 ["Asakusa Jinja", "A historic Shinto shrine that survived the bombings during WWII."], │
│ 7 ["Kimono Experience", "Rent a kimono and walk around Asakusa."], │
│ 8 ["Asakusa Culture Tourist Information Center", "A building with unique architecture, great for photos."], │
│ 9 ["Tokyo Skytree", "The tallest structure in Tokyo, offering panoramic views."], │
│ 10 ["Hanayashiki", "Japan’s oldest amusement park with nostalgic charm."], │
│ 11 ["Demboin Garden", "A serene Japanese garden adjacent to Sensoji Temple."], │
│ 12 ["Azuma-bashi Bridge", "An iconic bridge offering views of the Tokyo Skytree."] │
│ 13 ] │
│ 14 │
│ 15 # Convert the list to CSV format (string) │
│ 16 csv_data = "\n".join([",".join(row) for row in attractions_list]) │
│ 17 │
│ 18 # Save the CSV data to file │
│ 19 save_csv(data=csv_data, filename='asakusa_trip.csv') │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ | https://github.com/huggingface/smolagents/issues/83 | closed | [] | 2025-01-06T15:40:17Z | 2025-02-16T17:43:40Z | null | Lodimup |
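Until there is a built-in option, one workaround is to scrape the code back out of the captured log text. A rough sketch; the box-drawing frame and `│ N code │` line shape are assumptions based on the log excerpt above:

```python
import re

def extract_executed_code(log_text):
    """Pull the code lines out of an 'Executing this code:' log box."""
    lines = []
    for line in log_text.splitlines():
        # match lines shaped like: │ 12 some_code... │
        m = re.match(r"^\s*│\s*\d+\s(.*?)\s*│\s*$", line)
        if m:
            lines.append(m.group(1))
    return "\n".join(lines)
```

Feeding it the captured console output and writing the result to a `.py` file recovers a runnable copy of what the agent executed.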
huggingface/diffusers | 10,475 | [SD3] The quality of the images generated at inference is not as high as on the validation set during fine-tuning? | ### Describe the bug
Why is the quality of the images I generate with `StableDiffusion3Pipeline` not as good as that of the validation-set images logged during DreamBooth LoRA fine-tuning?
Do I need some other plugin or parameter setting to match the validation-set image quality?
### Reproduction
```
# Here is my inference code:
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained('./diffusers/stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipe.load_lora_weights("./my_path/pytorch_lora_weights.safetensors", adapter_name="test_lora")
pipe(
"my prompt...",
generator=torch.manual_seed(1),
num_inference_steps=40,
guidance_scale=6
).images[0].save('/root/my_img.png')
```
### Logs
_No response_
### System Info
Diffuser Version: stable-diffusion-3-medium
CUDA Version: 12.4
GPU: NVIDIA A800 80GB
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/10475 | closed | [
"bug",
"stale"
] | 2025-01-06T14:52:57Z | 2025-02-06T12:17:47Z | 8 | ytwo-hub |
huggingface/datasets | 7,356 | How about adding a feature to pass the key when performing map on DatasetDict? | ### Feature request
Add a feature to pass the key of the DatasetDict when performing map
### Motivation
I often preprocess using map on DatasetDict.
Sometimes, I need to preprocess train and valid data differently depending on the task.
So, I thought it would be nice to pass the key (like train, valid) when performing map on DatasetDict.
What do you think?
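Until something like this lands, the usual workaround is to map each split separately and rebuild the dict. The shape of the pattern, shown here with plain dicts of examples standing in for a `DatasetDict`:

```python
def map_with_key(dataset_dict, fn):
    """Apply fn(example, key) per split, like a key-aware DatasetDict.map."""
    return {
        key: [fn(example, key) for example in split]
        for key, split in dataset_dict.items()
    }

# e.g. mask labels only in the validation split
def preprocess(example, key):
    out = dict(example)
    if key == "valid":
        out["label"] = -100
    return out
```

With a real `DatasetDict` the inner list comprehension becomes `split.map(fn, fn_kwargs={"key": key})`, so the proposed feature would mostly save this boilerplate.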
### Your contribution
I can submit a pull request to add the feature to pass the key of the DatasetDict when performing map. | https://github.com/huggingface/datasets/issues/7356 | closed | [
"enhancement"
] | 2025-01-06T08:13:52Z | 2025-03-24T10:57:47Z | null | jp1924 |
huggingface/diffusers | 10,468 | What is accelerate_ds2.yaml? | I can't find an accelerate config file named "accelerate_ds2.yaml".
Could you please share the file?
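For what it's worth, `accelerate_ds2.yaml` is just a user-created `accelerate` config (the name suggests DeepSpeed ZeRO stage 2); you can generate your own with `accelerate config`. A plausible sketch, with field names following `accelerate` conventions and values that are assumptions to adjust for your setup:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  gradient_accumulation_steps: 1
  offload_optimizer_device: none
  offload_param_device: none
  zero_stage: 2
mixed_precision: bf16
num_machines: 1
num_processes: 4
main_training_function: main
use_cpu: false
```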
Thanks very much! | https://github.com/huggingface/diffusers/issues/10468 | closed | [] | 2025-01-06T07:53:06Z | 2025-01-12T05:32:01Z | null | aa327chenge |
huggingface/transformers | 35,523 | How about adding a combined step and epoch feature to save_strategy? | ### Feature request
Add epoch+steps functionality to save_strategy
### Motivation
I often set save_strategy to epoch for saving, but sometimes I need to run experiments with steps.
Recently, I had to compare checkpoints saved at both epoch and step intervals, which required running the experiment twice and was quite cumbersome. Having a combined feature would be really helpful. What do you think?
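The decision rule for a combined strategy is simple enough to sketch as a plain function (hypothetical helper, not the actual Trainer API):

```python
def should_save(global_step, steps_per_epoch, save_steps):
    """Save when either a step interval or an epoch boundary is hit."""
    at_step_interval = save_steps > 0 and global_step % save_steps == 0
    at_epoch_end = global_step % steps_per_epoch == 0
    return global_step > 0 and (at_step_interval or at_epoch_end)
```

In the meantime, the same effect can be had with a custom `TrainerCallback` that sets `control.should_save = True` at epoch boundaries while `save_strategy="steps"` handles the interval saves.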
### Your contribution
I can add the epoch+steps functionality to save_strategy. | https://github.com/huggingface/transformers/issues/35523 | closed | [
"Feature request"
] | 2025-01-06T02:21:22Z | 2025-02-17T00:02:42Z | null | jp1924 |
huggingface/transformers | 35,512 | Perhaps your features (`videos` in this case) have excessive nesting (inputs type `list` where type `int` is expected). | ### System Info
- `transformers` version: 4.46.1
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@ArthurZucker
class BatchEncoding(UserDict):
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
<s> [INST] What are the names of some famous actors that started their careers on Broadway? [/INST] Some famous actors that started their careers on Broadway include:
1. Hugh Jackman
2. Meryl Streep
3. Denzel Washington
4. Julia Roberts
5. Christopher Walken
6. Anthony Rapp
7. Audra McDonald
8. Nathan Lane
9. Sarah Jessica Parker
10. Lin-Manuel Miranda</s>
label_ids:
[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2909, 8376, 16760, 369, 2774, 652, 26072, 356, 24331, 3024, 28747, 28705, 13, 28740, 28723, 22389, 4299, 1294, 28705, 13, 28750, 28723, 351, 1193, 28714, 4589, 615, 28705, 13, 28770, 28723, 4745, 10311, 5924, 28705, 13, 28781, 28723, 19526, 18021, 28705, 13, 28782, 28723, 17561, 9863, 269, 28705, 13, 28784, 28723, 15089, 399, 763, 28705, 13, 28787, 28723, 14421, 520, 25999, 28705, 13, 28783, 28723, 20514, 19029, 28705, 13, 28774, 28723, 12642, 24062, 19673, 28705, 13, 28740, 28734, 28723, 6678, 28733, 2356, 3009, 9154, 5904, 2]
labels:
Some famous actors that started their careers on Broadway include:
1. Hugh Jackman
2. Meryl Streep
3. Denzel Washington
4. Julia Roberts
5. Christopher Walken
```
| https://github.com/huggingface/transformers/issues/35512 | closed | [
"bug"
] | 2025-01-05T06:51:26Z | 2025-02-13T08:45:39Z | null | yxy-kunling |
huggingface/diffusers | 10,452 | pipe.disable_model_cpu_offload | **Is your feature request related to a problem? Please describe.**
If I enable the following in Gradio interface
sana_pipe.enable_model_cpu_offload()
and for the next generation I want to disable CPU offload, how do I do it? I mention Gradio specifically because command-line inference will not have this problem unless, after initializing the pipe, you generate multiple times with and without CPU offload.
I already searched but found nothing:
https://github.com/search?q=repo%3Ahuggingface%2Fdiffusers%20disable_model_cpu_offload&type=code
**Describe the solution you'd like.**
Add methods to disable:
1. enable_model_cpu_offload()
2. enable_sequential_cpu_offload()
**Describe alternatives you've considered.**
I will have to delete the pipe completely and load again for each inference in Gradio UI
Kindly suggest an alternative solution if one exists.
```
import torch
from diffusers import SanaPipeline
pipe = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.float32
)
pipe.to("cuda")
pipe.text_encoder.to(torch.bfloat16)
pipe.transformer = pipe.transformer.to(torch.bfloat16)
pipe.enable_model_cpu_offload()
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0]
image[0].save("output.png")
pipe.disable_model_cpu_offload()
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana 1"')[0]
image[0].save("output1.png")
```
P.S. How can I delete a pipe completely, so that all models are removed and GPU memory is freed?
I did check the documentation but was unable to find anything relevant:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py
https://github.com/huggingface/diffusers/blob/4e44534845d35248436abf87688906f52e71b868/src/diffusers/pipelines/pipeline_utils.py
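For the P.S.: there is no single teardown call; the usual pattern is to drop every Python reference to the pipeline and then reclaim memory. A sketch (the `torch` calls are guarded so the logic stays visible even without a GPU):

```python
import gc

try:
    import torch
except ImportError:  # keep the sketch importable without torch
    torch = None

def free_pipeline(holder, name):
    """Drop a pipeline held in holder[name] and reclaim GPU memory."""
    holder.pop(name, None)           # remove the last reference
    gc.collect()                     # collect the now-unreferenced modules
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()     # return cached blocks to the driver
        torch.cuda.ipc_collect()
    return name not in holder
```

Note that any other variable still pointing at the pipe (for instance a Gradio state object or a global) keeps the weights alive, so make sure nothing else references it before collecting.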
| https://github.com/huggingface/diffusers/issues/10452 | closed | [] | 2025-01-04T16:39:01Z | 2025-01-07T08:29:32Z | 3 | nitinmukesh |
huggingface/diffusers | 10,448 | Load DDUF file with Diffusers using mmap | DDUF support for diffusers is there, and DDUF supports mmap.
But the diffusers example doesn't use or support mmap.
How can I load a DDUF file into diffusers with mmap?
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(
"photo a cat holding a sign that says Diffusers", num_inference_steps=50, guidance_scale=3.5
).images[0]
image.save("cat.png")
``` | https://github.com/huggingface/diffusers/issues/10448 | open | [
"stale"
] | 2025-01-04T00:42:09Z | 2025-02-03T15:02:46Z | 1 | adhikjoshi |
huggingface/lerobot | 613 | Starting off with pretrained models | Are there any pretrained models available that can be fine-tuned using our own dataset for tasks like pick and place and manipulation? | https://github.com/huggingface/lerobot/issues/613 | closed | [
"question",
"stale"
] | 2025-01-03T21:09:40Z | 2025-10-08T20:53:09Z | null | rabhishek100 |
huggingface/optimum | 2,148 | Support for Exporting Specific Sub-Modules (e.g., Encoder, Decoder) | ### Feature request
Currently, when converting transformer models (like T5, but potentially others) to ONNX using the Optimum library, it appears to generate a single ONNX file encompassing the entire model architecture (both encoder and decoder). This occurs regardless of the specific task option selected during conversion.
```
optimum-cli export onnx --model . . --task text-classification
optimum-cli export onnx --model . . --task feature-extraction
```
I propose a feature that provides users with more granular control over the ONNX export process. Specifically, this feature should allow users to selectively export specific sub-modules of a transformer model, such as:
* Only the encoder
* Only the decoder
* Potentially other distinct components of the model
This enhancement would enable users to optimize ONNX models for specific use cases where only a portion of the full model is required.
Evidence of the feasibility and need for this is the existence of separately exported encoder and decoder ONNX models for various transformer architectures on Hugging Face:
- https://huggingface.co/dmmagdal/flan-t5-large-onnx-js/tree/main/onnx
- https://huggingface.co/onnx-community/Florence-2-base-ft/tree/main/onnx
### Motivation
I am encountering a limitation with the current ONNX export functionality in Optimum. When converting transformer models, the resulting ONNX file invariably includes the entire model, even when I only require a specific part, like the encoder.
This is frustrating because:
* **Increased Model Size:** The generated ONNX model is larger than necessary, consuming more storage and potentially impacting loading times.
* **Performance Overhead:** When deploying the ONNX model for tasks that only utilize a specific sub-module (e.g., using only the encoder for embedding generation), the presence of the unnecessary decoder can introduce performance overhead.
* **Lack of Flexibility:** The current approach lacks the flexibility to tailor the exported ONNX model to specific application needs.
As observed on Hugging Face, users have successfully exported individual components (like encoders and decoders) of various transformer models to ONNX. This indicates that it's technically possible and a desirable workflow. The Optimum library should provide a more direct and user-friendly way to achieve this without requiring manual workarounds.
### Your contribution
While my direct expertise in the internal workings of the Optimum library for ONNX export is limited, I am willing to contribute by:
* **Testing:** Thoroughly testing any implementation of this feature on various transformer models.
* **Providing Feedback:** Offering detailed feedback on the usability and effectiveness of the new feature.
* **Sharing Use Cases:** Providing specific use cases and examples that highlight the benefits of this functionality. | https://github.com/huggingface/optimum/issues/2148 | closed | [
"Stale"
] | 2025-01-03T14:48:36Z | 2025-04-08T02:09:03Z | 4 | happyme531 |
huggingface/smolagents | 52 | How to implement human in the loop? | How to implement human in the loop?
There are two scenarios: one where more information and input from the user are required, and another where the user's consent is needed to perform a certain action. | https://github.com/huggingface/smolagents/issues/52 | closed | [] | 2025-01-03T12:19:01Z | 2025-02-18T18:49:15Z | null | waderwu |
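One way to cover both scenarios is a pair of tools the agent can call: one that asks the user for extra input, and one that gates an action behind consent. A minimal framework-agnostic sketch; the `ask` callable is injected so it can be `input` in a terminal or a UI callback, and the names are hypothetical, not smolagents API:

```python
def make_human_tools(ask):
    """Build human-in-the-loop helpers around an ask(prompt) -> str callable."""

    def request_info(question):
        # scenario 1: the agent needs more information from the user
        return ask(f"[agent needs info] {question} ")

    def confirm_action(description):
        # scenario 2: the agent needs consent before acting
        answer = ask(f"[agent wants to] {description} -- proceed? (y/n) ")
        return answer.strip().lower() in {"y", "yes"}

    return request_info, confirm_action
```

In a real agent you would wrap these with the library's tool decorator and pass `ask=input` for terminal use, so the run pauses at each call until the human responds.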
huggingface/lerobot | 611 | Can ACT policy support pushT task? | I want to train the ACT policy with the PushT dataset, but the evaluation accuracy is only 0%.

Here is my yaml
[act_pusht.txt](https://github.com/user-attachments/files/18299197/act_pusht.txt)
And my training command is
```
python lerobot/scripts/train.py \
  hydra.run.dir=outputs/train/2025_1_3_1654_act_pusht \
  hydra.job.name=act_pusht \
  policy=act_pusht \
  policy.use_vae=true \
  env=pusht \
  env.task=PushT-v0 \
  dataset_repo_id=lerobot/pusht \
  training.offline_steps=50000 \
  training.save_freq=25000 \
  training.eval_freq=5000 \
  eval.n_episodes=50 \
  wandb.enable=false \
  device=cuda
```
| https://github.com/huggingface/lerobot/issues/611 | closed | [
"question",
"policies",
"stale"
] | 2025-01-03T11:30:40Z | 2025-10-19T02:32:28Z | null | Kimho666 |
huggingface/optimum | 2,147 | Convert Stable Diffusion Inpainting model to FP16 with FP32 inputs | ### Feature request
I've used [this script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) to convert models to ONNX in FP16 format but maintaining the FP32 inputs. One of the models that I converted was [Stable Diffusion 2 Inpainting](https://huggingface.co/jdp8/sd-2-inpainting-fp16) to FP16 and tried to use it in ONNX Runtime and ONNX Runtime Web but it doesn't give me the expected results in either engine. I also converted [the model](https://huggingface.co/jdp8/optimum-sd-2-inpainting-onnx-fp32) with the Optimum conversion script to FP32 and this model gives me the expected result in ONNX Runtime. Results are shown below:
Input Image:

Mask Image:

Correct Onnx Runtime Output (converted with Optimum script):

Incorrect Onnx Runtime Output (converted with Stable-Diffusion-ONNX-FP16 script):

Incorrect Onnx Runtime Web Output (converted with Stable-Diffusion-ONNX-FP16 script):

I've also used the Optimum conversion script to convert the model to FP16 and this worked but the inputs are expected to be FP16. This datatype does not exist in JavaScript (specifically, `Float16Array`) and therefore cannot be used in ONNX Runtime Web.
With that being said, is it possible to convert a model to FP16 but leaving the inputs as FP32 in order for the UNET to be less than 2 GB?
### Motivation
I would like to run Stable Diffusion Inpainting in ONNX Runtime Web and for the UNET to be less than 2GB. The FP16 model that I have at the moment gives me an output that is not as expected in ONNX Runtime and ONNX Runtime Web. So far, only the Optimum models give me a correct output in ONNX Runtime but I would like to use this in ONNX Runtime Web.
### Your contribution
I am willing to contribute to this change given some guidance. Not sure how difficult it would be but I believe it would be similar to how it's implemented in [the script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) mentioned beforehand. | https://github.com/huggingface/optimum/issues/2147 | closed | [] | 2025-01-02T21:28:43Z | 2025-01-25T00:15:54Z | 0 | jdp8 |
huggingface/diffusers | 10,433 | [Docs] Broken Links in a Section of Documentation | ### Broken Links in a Section of Documentation
>Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
In this section of the docs, the `reuse components across pipelines` link is broken, i.e. not directed to the proper target.
`reuse components across pipelines` should point to the [Reuse a pipeline](https://huggingface.co/docs/diffusers/en/using-diffusers/loading#reuse-a-pipeline) section instead of the [Load pipelines](https://huggingface.co/docs/diffusers/en/using-diffusers/loading#reuse-components-across-pipelines) section in the following files.
<details>
<summary>In these docs</summary>
docs/source/en/api/pipelines/animatediff.md
docs/source/en/api/pipelines/attend_and_excite.md
docs/source/en/api/pipelines/audioldm.md
docs/source/en/api/pipelines/audioldm2.md
docs/source/en/api/pipelines/blip_diffusion.md
docs/source/en/api/pipelines/controlnet.md
docs/source/en/api/pipelines/controlnet_flux.md
docs/source/en/api/pipelines/controlnet_hunyuandit.md
docs/source/en/api/pipelines/controlnet_sd3.md
docs/source/en/api/pipelines/controlnet_sdxl.md
docs/source/en/api/pipelines/controlnetxs.md
docs/source/en/api/pipelines/controlnetxs_sdxl.md
docs/source/en/api/pipelines/dance_diffusion.md
docs/source/en/api/pipelines/ddpm.md
docs/source/en/api/pipelines/dit.md
docs/source/en/api/pipelines/i2vgenxl.md
docs/source/en/api/pipelines/kandinsky.md
docs/source/en/api/pipelines/kandinsky3.md
docs/source/en/api/pipelines/kandinsky_v22.md
docs/source/en/api/pipelines/latent_diffusion.md
docs/source/en/api/pipelines/marigold.md
docs/source/en/api/pipelines/musicldm.md
docs/source/en/api/pipelines/paint_by_example.md
docs/source/en/api/pipelines/panorama.md
docs/source/en/api/pipelines/pix2pix.md
docs/source/en/api/pipelines/self_attention_guidance.md
docs/source/en/api/pipelines/semantic_stable_diffusion.md
docs/source/en/api/pipelines/shap_e.md
docs/source/en/api/pipelines/stable_unclip.md
docs/source/en/api/pipelines/text_to_video.md
docs/source/en/api/pipelines/text_to_video_zero.md
docs/source/en/api/pipelines/unclip.md
docs/source/en/api/pipelines/unidiffuser.md
docs/source/en/api/pipelines/value_guided_sampling.md
</details>
---
Some `reuse components across pipelines` links are broken in the files below.
<details>
<summary>In these docs</summary>
docs/source/en/api/pipelines/allegro.md
docs/source/en/api/pipelines/cogvideox.md
docs/source/en/api/pipelines/latte.md
docs/source/en/api/pipelines/ltx_video.md
docs/source/en/api/pipelines/lumina.md
docs/source/en/api/pipelines/pixart.md
docs/source/en/api/pipelines/sana.md
</details>
---
And `docs/source/en/api/pipelines/hunyuan_video.md` and `docs/source/en/api/pipelines/hunyuandit.md` are not properly formatted.
@stevhliu | https://github.com/huggingface/diffusers/issues/10433 | closed | [] | 2025-01-02T18:24:44Z | 2025-01-06T18:07:39Z | 0 | SahilCarterr |
huggingface/transformers | 35,485 | How to run the model on another machine and send the answer to another machine. | ### System Info
transformers 4.31.0, Windows, Python 3.10.12
### Who can help?
vision models: @amyeroberts, @qubvel
I have tried using this model on my machine myself, and it works normally, but the processing is very slow because the GPU on my machine is not that powerful. However, I have a server with a strong GPU. If I install this model on the server and run the code on my machine, when it reaches the video processing stage, it sends the task to the server, and the server sends back the result. Then my machine will print the answer and display the result. Is this possible? If so, how can I do it?
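Yes, this is possible. The standard pattern is to wrap the model on the GPU server in a small HTTP endpoint and have your machine POST the task and read back JSON. A stdlib-only sketch of the round trip, with a stub standing in for the real GPU model call:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def run_model(payload):
    # on the real server this would run the heavy GPU model on the task
    return {"answer": f"processed {payload['task']}"}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        body = json.dumps(run_model(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def ask_server(url, payload):
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# demo: server and client in one process; in practice they run on two machines
server = ThreadingHTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = ask_server(f"http://127.0.0.1:{server.server_address[1]}", {"task": "video"})
server.shutdown()
```

In practice people use FastAPI or Flask behind a production server, but the flow is the same: the heavy compute stays on the GPU box, and your machine only sends the request and displays the answer.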
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
I expect it to work in a hybrid way between my computer and the server to achieve faster results. | https://github.com/huggingface/transformers/issues/35485 | closed | [
"bug"
] | 2025-01-02T10:03:42Z | 2025-01-07T10:20:46Z | null | ixn3rd3mxn |
huggingface/accelerate | 3,320 | How to save self-defined model with deepspeed zero 3? | ### System Info
```Shell
- `Accelerate` version: 1.0.1
- Python version: 3.10.0
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 128.00 GB
- GPU type: NVIDIA H20
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
My custom model inherits from torch.nn.Module.
I am training this model with 4 H20 GPUs using deepspeed zero 3.
I am trying to save a checkpoint with this code:
```python
# save model
if (idx % args.save_per_steps == 0) and (idx != 0):
    accelerator.wait_for_everyone()
    if accelerator.is_local_main_process:
        accelerator.print('Saving model ...')
        save_dir = os.path.join(args.save_path, args.save_name + '_epoch_' + str(epoch) + '_step_' + str(idx))
        accelerator.print('Getting state dict ...')
        state_dict = accelerator.get_state_dict(model)
        accelerator.print('Unwrapping model ...')
        unwrapped_model = accelerator.unwrap_model(model)
        accelerator.print('Saving checkpoint ...')
        unwrapped_model.save_checkpoint(save_dir, idx, state_dict)
        accelerator.print('Model saved!')
    accelerator.wait_for_everyone()
```
### Expected behavior
The code gets stuck when getting the state dict.
I also tried `accelerator.save_model`, but it didn't work either.
I am wondering: what's the recommended way to save and load a large model trained with DeepSpeed ZeRO-3?
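A likely culprit: under ZeRO-3, gathering the sharded parameters with `accelerator.get_state_dict(model)` is a collective operation, so every rank must call it. When it sits inside an `is_local_main_process` guard, the other ranks never enter the gather and rank 0 blocks forever. The usual fix is to gather on all ranks and guard only the disk write. A sketch of that ordering using a stub accelerator (the real calls would be `accelerator.get_state_dict` and `accelerator.save`):

```python
class StubAccelerator:
    """Mimics just enough of accelerate.Accelerator to show the ordering."""

    def __init__(self, rank, trace):
        self.rank, self.trace = rank, trace

    @property
    def is_main_process(self):
        return self.rank == 0

    def get_state_dict(self, model):
        # collective under ZeRO-3: every rank must reach this line
        self.trace.append(("gather", self.rank))
        return {"model": model}

    def save(self, obj, path):
        # plain file I/O: main process only
        self.trace.append(("save", self.rank, path))

def save_checkpoint(accelerator, model, path):
    state_dict = accelerator.get_state_dict(model)  # ALL ranks, no guard
    if accelerator.is_main_process:                 # guard only the write
        accelerator.save(state_dict, path)

trace = []
for rank in range(4):  # stand-in for 4 GPU processes
    save_checkpoint(StubAccelerator(rank, trace), "my_model", "ckpt.pt")
```

The stub makes the invariant visible: all four ranks enter the gather, and only rank 0 touches the disk.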
Thank you very much. | https://github.com/huggingface/accelerate/issues/3320 | closed | [] | 2025-01-02T08:15:36Z | 2025-02-10T15:07:18Z | null | amoyplane |
huggingface/diffusers | 10,425 | Euler Flow Matching Scheduler Missing Documentation for Parameters | ### Describe the bug
The Euler flow matching scheduler in Hugging Face Diffusers is missing clear documentation for its parameters, making it difficult for users to understand how to configure the scheduler effectively for different use cases.
### Reproduction
Steps to Reproduce:
Visit the Hugging Face Diffusers documentation page and locate the section for the Euler flow matching scheduler.
Try to find documentation for the scheduler’s parameters.
Notice that the documentation does not clearly define the parameters or explain their effects.
### Logs
_No response_
### System Info
Hugging Face Diffusers version: 0.16.1
PyTorch version: 2.1.0
CUDA version: 11.8
CPU: Intel Core i7-12700K
GPU: NVIDIA RTX 3090
### Who can help?
@sayakpaul @DN6 | https://github.com/huggingface/diffusers/issues/10425 | closed | [
"bug"
] | 2025-01-02T01:37:38Z | 2025-01-02T01:38:38Z | 0 | hanshengzhu0001 |