repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 ⌀ | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 10,004 | How to use kohya sd-scripts Flux LoRAs with text encoder keys in diffusers? | The LoRA weights produced when "train text encoder" is set to true are incompatible with diffusers' load_lora_weights. The script networks/convert_flux_lora.py does not convert the text encoder keys either. | https://github.com/huggingface/diffusers/issues/10004 | open | [
"contributions-welcome"
] | 2024-11-23T20:54:30Z | 2025-03-16T15:39:25Z | null | neuron-party |
huggingface/transformers.js | 1,050 | How to lengthen the Whisper max audio length? | ### Question
I'm working from the [webgpu-whisper](https://github.com/huggingface/transformers.js/tree/main/examples/webgpu-whisper) demo, and I'm having a hard time lengthening the maximum audio input allowed. I made the following changes:
```js
-const MAX_AUDIO_LENGTH = 30; // seconds
+const MAX_AUDIO_LENGTH = 120; // seconds
-const MAX_NEW_TOKENS = 64;
+const MAX_NEW_TOKENS = 624;
```
This seems to allow for longer input, but after 30 seconds I get the following error:
```
Attempting to extract features for audio longer than 30 seconds. If using a pipeline to extract transcript from a long audio clip, remember to specify `chunk_length_s` and/or `stride_length_s`.
```
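Based on the pipeline docs, I expected to be able to pass the chunking options at call time, something like this (a sketch, assuming the demo's `transcriber` accepts the same options as the pipeline API; `audio` stands for the demo's audio buffer):
```js
// Hypothetical call-site change -- option names come from the pipeline docs,
// not from the webgpu-whisper demo itself.
const output = await transcriber(audio, {
  chunk_length_s: 30,  // process the audio in 30-second windows
  stride_length_s: 5,  // overlap windows by 5 seconds to avoid cutting words
});
```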
I can't seem to find where to add [stride_length_s](https://huggingface.co/docs/transformers.js/main/en/api/pipelines#pipelinesautomaticspeechrecognitionpipelinetype--code-promise--automaticspeechrecognitionoutputarray--automaticspeechrecognitionoutput----code) in the demo code, however. Could someone point me in the right direction? | https://github.com/huggingface/transformers.js/issues/1050 | closed | [
"question"
] | 2024-11-22T17:50:50Z | 2024-11-26T03:59:03Z | null | stinoga |
huggingface/diffusers | 9,996 | Flux.1 cannot load standard transformer in nf4 | ### Describe the bug
Loading different Flux transformer models works fine, except for NF4.
It works for the 1% of fine-tunes provided on Hugging Face, but it doesn't work for the 99% of standard fine-tunes available on CivitAI.
example of such model: <https://civitai.com/models/118111?modelVersionId=1009051>
*Note:* I'm using `FluxTransformer2DModel` directly as it's the easiest for reproduction, plus the majority of Flux fine-tunes are provided as transformer-only, not full models. But where a full model does exist, it's exactly the same problem using `FluxPipeline`.
### Reproduction
```py
import torch
import bitsandbytes as bnb
import diffusers
print(f'torch=={torch.__version__} diffusers=={diffusers.__version__} bnb=={bnb.__version__}')
kwargs = { 'low_cpu_mem_usage': True, 'torch_dtype': torch.bfloat16, 'cache_dir': '/mnt/models/huggingface' }
files = [
    'flux-c4pacitor_v2alpha-f1s-bf16.safetensors',
    'flux-iniverse_v2-f1d-fp8.safetensors',
    'flux-copax_timeless_xplus_mix2-nf4.safetensors',
]
for f in files:
    print(f)
    try:
        transformer = diffusers.FluxTransformer2DModel.from_single_file(f, **kwargs)
        print(transformer.__class__)
    except Exception as e:
        print(e)
        transformer = None
    torch.cuda.empty_cache()
```
### Logs
```shell
in `diffusers/loaders/single_file_utils.py:convert_flux_transformer_checkpoint_to_diffusers`
q, k, v, mlp = torch.split(checkpoint.pop(f"single_blocks.{i}.linear1.weight"), split_size, dim=0)
> RuntimeError: split_with_sizes expects split_sizes to sum exactly to 33030144 (input tensor's size at dimension 0), but got split_sizes=[3072, 3072, 3072, 12288]
```
### System Info
torch==2.5.1+cu124 diffusers==0.32.0.dev0 bnb==0.44.1
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza | https://github.com/huggingface/diffusers/issues/9996 | open | [
"bug",
"wip"
] | 2024-11-22T16:55:11Z | 2024-12-28T19:56:54Z | 16 | vladmandic |
huggingface/diffusers | 9,990 | How to diagnose problems in training custom inpaint model | ### Discussed in https://github.com/huggingface/diffusers/discussions/9989
<div type='discussions-op-text'>
<sup>Originally posted by **Marquess98** November 22, 2024</sup>
What I want to do is perform image inpainting when the input is a set of multimodal images, using SDXL as the pre-trained model. But the results are very poor right now, and I cannot determine whether the problem lies in the code, the dataset, the pre-trained model, or the training parameters.
The inference code snippet is as follows:
```python
noise_scheduler = DDIMScheduler.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="scheduler")
noise_scheduler.set_timesteps(denoise_steps, device=device)
zi = vae.encode(masked_image).latent_dist.sample()
# zi = vae.encode(masked_image).latent_dist.sample()
zi = zi * vae.config.scaling_factor
zd = vae.encode(img2).latent_dist.sample()
zd = zd * vae.config.scaling_factor
zi_m = vae.encode(masked_image).latent_dist.sample()
zi_m = zi_m * vae.config.scaling_factor
noise = torch.randn_like(zi)
denoise_steps = torch.tensor(denoise_steps, dtype=torch.int32, device=device)
timesteps_add, _ = get_timesteps(noise_scheduler, denoise_steps, 1.0, device, denoising_start=None)
start_step = 5
zi_t = noise_scheduler.add_noise(zi, noise, timesteps_add[start_step])
# mask = mask.unsqueeze(1)
m = F.interpolate(mask.to(zi.dtype), size=(zi.shape[2], zi.shape[3]),
                  mode='bilinear', align_corners=False)
input_ids = dataset["prompt_ids"].to(device)
input_ids = input_ids.unsqueeze(0)
encoder_hidden_states = text_encoder(input_ids, return_dict=False)[0]
timesteps = noise_scheduler.timesteps
iterable = tqdm(
    enumerate(timesteps),
    total=len(timesteps),
    leave=False,
    desc=" " * 4 + "Diffusion denoising",
)
# iterable = enumerate(timesteps)
start_step = 1
# -----------------------denoise------------------------
for i, t in iterable:
    if i >= start_step:
        unet_input = torch.cat([zi_t, zi_m, zd, m], dim=1)
        with torch.no_grad():
            noise_pred = unet(unet_input, t, encoder_hidden_states)[0]
        zi_t = noise_scheduler.step(noise_pred, t, zi_t).prev_sample
        # torch.cuda.empty_cache()
decode_rgb = vae.decode(zi_t / vae.config.scaling_factor)
decode_rgb = decode_rgb['sample'].squeeze()
```
The results for start_step = 0, 5, and 15 respectively are as follows:
[result images for start_step = 0, 5, and 15 omitted]
Another weird thing is that the `decode_rgb` range is about [-2, 2]. Shouldn't its range be [-1, 1]?
Currently, I think the problem may lie in either the inference code or the scale of the dataset (about 5,000 image sets so far). Can someone guide me on how to determine which part is the problem?
Any suggestions and ideas will be greatly appreciated !!!!</div> | https://github.com/huggingface/diffusers/issues/9990 | closed | [] | 2024-11-22T03:16:50Z | 2024-11-23T13:37:53Z | null | Marquess98 |
huggingface/Google-Cloud-Containers | 123 | Querying PaliGemma VLMs | My collaborators and I are trying to use your very useful containers to deploy and use Google's PaliGemma models on GCS/Vertex. I was wondering what the best way is to query the model with images, especially if the images are stored locally? I see that there is an [example showing this for Llama Vision](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/vertex-notebook.ipynb), but it seems like you have to pass in the images as URLs, which may not be feasible for us.
We're getting some success by doing something like this, but unsure if that's the right way:
```py
import base64

image_path = "/PATH/rabbit.png"
with open(image_path, "rb") as f:
    image = base64.b64encode(f.read()).decode("utf-8")
image = f"data:image/png;base64,{image}"
output = deployed_model.predict(
    instances=[
        {
            "inputs": "What is the animal wearing?",
            "parameters": {"max_new_tokens": 100, "do_sample": False},
        }
    ]
)
#> space suit
```
```
Please let me know if you need more details! Any assistance would be much appreciated! | https://github.com/huggingface/Google-Cloud-Containers/issues/123 | closed | [
"question"
] | 2024-11-21T14:52:41Z | 2024-12-04T16:31:01Z | null | kanishkamisra |
huggingface/diffusers | 9,983 | When using StableDiffusionControlNetImg2ImgPipeline with enable_vae_tiling(), the tile size seems fixed at 512 x 512; where should I set the relevant parameters? | ```
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
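# Sketch of the knob I'm looking for (assumption from reading AutoencoderKL:
# the tile size lives on the VAE as plain attributes and is not exposed as a
# pipeline argument -- the attribute names below may be wrong):
pipe.vae.tile_sample_min_size = 1024        # pixel-space tile size
pipe.vae.tile_latent_min_size = 1024 // 8   # matching latent-space tile size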
``` | https://github.com/huggingface/diffusers/issues/9983 | closed | [] | 2024-11-21T09:21:24Z | 2024-12-02T08:32:52Z | null | reaper19991110 |
huggingface/datatrove | 305 | How to read text files | Hey all, is there a text reader in the repo?
I have text files where each line is a document/data sample.
Are there any readers which can read these kind of files directly? | https://github.com/huggingface/datatrove/issues/305 | open | [] | 2024-11-21T06:55:21Z | 2025-05-16T10:51:33Z | null | srinjoym-cerebras |
huggingface/diffusers | 9,979 | flux img2img controlnet channels error | ### Describe the bug
When I use flux's img2img controlnet for inference, a channel error occurs.
### Reproduction
```python
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers.utils import load_image
from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetPipeline
from diffusers import FluxControlNetModel
from controlnet_aux import HEDdetector
base_model = "black-forest-labs/FLUX.1-dev"
controlnet_model = "Xlabs-AI/flux-controlnet-hed-diffusers"
controlnet = FluxControlNetModel.from_pretrained(
    controlnet_model,
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)
pipe = FluxControlNetImg2ImgPipeline.from_pretrained(
    base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("./toonystarkKoreanWebtoonFlux_fluxLoraAlpha.safetensors")
pipe.enable_sequential_cpu_offload()
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
image_source = load_image("./03.jpeg")
control_image = hed(image_source)
control_image = control_image.resize(image_source.size)
if control_image.mode != 'RGB':
    control_image = control_image.convert('RGB')
control_image.save(f"./hed_03.png")
prompt = "bird, cool, futuristic"
image = pipe(
    prompt,
    image=image_source,
    control_image=control_image,
    control_guidance_start=0.2,
    control_guidance_end=0.8,
    controlnet_conditioning_scale=0.5,
    num_inference_steps=50,
    guidance_scale=6,
).images[0]
image.save("flux.png")
```
### Logs
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 2
1 prompt = "bird, cool, futuristic"
----> 2 image = pipe(
3 prompt,
4 image=image_source,
5 control_image=control_image,
6 control_guidance_start=0.2,
7 control_guidance_end=0.8,
8 controlnet_conditioning_scale=0.5,
9 num_inference_steps=50,
10 guidance_scale=6,
11 ).images[0]
12 image.save("flux.png")
File /opt/conda/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/conda/lib/python3.11/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet_image_to_image.py:924, in FluxControlNetImg2ImgPipeline.__call__(self, prompt, prompt_2, image, control_image, height, width, strength, num_inference_steps, timesteps, guidance_scale, control_guidance_start, control_guidance_end, control_mode, controlnet_conditioning_scale, num_images_per_prompt, generator, latents, prompt_embeds, pooled_prompt_embeds, output_type, return_dict, joint_attention_kwargs, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)
921 controlnet_cond_scale = controlnet_cond_scale[0]
922 cond_scale = controlnet_cond_scale * controlnet_keep[i]
--> 924 controlnet_block_samples, controlnet_single_block_samples = self.controlnet(
925 hidden_states=latents,
926 controlnet_cond=control_image,
927 controlnet_mode=control_mode,
928 conditioning_scale=cond_scale,
929 timestep=timestep / 1000,
930 guidance=guidance,
931 pooled_projections=pooled_prompt_embeds,
932 encoder_hidden_states=prompt_embeds,
933 txt_ids=text_ids,
934 img_ids=latent_image_ids,
935 joint_attention_kwargs=self.joint_attention_kwargs,
936 return_dict=False,
937 )
939 guidance = (
940 torch.tensor([guidance_scale], device=device) if self.transformer.config.guidance_embeds else None
941 )
942 guidance = guidance.expand(latents.shape[0]) if guidance is not None else None
File /opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1510 else:
-> 1511 return self._call_impl(*args, **kwargs)
File /opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py:1520, in Module._call_impl(self, *args, **kwargs)
1515 # If we don't have any hooks, we want to skip the rest of the logic in
1516 # this function, and just call forward.
1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1518 or _global_backward_pre_hooks or _global_backward_hooks
1519 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1520 return forward_call(*args, **kwargs)
1522 try:
1523 result = None
File /opt/conda/lib/python3.11/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward( | https://github.com/huggingface/diffusers/issues/9979 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-11-21T03:39:12Z | 2025-04-23T20:43:51Z | 10 | wen020 |
huggingface/diffusers | 9,976 | ControlNet broken from_single_file | ### Describe the bug
The ControlNet loader `from_single_file` was originally added via #4084,
and the method `ControlNetModel.from_single_file()` works for non-converted ControlNets.
But for ControlNets in safetensors format that contain an already-converted state_dict, it errors out.
It's not reasonable to expect a user to know the internal dict structure of a ControlNet safetensors file
before they can use it.
Even worse, some of the newer ControlNets are distributed as single-file-only and are already in diffusers format,
which makes them impossible to load in diffusers.
for example: <https://huggingface.co/Laxhar/noob_openpose/tree/main>
This issue has already been raised several times, each time closed as "works as designed",
when in reality it's a failure that should be addressed as an issue.
See #8474, #9208, and #8614 as examples of previous issues.
### Reproduction
scenario-1: works with non-converted controlnet
```python
import torch
from diffusers import ControlNetModel
from huggingface_hub import hf_hub_download
local_path = hf_hub_download(repo_id='Aptronym/SDNext', filename='ControlNet11/controlnet11Models_canny.safetensors')
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)
print(cn.__class__)
```
scenario-2: fails for the majority of controlnets available on Hugging Face
```python
import torch
from diffusers import ControlNetModel
from huggingface_hub import hf_hub_download
local_path = hf_hub_download(repo_id='lllyasviel/sd_control_collection', filename='diffusers_xl_canny_small.safetensors')
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)
print(cn.__class__)
```
The initial failure is nonsensical:
> OSError: stable-diffusion-v1-5/stable-diffusion-v1-5 does not appear to have a file named config.json.
What makes this worse is that SD15 and SDXL share the same `ControlNetModel` class, which causes some
confusion about which base repo the config should be looked up from.
E.g., here we're loading an SDXL ControlNet and the error refers to an SD15 repo.
Anyhow, trying to force the correct config:
```py
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16, config='diffusers/controlnet-canny-sdxl-1.0-small')
```
results in an even worse, nonsensical failure while loading the state_dict:
> TypeError: is_floating_point(): argument 'input' (position 1) must be Tensor, not NoneType
### System Info
diffusers=0.32.0.dev0
python==3.12.3
torch==2.5.1+cu124
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza | https://github.com/huggingface/diffusers/issues/9976 | closed | [
"bug"
] | 2024-11-20T13:46:14Z | 2024-11-22T12:22:53Z | 7 | vladmandic |
huggingface/lerobot | 515 | ACT is working, but not Diffusion | Hello Team,
Your work is great. I am currently working on creating some nice policies with the LeRobot repo, architecture, and software. I tried ACT on my robot and it works fine; it is able to execute the tasks it learned during evaluation.
I tried training the Diffusion policy multiple times, both with different params and with the default params provided in the repo. PushT works in Colab, but the policy does not work on the robot. Can you please explain why it's not working, or should I change something else?
I forgot to mention that I used 3 cameras for data collection and training for Diffusion.
Thank you
EDIT (aliberts): format | https://github.com/huggingface/lerobot/issues/515 | closed | [
"question",
"policies",
"stale"
] | 2024-11-19T18:58:28Z | 2025-11-30T02:37:09Z | null | Kacchan16 |
huggingface/transformers.js | 1,042 | How can I pass embeddings or context to a text2text-generation model? | ### Question
I downloaded the model locally. I found that there doesn't seem to be an API that allows me to pass embeddings. How can I make this model understand the context?
I then tried to pass the context content to this model, but the model didn't seem to accept it and produced the following output.
The code is as follows:
```js
const model = await pipeline("text2text-generation", "LaMini-Flan-T5-783M")
const result = await model("you are a teacher, who are you?", {})
```
This is the model output:
```json
[
{
"generated_text": "As an AI language model, I am not a teacher."
}
]
```
I don't know whether it's due to the model itself or that I just haven't found the API for passing the context😕
| https://github.com/huggingface/transformers.js/issues/1042 | closed | [
"question"
] | 2024-11-19T18:32:45Z | 2024-11-20T05:34:45Z | null | electroluxcode |
huggingface/transformers.js | 1,041 | Full preload example | ### Question
Hello!
I'm looking for a full "preload model" nodejs example.
Say I do this:
```ts
import { env } from '@huggingface/transformers';
env.allowRemoteModels = false;
env.localModelPath = '/path/to/local/models/';
```
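One idea I'm considering is a build-time warm-up script that runs a pipeline once so the files land in a known directory (a sketch; I'm assuming `env.cacheDir` controls the download target, and the model id is just an example):
```ts
// download-model.mts -- run once in the Dockerfile build step
import { env, pipeline } from '@huggingface/transformers';

env.cacheDir = '/path/to/local/models/'; // download target (assumption)
// Instantiating the pipeline forces the model files to be fetched and cached.
await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
```
but I'm not sure the resulting cache layout matches what `localModelPath` expects.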
how do I get the model to that path? I want to download it when building my Docker image. | https://github.com/huggingface/transformers.js/issues/1041 | closed | [
"question"
] | 2024-11-19T12:34:04Z | 2024-11-26T12:44:55Z | null | benjick |
huggingface/transformers.js | 1,038 | script.convert tfjs model to onnx support | ### Question
I'm using tfjs-node to create an image-classifier model,
but I'm stuck on how to convert `model.json` to a format that optimum or script.convert can use to produce an ONNX file.
I'm able to convert to a graph model using
```
tensorflowjs_converter --input_format=tfjs_layers_model \ --output_format=tfjs_graph_model \ ./saved-model/layers-model/model.json \ ./saved-model/graph-model
```
and then I can convert to an onnx using
```
python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx
```
This works fine when I test in Python, but I'm unable to use it in transformers.js - I probably need to use optimum to convert it?
I tried a number of approaches but was unable to convert to ONNX - I then saw script.convert but am having difficulties.
- This is an example of the code I'm using to test the model with:
```
import onnxruntime as ort
from PIL import Image
import numpy as np
# Load the ONNX model
session = ort.InferenceSession('./saved-model/model.onnx')
# Get input and output names
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
# Load and preprocess the image
img = Image.open('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg').resize((128, 128))
img_array = np.array(img).astype(np.float32) / 255.0 # Normalize pixel values to [0, 1]
img_array = np.expand_dims(img_array, axis=0) # Add batch dimension
# Run inference
outputs = session.run([output_name], {input_name: img_array})
print(f"Inference outputs: {outputs}")
```
Any guidance on how to go from tfjs model.json to onnx supported by transformers.js would really help me out.
Thanks!
| https://github.com/huggingface/transformers.js/issues/1038 | open | [
"question"
] | 2024-11-18T15:42:46Z | 2024-11-19T10:08:28Z | null | JohnRSim |
huggingface/chat-ui | 1,573 | Include chat-ui in an existing React application | Hello,
Is it possible to integrate / embed chat-ui in an existing application, like a React component?
For example, to add a chat module to an existing website with the UI of chat-ui.
As is the case with Chainlit : https://docs-prerelease.chainlit.io/customisation/react-frontend | https://github.com/huggingface/chat-ui/issues/1573 | open | [
"enhancement"
] | 2024-11-18T14:11:58Z | 2024-11-18T14:15:17Z | 0 | martin-prillard |
huggingface/optimum | 2,097 | TFJS support model.json to ONNX conversion | ### Feature request
I'm currently using Node to create an image-classifier `model.json` with tfjs.
- I don't think Optimum supports converting this format to ONNX?
It would be nice to just use optimum and point to model.json.
### Motivation
Currently I'm creating the model, converting it to a graph model, and then converting that to ONNX like this -
```
tensorflowjs_converter --input_format=tfjs_layers_model \ --output_format=tfjs_graph_model \ ./saved-model/layers-model/model.json \ ./saved-model/graph-model
```
```
python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx
```
I'm not sure how to switch to use optimum - do I need to convert model.json to .h5 and then run?
- if I try this I run into huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './path_to_save/model.h5'. Use `repo_type` argument if needed
### Your contribution
N/A | https://github.com/huggingface/optimum/issues/2097 | open | [
"exporters",
"tflite"
] | 2024-11-18T12:55:05Z | 2024-11-19T10:22:35Z | 0 | JohnRSim |
huggingface/optimum-benchmark | 294 | How to Use a Local Model When Calling the Python API |
| https://github.com/huggingface/optimum-benchmark/issues/294 | closed | [] | 2024-11-18T06:36:24Z | 2024-12-09T12:23:30Z | null | WCSY-YG |
huggingface/lerobot | 511 | Minimum Requirements - Running Policies in production/ Training Policies | I was wondering what types of hardware can policies trained using lerobot can run on. Lets say I wanted to run policies in production on say a raspberry pi. Is it possible to run training on beefier hardware and then deploy policies to lower-end hardware to run? Is it better to record with various cameras or just use the same camera? What is the minimum quality?
You have tutorials on training and evaluating policies but nothing about deploying to production. Would be interesting to see this.
Thank you
| https://github.com/huggingface/lerobot/issues/511 | closed | [
"question"
] | 2024-11-17T17:34:50Z | 2025-04-07T16:23:41Z | null | rkeshwani |
huggingface/transformers.js | 1,035 | How can I implement partial output in the react demo? | ### Question
Hello! I am reading the Transformers.js documentation for "[Building a react application](https://huggingface.co/docs/transformers.js/tutorials/react)", but I encountered an issue at [step 4](https://huggingface.co/docs/transformers.js/tutorials/react#step-4-connecting-everything-together).
I don't know how to implement the **partial output** of the translation results, even though the documentation provides the following instructions:
```javascript
let output = await translator(event.data.text, {
  tgt_lang: event.data.tgt_lang,
  src_lang: event.data.src_lang,
  // Allows for partial output
  callback_function: x => {
    self.postMessage({
      status: 'update',
      output: translator.tokenizer.decode(x[0].output_token_ids, { skip_special_tokens: true })
    });
  }
});
```
I have completed all the steps in the tutorial documentation, but I still cannot get the output to work properly. I tried using `console.log` for debugging and found that the `callback_function` is not working, and the main thread is not receiving any messages with the status `update`. I have also not found any information about the `callback_function` in the transformers.js documentation. I apologize for taking up your time, but I sincerely need your help. 🙏 | https://github.com/huggingface/transformers.js/issues/1035 | open | [
"question"
] | 2024-11-17T11:29:22Z | 2024-12-02T23:00:13Z | null | DikkooXie |
huggingface/lerobot | 510 | Is it compulsory to use Trossen Robotics robots for this repo? | Or will any robot work fine?
Also one more question.
Do we have to use a depth camera, or will a simple camera work fine? | https://github.com/huggingface/lerobot/issues/510 | closed | [
"question",
"robots"
] | 2024-11-17T11:14:52Z | 2025-04-07T16:27:40Z | null | hemangjoshi37a |
huggingface/diffusers | 9,942 | Unable to install pip install diffusers>=0.32.0dev | ### Describe the bug
I am trying to install the following version:
`pip install diffusers>=0.32.0dev`
However, it does nothing:
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip install diffusers>=0.32.0dev
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>
```
I even uninstalled the previous version
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip uninstall diffusers
Found existing installation: diffusers 0.31.0
Uninstalling diffusers-0.31.0:
Would remove:
c:\aitools\cogvideo\cv_venv\lib\site-packages\diffusers-0.31.0.dist-info\*
c:\aitools\cogvideo\cv_venv\lib\site-packages\diffusers\*
c:\aitools\cogvideo\cv_venv\scripts\diffusers-cli.exe
Proceed (Y/n)? y
Successfully uninstalled diffusers-0.31.0
```
### Reproduction
Create a conda environment and install using
`pip install diffusers>=0.32.0dev`
So I understand it is not released here:
https://pypi.org/project/diffusers/#history
How do I install it on Windows 11?
I even checked the branch:
[screenshot omitted]
### Logs
_No response_
### System Info
Python 3.11.10
Windows 11
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9942 | closed | [
"bug"
] | 2024-11-17T10:26:19Z | 2024-11-17T12:27:23Z | 0 | nitinmukesh |
huggingface/candle | 2,622 | How to compute `Atan2` for tensors? | I am trying to implement DeepPhase in candle, but I am struggling to figure out how to calculate the phase angles from two tensors using the `atan2` operation. | https://github.com/huggingface/candle/issues/2622 | open | [] | 2024-11-16T16:45:36Z | 2024-11-17T14:21:50Z | null | cryscan |
huggingface/transformers.js | 1,032 | How to identify which models will work with transformers.js? | ### Question
I've tried multiple models from the MTEB dashboard (e.g. `jinaai/jina-embeddings-v3`, `jinaai/jina-embeddings-v2`, `dunzhang/stella_en_400M_v5`), but none of them work.
It's not clear which models will work.
```ts
const generateGteSmallEmbedding = await pipeline(
  'feature-extraction',
  'dunzhang/stella_en_400M_v5',
);
``` | https://github.com/huggingface/transformers.js/issues/1032 | open | [
"question"
] | 2024-11-15T22:13:00Z | 2024-12-22T02:41:43Z | null | punkpeye |
huggingface/datasets | 7,291 | Why doesn't return_tensors='pt' work? | ### Describe the bug
I tried to add `input_ids` to a dataset with `map()` using `return_tensors='pt'`, but why does the result come back as a List?
[screenshot omitted]
### Steps to reproduce the bug

### Expected behavior
Sorry for the silly question, I'm a noob with this tool. But I think it should return a tensor value since I used that argument?
When I tokenize a single sentence with `tokenized_input = tokenizer(input, return_tensors='pt')`, it does return a tensor. Why doesn't it work in `map()`?
### Environment info
transformers>=4.41.2,<=4.45.0
datasets>=2.16.0,<=2.21.0
accelerate>=0.30.1,<=0.34.2
peft>=0.11.1,<=0.12.0
trl>=0.8.6,<=0.9.6
gradio>=4.0.0
pandas>=2.0.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
packaging
pyyaml
numpy<2.0.0
| https://github.com/huggingface/datasets/issues/7291 | open | [] | 2024-11-15T15:01:23Z | 2024-11-18T13:47:08Z | 2 | bw-wang19 |
huggingface/speech-to-speech | 141 | I don't want to record in real time; how can I upload an audio clip? | I start the server on a remote machine.
After launching `python listen_and_play.py` locally on Windows 10, the server side exits after a short while if nothing is recorded???
How should I pass a pre-recorded audio clip to have it translated?
huggingface/diffusers | 9,930 | [PAG] - Adaptive Scale bug | ### Describe the bug
What is the purpose of the PAG adaptive scale? I was passing a value to it, for example 5.0, while passing 3.0 as the PAG scale; according to the implemented code this produces a negative number, so the scale returns 0 and PAG is not applied. I did not find an explanation of this parameter in the documentation.
So i found it on an ComfyUI documentation: "_This dampening factor reduces the effect of PAG during the later stages of the denoising process, speeding up the overall sampling. A value of 0.0 means no penalty, while 1.0 completely removes PAG_"
Then I realized I was passing values above 1.0; however, even a value of 0.2 is enough to stop PAG from being applied. I suspect this could be a problem.
If you run the code below, you will see that in the third image, where I pass 0.2 as the adaptive_scale, PAG is practically disabled in the first generation steps.
I propose a possible solution:
After this code:
https://github.com/huggingface/diffusers/blob/5c94937dc7561767892d711e199f874dc35df041/src/diffusers/pipelines/pag/pag_utils.py#L93
We can change for:
```python
if self.do_pag_adaptive_scaling:
    signal_scale = self.pag_scale
    if t / self.num_timesteps > self.pag_adaptive_scale:
        signal_scale = 0
    return signal_scale
else:
    return self.pag_scale
```
And inside every PAG pipeline, we need to replace the `t` variable with the `i` variable, passed as a parameter to this function so that it receives the current step number.
https://github.com/huggingface/diffusers/blob/5c94937dc7561767892d711e199f874dc35df041/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L1253
With this, the logic is no longer "the higher the adaptive scale value, the faster PAG is disabled", but quite the opposite: the scale tells you exactly at what point in the process PAG will be disabled. If the scale is set to 0.5 in a 30-step generation, PAG will be disabled from step 15 onwards. The scale applied stays the same until the cutoff and is not a variable scale.
I don't know if this was the original purpose of this parameter, but it works well for me.
### Reproduction
```python
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda"
pipeline_sdxl = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
).to(device)
pipeline = AutoPipelineForText2Image.from_pipe(pipeline_sdxl, enable_pag=True).to(device)
pipeline.enable_vae_tiling()
pipeline.enable_model_cpu_offload()
prompt = "an insect robot preparing a delicious meal, anime style"
for i, pag_scale in enumerate([0.0, 3.0, 3.0]):
    generator = torch.Generator(device="cpu").manual_seed(0)
    images = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=7.0,
        generator=generator,
        pag_scale=pag_scale,
        pag_adaptive_scale=0.0 if i < 2 else 0.2
    ).images[0]
    images.save(f"./data/result_pag_{i+1}.png")
```
### Logs
```shell
N/A
```
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.1 (cpu)
- Jax version: 0.4.35
- JaxLib version: 0.4.35
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.2
- Accelerate version: 1.1.1
- PEFT version: 0.13.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA GeForce RTX 3060 Ti, 8192 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@yiyixuxu , @asomoza | https://github.com/huggingface/diffusers/issues/9930 | open | [
"bug",
"stale"
] | 2024-11-15T02:00:19Z | 2024-12-15T15:03:05Z | 1 | elismasilva |
huggingface/safetensors | 541 | [Question] Safetensors seem to block the main thread -- but torch.save does not? | I have the following code in my training loop:
```
if rank == 0:
    t = Thread(
        target=save_file,
        args=(model_sd, f"{cfg.model_dir}/model_{step + 1}.safetensors"),
        daemon=True
    )
    t.start()
```
Which saves the checkpoint to disk using safetensors. However, I notice that this blocks the training loop, even though the thread should be running in the background.
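As an experiment, I'm considering pushing the save into a separate process instead of a thread, since a process wouldn't be affected even if `save_file` holds the GIL (a sketch of the idea, not tested in my real loop):
```python
from concurrent.futures import ProcessPoolExecutor
from safetensors.torch import save_file

executor = ProcessPoolExecutor(max_workers=1)

def async_save(model_sd, path):
    # Tensors must be on CPU so they can be pickled across the process boundary.
    cpu_sd = {k: v.detach().cpu() for k, v in model_sd.items()}
    return executor.submit(save_file, cpu_sd, path)
```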
When I switch the code to use `torch.save`, there's no issue. What should I do? | https://github.com/huggingface/safetensors/issues/541 | open | [] | 2024-11-15T00:37:55Z | 2025-02-26T09:51:23Z | 4 | vedantroy |
huggingface/peft | 2,216 | How to specify the coefficients of loading lora during inference? | https://github.com/huggingface/peft/issues/2216 | closed | [] | 2024-11-14T11:47:00Z | 2024-11-18T11:30:03Z | null | laolongboy | |
huggingface/chat-ui | 1,565 | Is there any place that uses this environment variable? | https://github.com/huggingface/chat-ui/blob/ab349d0634ec4cf68a781fd7afc5e7fdd6bb362f/.env#L59-L65
It seems like it can be deleted. | https://github.com/huggingface/chat-ui/issues/1565 | closed | [] | 2024-11-14T11:12:49Z | 2024-11-14T11:17:04Z | 2 | calycekr |
huggingface/diffusers | 9,927 | HeaderTooLarge when training ControlNet with SD3 | ### Describe the bug
Hello, I tried using diffusers to train a ControlNet with SD3, but training doesn't start and it fails with `safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge`. I don't know how to handle it.
### Reproduction
Follow the README_v3 guide.
### Logs
```shell
(diffusers) [liudongyu@localhost controlnet]$ accelerate launch train_controlnet_sd3.py --pretrained_model_name_or_path=$MODEL_DIR --output_dir=$OUTPUT_DIR --train_data_dir="/home/users/liudongyu/datasets" --resolution=1024 --learning_rate=1e-5 --max_train_steps=20000 --train_batch_size=1 --gradient_accumulation_steps=4
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
11/14/2024 15:16:14 - INFO - __main__ - Distributed environment: DistributedType.NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: no
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'max_image_seq_len', 'base_image_seq_len', 'use_dynamic_shifting', 'max_shift', 'base_shift'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
File "/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1423, in <module>
main(args)
File "/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py", line 982, in main
text_encoder_one, text_encoder_two, text_encoder_three = load_text_encoders(
^^^^^^^^^^^^^^^^^^^
File "/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py", line 187, in load_text_encoders
text_encoder_two = class_two.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3789, in from_pretrained
with safe_open(resolved_archive_file, framework="pt") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
Traceback (most recent call last):
File "/home/users/liudongyu/anaconda3/envs/diffusers/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1168, in launch_command
simple_launcher(args)
File "/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/launch.py", line 763, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/users/liudongyu/anaconda3/envs/diffusers/bin/python', 'train_controlnet_sd3.py', '--pretrained_model_name_or_path=stabilityai/stable-diffusion-3-medium-diffusers', '--output_dir=sd3-controlnet-out', '--train_data_dir=/home/users/liudongyu/datasets', '--resolution=1024', '--learning_rate=1e-5', '--max_train_steps=20000', '--train_batch_size=1', '--gradient_accumulation_steps=4']' returned non-zero exit status 1.
```
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-3.10.0-1160.114.2.el7.x86_64-x86_64-with-glibc2.17
- Running on Google Colab?: No
- Python version: 3.11.10
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.2
- Transformers version: 4.45.2
- Accelerate version: 1.0.0
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A100-PCIE-40GB, 40960 MiB
NVIDIA A100 80GB PCIe, 81920 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9927 | closed | [
"bug"
] | 2024-11-14T07:28:03Z | 2024-11-21T13:02:05Z | 3 | Viola-Siemens |
huggingface/datasets | 7,290 | `Dataset.save_to_disk` hangs when using num_proc > 1 | ### Describe the bug
Hi, I've encountered a small issue when saving datasets that can make saving take up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`
The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation on how to enable it.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
n_samples = int(4e6)
n_tokens_sample = 100
data_dict = {
    'tokens': np.random.randint(0, 100, (n_samples, n_tokens_sample)),
}
dataset = Dataset.from_dict(data_dict)
dataset.save_to_disk('test_dataset', num_proc=1)
dataset.save_to_disk('test_dataset', num_proc=4)
dataset.save_to_disk('test_dataset', num_proc=8)
```
This results in:
```
>>> dataset.save_to_disk('test_dataset', num_proc=1)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=4)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=8)
Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s]
```
With larger datasets it can take hours, but I didn't benchmark that for this bug report.
### Expected behavior
I would expect using `num_proc>1` to be faster instead of slower than `num_proc=1`.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | https://github.com/huggingface/datasets/issues/7290 | open | [] | 2024-11-14T05:25:13Z | 2025-11-24T09:43:03Z | 4 | JohannesAck |
huggingface/trl | 2,356 | How to train from scratch? Can you provide the code | ### System Info
train from scratch
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
train from scratch
### Expected behavior
train from scratch
### Checklist
- [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [X] I have included my system information
- [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [X] Any traceback provided is complete | https://github.com/huggingface/trl/issues/2356 | closed | [
"❓ question"
] | 2024-11-14T02:39:41Z | 2024-12-13T23:00:20Z | null | sankexin |
huggingface/sentence-transformers | 3,054 | 'scale' hyperparameter in MultipleNegativesRankingLoss | I am looking through the MultipleNegativesRankingLoss.py code and I have a question about the 'scale' hyperparameter. Also known as the 'temperature', the scale is used to stretch or compress the range of output values from the similarity function. A larger scale creates a greater distinction between positive and negative examples in terms of similarity score differences. The line below shows how the scale is used in the forward function of the loss.
`scores = self.similarity_fct(embeddings_a, embeddings_b) * self.scale`
Currently, the scale is set to 20 when cosine similarity is used as the distance metric.
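To check my understanding, I wrote out what the scale does (this is my reading of the code, so treat it as an assumption): with in-batch negatives the loss is a cross-entropy over scaled similarities,

$$\mathcal{L}_i = -\log \frac{\exp\big(s \cdot \mathrm{sim}(a_i, p_i)\big)}{\sum_j \exp\big(s \cdot \mathrm{sim}(a_i, p_j)\big)}$$

so with cosine similarity bounded in $[-1, 1]$, a scale of $s = 20$ stretches the logits to $[-20, 20]$; without it, the softmax over near-identical logits would be almost uniform and the gradients tiny.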
Why was 20 selected as the scale when using cosine similarity on the embeddings? Is this the optimal scale value for cosine similarity? Would this hyperparameter need to be optimized during fine-tuning? | https://github.com/huggingface/sentence-transformers/issues/3054 | closed | [
"question"
] | 2024-11-14T00:11:23Z | 2025-01-16T13:54:45Z | null | gnatesan |
huggingface/diffusers | 9,924 | Can we get more schedulers for flow based models such as SD3, SD3.5, and Flux | It seems that advanced schedulers such as DDIM and DPM++ 2M do not work with flow-based models such as SD3, SD3.5, and Flux.
However, I only see 2 flow-based schedulers in the diffusers codebase:
FlowMatchEulerDiscreteScheduler, and
FlowMatchHeunDiscreteScheduler
I tried to use DPMSolverMultistepScheduler, but it does not generate correct images with flow-based models. Help?
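For reference, this is roughly what I tried (a sketch; `pipe` is a loaded SD3/Flux pipeline):
```python
# Swapping in a non-flow scheduler runs without errors,
# but the generated images are wrong.
from diffusers import DPMSolverMultistepScheduler

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```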
| https://github.com/huggingface/diffusers/issues/9924 | open | [
"wip",
"scheduler"
] | 2024-11-14T00:07:56Z | 2025-01-14T18:31:12Z | 40 | linjiapro |
huggingface/pytorch-image-models | 2,332 | [BUG] How to customize the number of classification heads | **Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
```python
from timm.models import create_model

checkpoint_path = "/nas_mm_2/yinxiaofei.yxf/open_source_model/InternViT-300M-448px/tmp/timm__vit_intern300m_patch14_448.ogvl_dist/model.safetensors"
model = create_model('vit_intern300m_patch14_448', checkpoint_path=checkpoint_path, num_classes=3)
```
**Screenshots**
RuntimeError: Error(s) in loading state_dict for VisionTransformer:
Missing key(s) in state_dict: "head.weight", "head.bias".
**Additional context**
If I remove the `num_classes=3` parameter, the program runs completely normally.
| https://github.com/huggingface/pytorch-image-models/issues/2332 | closed | [
"bug"
] | 2024-11-12T08:08:50Z | 2024-11-12T15:28:42Z | null | JarvisFei |
huggingface/unity-api | 30 | [QUESTION] | I have a simple game built in Unity and I'm using this Hugging Face API client for voice parsing. I'm trying to understand: when I build the game and distribute it to many users, how do I keep the same API key in every build so that users can install it and run voice control without any issues? | https://github.com/huggingface/unity-api/issues/30 | closed | [
"question"
] | 2024-11-12T02:35:52Z | 2024-11-20T01:46:16Z | null | harshal-14 |
huggingface/swift-transformers | 140 | How to use customized tokenizer? | Hello. I am writing this post because I have a question about loading a tokenizer model. I am trying to use a pre-trained tokenizer in a Swift environment. After training, how do I use the resulting .model and .vocab files so that the tokenizer I trained can be used in Swift through the swift-transformers API? I would appreciate an answer. | https://github.com/huggingface/swift-transformers/issues/140 | open | [
"tokenization"
] | 2024-11-11T09:36:14Z | 2025-09-10T13:19:10Z | null | cch1219 |
huggingface/diffusers | 9,900 | Potential bug in repaint? | https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322
According to line 5 of Algorithm 1 in the paper, shouldn't the second part of line 322 drop the `**0.5`?
Thanks! | https://github.com/huggingface/diffusers/issues/9900 | closed | [] | 2024-11-10T10:41:26Z | 2024-12-16T19:38:22Z | 3 | jingweiz |
huggingface/finetrainers | 82 | [question] What is the difference between the CogVideo schedulers and the normal diffusers schedulers? | ### Feature request
CogVideoXDPMScheduler vs DPMScheduler
CogVideoXDDIMScheduler vs DDIMScheduler
Hi Aryan, is there any sampling difference between these scheduler pairs?
@a-r-r-o-w
### Motivation
/
### Your contribution
/ | https://github.com/huggingface/finetrainers/issues/82 | closed | [] | 2024-11-09T17:15:57Z | 2024-12-19T14:43:23Z | null | foreverpiano |
huggingface/optimum | 2,092 | Add support for RemBERT in the ONNX export | ### Feature request
Add RemBERT to supported architectures for ONNX export.
### Motivation
Support for [RemBert](https://huggingface.co/docs/transformers/model_doc/rembert) was previously available in Transformers; see [here](https://github.com/huggingface/transformers/issues/16308).
### Your contribution
I can help by testing the implementation, or by providing the code if a tutorial is available. I was not able to find documentation on how to do that.
"onnx"
] | 2024-11-08T15:12:34Z | 2024-12-02T13:54:10Z | 1 | mlynatom |
huggingface/lerobot | 502 | Low accuracy for diffusion policy + aloha env + sim_transfer_cube_human dataset | I'm trying to use the diffusion policy and the ALOHA env to train on the sim_transfer_cube_human dataset. But after 60,000 training steps, the evaluation accuracy is only 2%-6%, and I don't know why. If I load the pre-trained ACT policy, the accuracy can reach 80%. | https://github.com/huggingface/lerobot/issues/502 | open | [
"question",
"simulation"
] | 2024-11-08T02:20:14Z | 2025-11-29T02:48:27Z | null | Kimho666 |
huggingface/local-gemma | 41 | How to load from file? | How do I load the model from a file, e.g. an .h5 file, instead of downloading it?
In particular, a model saved by keras_nlp. | https://github.com/huggingface/local-gemma/issues/41 | open | [] | 2024-11-07T03:01:25Z | 2024-11-07T03:03:31Z | null | datdq-abivin |
huggingface/diffusers | 9,876 | Why isn’t VRAM being released after training LoRA? | ### Describe the bug
When I use train_dreambooth_lora_sdxl.py, the VRAM is not released after training. How can I fix this?
### Reproduction
Not used.
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17
- Running on Google Colab?: No
- Python version: 3.8.20
- PyTorch version (GPU?): 2.2.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.2
- Transformers version: 4.45.2
- Accelerate version: 1.0.1
- PEFT version: 0.13.2
- Bitsandbytes version: 0.44.1
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA H800, 81559 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9876 | open | [
"bug",
"stale"
] | 2024-11-06T11:58:59Z | 2024-12-13T15:03:25Z | 14 | hjw-0909 |
huggingface/diffusers | 9,866 | Flux ControlNet can't be trained; does this script really work? | ### Describe the bug
Running with one process, the code breaks down and returns:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
Running with more than one process, the code breaks down and returns:
Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
### Reproduction
Just follow the instructions and it will reproduce.
### Logs
_No response_
### System Info
diffusers v0.32
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9866 | closed | [
"bug",
"stale"
] | 2024-11-05T08:51:57Z | 2024-12-05T15:19:12Z | 4 | liuyu19970607 |
huggingface/optimum-quanto | 346 | How to support activation 4-bit quantization? | As mentioned in the title. | https://github.com/huggingface/optimum-quanto/issues/346 | closed | [
"Stale"
] | 2024-11-04T09:59:21Z | 2024-12-10T02:10:31Z | null | Ther-nullptr |
huggingface/transformers | 34,591 | How to retrain the GLIP model on the Object365 dataset | Since I made some modifications to the GLIP model, I need to perform some pre-training again to improve performance. I replaced `_base_ = [../_base_/datasets/coco_detection.py]` with `_base_ = [../_base_/datasets/objects365v1_detection.py]` in `glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py` to train on Object365. Is this correct? | https://github.com/huggingface/transformers/issues/34591 | closed | [] | 2024-11-04T03:54:17Z | 2024-11-04T06:46:17Z | null | Polarisamoon |
huggingface/diffusers | 9,847 | Merge LoRA weights into base model | I have fine-tuned the Stable Diffusion model and would like to merge the LoRA weights into the model itself. Currently, I think PEFT supports this via the `merge_and_unload` function, but I can't find this option in diffusers. Is there any way to get the base model with the fine-tuned weights merged in? If I am not wrong, only the UNet part of the model weights needs to be merged.
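For reference, what I'm looking for is roughly the diffusers counterpart of this PEFT pattern (a sketch; `fuse_lora` is the closest-sounding name I've seen in the diffusers docs, but I may be misreading what it does):
```python
# PEFT pattern I'm used to: merge adapter weights into the base model.
merged_model = peft_model.merge_and_unload()

# What I'd expect in diffusers (unverified): fuse the loaded LoRA
# into the pipeline's weights in place.
pipe.load_lora_weights("path/to/lora.safetensors")
pipe.fuse_lora()
```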
This is necessary for tasks like feature extraction. | https://github.com/huggingface/diffusers/issues/9847 | closed | [] | 2024-11-02T18:00:28Z | 2024-11-03T03:03:45Z | 1 | yaswanth19 |
huggingface/chat-ui | 1,550 | Add full-text search in chat history | ## Describe your feature request
Allow users to search for specific keywords or phrases within the chat history, making it easier to find and recall previous conversations.
## Screenshots (if relevant)
An example of the search bar placement could be found in #1079
## Implementation idea
One possible implementation could be to use a library to index the chat history data. This would allow for efficient and scalable search functionality. The search bar could be added to the chat history interface, and when a user enters a search query, it would send a request to the search index to retrieve relevant results. The results could be displayed in a dropdown list or a separate search results page, with links to the original chat messages.
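A rough sketch of what I have in mind, assuming chat-ui keeps conversations in a MongoDB `conversations` collection (the field names here are guesses, not the real schema):
```ts
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URL ?? "");
const conversations = client.db("chat-ui").collection("conversations");

// One-time setup: a text index over titles and message contents.
export async function ensureSearchIndex() {
  await conversations.createIndex({ title: "text", "messages.content": "text" });
}

// Return the user's conversations matching `query`, ranked by relevance.
export async function searchHistory(userId: string, query: string) {
  return conversations
    .find(
      { userId, $text: { $search: query } },
      { projection: { title: 1, updatedAt: 1, score: { $meta: "textScore" } } }
    )
    .sort({ score: { $meta: "textScore" } })
    .limit(20)
    .toArray();
}
```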
## Previous proposals and why this one is different
I'm aware that a similar proposal was made in the past #243, but it was rejected in favor of using the browser's page search functionality (ctrl + F). However, I'd like to argue that page search does not provide the same functionality as a dedicated full-text search in chat history. Here's why:
- Page search is limited to the currently loaded chat history and previous chat names, whereas a dedicated search would allow users to search across the entire conversation history, even if it's not currently loaded on the page.
- Page search does not provide any contextual information, such as the date and time of the message, or the conversation, whereas a dedicated search could provide this information and make it easier for users to understand the context of the search results.
Given these differences, I believe that a dedicated full-text search in chat history is a valuable feature that would greatly improve the user experience, and I'd like to propose it again for consideration.
Personally, I tend to create a new chat for each small problem to keep the LLM focused on what's important. As a result, I end up with too many chats with similar names, which makes the browser page search nearly useless.
| https://github.com/huggingface/chat-ui/issues/1550 | closed | [
"enhancement"
] | 2024-11-01T19:27:41Z | 2025-05-28T15:03:19Z | 5 | kadykov |
huggingface/diffusers | 9,837 | [Feature] Is it possible to customize latents.shape / prepare_latent for context parallel case? | **Is your feature request related to a problem? Please describe.**
One may need to extend the code to the context-parallel case, where the latent sequence length has to be divided across ranks.
Instead of copying all the code of pipeline.py, the minimal modification is just adding a few lines that divide the latent shape and all_gather the result from the output.
I suggest adding this feature so that such a monkey patch (sketched below) becomes easier.
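A minimal sketch of the kind of monkey patch I mean (illustrative only: the split dimension depends on the model, and `pipe` stands for any DiT-style pipeline):
```python
import torch.distributed as dist

orig_prepare_latents = pipe.prepare_latents

def prepare_latents_cp(*args, **kwargs):
    latents = orig_prepare_latents(*args, **kwargs)
    # Keep only this rank's shard of the latent sequence.
    shards = latents.chunk(dist.get_world_size(), dim=1)
    return shards[dist.get_rank()].contiguous()

pipe.prepare_latents = prepare_latents_cp
# After denoising, the shards would be all_gather-ed back before VAE decode.
```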
| https://github.com/huggingface/diffusers/issues/9837 | closed | [
"stale"
] | 2024-11-01T14:32:05Z | 2024-12-01T15:07:36Z | 3 | foreverpiano |
huggingface/diffusers | 9,836 | [Feature] Can we record layer_id for DiT model? | **Is your feature request related to a problem? Please describe.**
Some layerwise algorithms may depend on the layer id.
This only needs a simple modification to Transformer2DModel and its inner modules, such as the attention and batch-norm parts: just pass the layer_id as an extra parameter, as sketched below.
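A minimal sketch of the modification (illustrative; a real change would thread the id through `forward` as the extra parameter):
```python
# Tag each transformer block with its index at load time; inner
# attention/norm modules can then look up `layer_id` on their parent block.
for i, block in enumerate(transformer.transformer_blocks):
    block.layer_id = i
```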
| https://github.com/huggingface/diffusers/issues/9836 | closed | [
"stale"
] | 2024-11-01T14:26:31Z | 2025-01-27T01:31:21Z | 9 | foreverpiano |
huggingface/diffusers | 9,835 | unused parameters lead to error when training contrlnet_sd3 | ### Discussed in https://github.com/huggingface/diffusers/discussions/9834
<div type='discussions-op-text'>
<sup>Originally posted by **Zheng-Fang-CH** November 1, 2024</sup>

Is there someone meet this question? I have this error no matter I train it on single gpu or multi gpu.</div> | https://github.com/huggingface/diffusers/issues/9835 | closed | [] | 2024-11-01T13:57:03Z | 2024-11-17T07:33:25Z | 6 | Daryu-Fan |
huggingface/diffusers | 9,833 | SD3.5-large. Why is it OK when calling with a single thread, but not with multiple threads? | ### Describe the bug
First, I created a SD3.5-large service:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
import uuid
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, DDIMScheduler, DDPMParallelScheduler
from diffusers import StableDiffusion3Pipeline
import torch
from transformers import T5EncoderModel
import time
from flask import request, jsonify
import logging
import sys
import flask

app = flask.Flask("sd_server")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("[%(asctime)s] %(levelname)s in %(module)s: %(message)s"))
app.logger.handlers.clear()
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)

# model pipeline
model_id = "../stable-diffusion-3.5-large"
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model_nf4 = SD3Transformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16
)
model_nf4 = model_nf4.to("cuda:0")
pipeline = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    transformer=model_nf4,
    torch_dtype=torch.bfloat16
)
# pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
# pipeline.scheduler = DDPMParallelScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to("cuda:0")
# # diffusers/t5-nf4
# t5_nf4 = T5EncoderModel.from_pretrained("text_encoder_3", torch_dtype=torch.bfloat16)
# t5_nf4 = t5_nf4.to("cuda:0")
# pipeline = StableDiffusion3Pipeline.from_pretrained(
#     model_id,
#     transformer=model_nf4,
#     text_encoder_3=t5_nf4,
#     torch_dtype=torch.bfloat16
# )
# pipeline = pipeline.to("cuda:0")

def generate_uuid_filename(extension=".jpeg"):
    filename = f"{uuid.uuid4()}{extension}"
    return filename

def image_generation(prompt, negative_prompt, width, height, save_path, num_inference_steps=28, guidance_scale=4.5, max_sequence_length=512):
    image = pipeline(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=num_inference_steps,
        width=width,
        height=height,
        guidance_scale=guidance_scale,
        max_sequence_length=max_sequence_length,
    ).images[0]
    file_name = generate_uuid_filename()
    image.save(os.path.join(save_path, file_name))
    torch.cuda.empty_cache()
    return f"{file_name} saved."

def update_prompt(req_data):
    trans = {"natural": ["cinematic photo ```%s``` , photograph, film, bokeh, professional, 4k, highly detailed",
                         "drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly"],
             "vivid": ["HDR photo of ``%s``` . High dynamic range, vivid, rich details, clear shadows and highlights, realistic, intense, enhanced contrast, highly detailed",
                       "flat, low contrast, oversaturated, underexposed, overexposed, blurred, noisy"]}
    style = "natural"
    try:
        if req_data.get('style') != None:
            if req_data.get('style') in trans.keys():
                style = req_data.get('style')
    except:
        pass
    import re
    try:
        req_data["promptEnglish"] = re.findall(r'\\"(.+)\\"', req_data["promptEnglish"])[0]
    except:
        pass
    prompt = trans[style][0] % req_data["promptEnglish"]
    negative_prompt = trans[style][1]
    if req_data["negativePromptEnglish"] not in [None, '']:
        negative_prompt = req_data["negativePromptEnglish"]
    return prompt, negative_prompt

@app.route('/api/text_to_img', methods=['POST'])
def route():
    res = {"id": "",
           "object": "image",
           "created": int(time.time()),
           "data": []}
    req_data = request.json
    app.logger.info(req_data)
    prompt, negative_prompt = update_prompt(req_data)
    app.logger.info(prompt + "|" + negative_prompt)
    width = int(req_data["size"].split("x")[0])
    height = int(req_data["size"].split("x")[1])
    res["data"] = image_generation(prompt, negative_prompt, width, height, './')
    return jsonify(res)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=12571, threaded=True, debug=False)
```
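(A hedged aside, not part of the original report: diffusers pipelines are generally not thread-safe, so a common mitigation for the concurrent-request failure shown below is to serialize access to the shared pipeline; `generation_lock` is an illustrative name.)
```python
import threading

generation_lock = threading.Lock()

def image_generation_locked(*args, **kwargs):
    # Serialize requests so two Flask threads never run the shared
    # pipeline's CUDA work at the same time.
    with generation_lock:
        return image_generation(*args, **kwargs)
```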
Then I called this service concurrently and the following problems occurred:
```bash
[2024-11-01 07:32:12,370] INFO in app: {'prompt': '', 'promptEnglish': 'A capybara holding a sign that reads Hello Fast World', 'negative_prompt': '', 'negativePromptEnglish': None, 'style': 'natural', 'size': '1024x1024'}
[2024-11-01 07:32:12,371] INFO in app: cinematic photo ```A capybara holding a sign that reads Hello Fast World``` , photograph, film, bokeh, professional, 4k, highly detailed|drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly
4%|███▋ | https://github.com/huggingface/diffusers/issues/9833 | closed | [
"bug"
] | 2024-11-01T08:00:04Z | 2024-11-02T02:14:50Z | 1 | EvanSong77 |
huggingface/diffusers | 9,825 | Support IPAdapters for FLUX pipelines | ### Model/Pipeline/Scheduler description
IPAdapter for FLUX is available now; do you have any plans to add IPAdapter support to the FLUX pipelines?
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
model implementation:
* https://github.com/XLabs-AI/x-flux/blob/main/src/flux/xflux_pipeline.py#L55
model weights:
* https://huggingface.co/XLabs-AI/flux-ip-adapter-v2
* https://huggingface.co/XLabs-AI/flux-ip-adapter
| https://github.com/huggingface/diffusers/issues/9825 | closed | [
"help wanted",
"wip",
"contributions-welcome",
"IPAdapter"
] | 2024-10-31T23:07:32Z | 2024-12-21T17:49:59Z | 10 | chenxiao111222 |
huggingface/diffusers | 9,822 | Loading SDXL loras into Flux | ### Describe the bug
Currently it's possible to load SDXL LoRAs into Flux without any warning.
### Reproduction
Is it possible to raise a warning (and an error when a boolean flag is active) when the list of layers here is empty:
https://github.com/huggingface/diffusers/blob/41e4779d988ead99e7acd78dc8e752de88777d0f/src/diffusers/loaders/lora_pipeline.py#L1905
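A hedged sketch of the requested guard (the function name and `strict` flag are illustrative, not the actual diffusers API):
```python
import logging

logger = logging.getLogger(__name__)

def check_lora_keys(state_dict, prefix="transformer.", strict=False):
    matched = [k for k in state_dict if k.startswith(prefix)]
    if not matched:
        msg = ("No LoRA keys matched the Flux transformer; this LoRA may "
               "target a different architecture (e.g. SDXL).")
        if strict:
            raise ValueError(msg)
        logger.warning(msg)
```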
### Logs
_No response_
### System Info
ubuntu
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9822 | closed | [
"bug"
] | 2024-10-31T18:01:29Z | 2024-12-10T14:37:32Z | 8 | christopher5106 |
huggingface/datasets | 7,268 | load_from_disk | ### Describe the bug
I have data saved with save_to_disk. The data is big (700 GB). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution?
### Steps to reproduce the bug
when trying to load data using load_From_disk after being saved using save_to_disk
### Expected behavior
run out of disk space
### Environment info
lateest version | https://github.com/huggingface/datasets/issues/7268 | open | [] | 2024-10-31T11:51:56Z | 2025-07-01T08:42:17Z | 3 | ghaith-mq |
huggingface/peft | 2,188 | How to change 'modules_to_save' setting when reloading a lora finetuned model | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.19
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
@BenjaminBossan 1. I use LoRA to fine-tune Whisper and get model A. The settings are
```
config = LoraConfig(r=8, lora_alpha=16,target_modules=target_modules,modules_to_save=modules_to_save,lora_dropout=0.05, bias="none")
model = get_peft_model(model, config)
```
Then I change the source code of model A to add an additional layer. I now want to train a model with the extra layer, based on the LoRA-trained model A. I use:
```
model_lora_path = "../lora_path/" + 'checkpoint-56416'
model = PeftModel.from_pretrained(model,model_lora_path,ignore_mismatched_sizes=True).cuda()
```
But the model LoraConfig's "modules_to_save" cannot be changed, and I want to store the additional layer in 'adapter_model.safetensors'. How can I change my code?
In short, I want to add parameters to modules_to_save in LoraConfig during the reload process, based on the trained LoRA model, so that the additional layer can be stored.
I tried to use `model.peft_config['default'].modules_to_save.extend(modules_to_save)` to add the “modules_to_save” but it doesn't work.
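A hedged workaround sketch (not verified against this Whisper setup): rebuild the PeftModel from a fresh LoraConfig whose modules_to_save already includes the new layer, then load the previously trained adapter weights; `extra_layer` is a placeholder name, and `target_modules`/`modules_to_save` are assumed to exist from the earlier setup.
```python
from peft import LoraConfig, get_peft_model, set_peft_model_state_dict
from safetensors.torch import load_file

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=target_modules,
    modules_to_save=modules_to_save + ["extra_layer"],  # placeholder for the new layer
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, config)

# Load the old adapter weights; keys for the new layer are simply absent
# and keep their freshly initialized values.
old_state = load_file(model_lora_path + "/adapter_model.safetensors")
set_peft_model_state_dict(model, old_state)
```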
### Expected behavior
Change reload lora model's LoraConfig settings | https://github.com/huggingface/peft/issues/2188 | closed | [] | 2024-10-30T12:26:37Z | 2024-12-08T15:03:37Z | null | dengchengxifrank |
huggingface/huggingface.js | 996 | @huggingface/hub: how to use `modelInfo` with proper typing | THe `modelInfo` method is allowing the caller to define which field will be provided, it has been added in https://github.com/huggingface/huggingface.js/pull/946
https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L9-L11
Here is an example
```typescript
$: const info = await modelInfo({
name: "openai-community/gpt2",
});
$: console.log(info);
{
id: '621ffdc036468d709f17434d',
name: 'openai-community/gpt2',
private: false,
task: 'text-generation',
downloads: 13764131,
gated: false,
likes: 2334,
updatedAt: 2024-02-19T10:57:45.000Z
}
```
We can ask for additional fields using `additionalFields`. Here is an example:
```typescript
$: const info = await modelInfo({
name: "openai-community/gpt2",
additionalFields: ['author'],
});
$: console.log(info);
{
// ... omitted
author: 'openai-community',
}
```
However, I am not able to find the proper typing for the method call and its return type.
The return type of `modelInfo` is the following
https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L21
The `additionalFields` type is the following
https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L15
But, I am getting an error when doing the following
```typescript
const info = await modelInfo<'author'>({
name: "openai-community/gpt2",
additionalFields: ['author'],
});
```
`TS2344: Type string does not satisfy the constraint never`
I am also interested in getting the full `ApiModelInfo` object, but I am not able to use the method with the right typing :thinking:.
cc @coyotte508 :)
| https://github.com/huggingface/huggingface.js/issues/996 | closed | [] | 2024-10-30T10:41:36Z | 2024-10-30T12:02:47Z | null | axel7083 |
huggingface/diffusers | 9,802 | Multidiffusion (panorama pipeline) is missing segmentation inputs? | I'm looking at the multidiffusion panorama pipeline page (https://huggingface.co/docs/diffusers/en/api/pipelines/panorama). It looks like there is no way to specify the segmentation and associated prompts as in the original paper https://multidiffusion.github.io/ . If the code only has the panorama capability and not the region based generation using segmentation and prompts, then it should be extended to include the regional generation... If it does have region based generation then the documentation should be updated to show how to use it! | https://github.com/huggingface/diffusers/issues/9802 | open | [
"stale"
] | 2024-10-29T20:15:15Z | 2024-12-24T15:03:30Z | 5 | jloveric |
huggingface/transformers.js | 1,000 | Error while converting LLama-3.1:8b to ONNX | ### Question
Hey @xenova,
Thanks a lot for this library! I tried converting [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) to ONNX using the following command (on `main`):
```bash
python -m scripts.convert --quantize --model_id "meta-llama/Llama-3.1-8B-Instruct"
```
Using the following `requirements.txt` file (in a fresh env):
```
transformers[torch]==4.43.4
onnxruntime==1.19.2
optimum==1.21.3
onnx==1.16.2
onnxconverter-common==1.14.0
tqdm==4.66.5
onnxslim==0.1.31
--extra-index-url https://pypi.ngc.nvidia.com
onnx_graphsurgeon==0.3.27
```
But got the following error:
```
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:27<00:00, 6.99s/it]
Automatic task detection to text-generation-with-past (possible synonyms are: causal-lm-with-past).
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: LlamaForCausalLM *****
Using framework PyTorch: 2.5.0
Overriding 1 configuration item(s)
- use_cache -> True
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
/site-packages/transformers/models/llama/modeling_llama.py:1037: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if sequence_length != 1:
Traceback (most recent call last):
File "/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "scripts/convert.py", line 462, in <module>
main()
File "scripts/convert.py", line 349, in main
main_export(**export_kwargs)
File "/site-packages/optimum/exporters/onnx/__main__.py", line 365, in main_export
onnx_export_from_model(
File "/site-packages/optimum/exporters/onnx/convert.py", line 1170, in onnx_export_from_model
_, onnx_outputs = export_models(
File "/site-packages/optimum/exporters/onnx/convert.py", line 776, in export_models
export(
File "/site-packages/optimum/exporters/onnx/convert.py", line 881, in export
export_output = export_pytorch(
File "/site-packages/optimum/exporters/onnx/convert.py", line 577, in export_pytorch
onnx_export(
File "/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "/site-packages/torch/onnx/utils.py", line 663, in _optimize_graph
_C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
```
I saw this somewhat related issue #967, but the error didn't happen on the ONNX library (I think `v3` has been merged now).
Do you have a fix for larger models such as this one? I also tried with [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), but I got the same error, even though I see [here](https://huggingface.co/onnx-community/Llama-3.2-3B-Instruct) that you managed to convert it successfully.
Thanks! | https://github.com/huggingface/transformers.js/issues/1000 | open | [
"question"
] | 2024-10-29T09:40:14Z | 2024-10-29T09:40:14Z | null | charlesbvll |
huggingface/chat-ui | 1,545 | Support markdown & code blocks in text input | ## Describe your feature request
It would be nice to support code blocks in the text input bar; that would make it easier to input code. We could also support basic markdown features like bold or italic, though maybe not headings.
## Screenshots (if relevant)
Try https://claude.ai/new to see an example of how this could work
| https://github.com/huggingface/chat-ui/issues/1545 | open | [
"enhancement",
"front"
] | 2024-10-28T08:42:58Z | 2024-11-11T20:26:32Z | 2 | nsarrazin |
huggingface/peft | 2,181 | How can I do to export mode format as gguf | ### Feature request
This is a good project; I just got it today and encountered some problems.
My code:
``` python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("model")
model.save_pretrained('directory')
```
I need a GGUF file to deploy with Ollama. To export the model as GGUF, I use
```shell
!python llama.cpp/convert_hf_to_gguf.py directory
```
but it errors:
```
INFO:hf-to-gguf:Loading model: directory
Traceback (most recent call last):
File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 4436, in <module>
main()
File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 4404, in main
hparams = Model.load_hparams(dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 462, in load_hparams
with open(dir_model [/](https://file+.vscode-resource.vscode-cdn.net/) "config.json", "r", encoding="utf-8") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'directory/config.json'
```
<img width="1328" alt="image" src="https://github.com/user-attachments/assets/4d74c66e-b092-47f2-b570-b6e35767a6ce">
### Motivation
I need a GGUF file to deploy with Ollama.
Is there any other way to deploy the PEFT model?
Thank you very much.
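A hedged sketch of one common route (assuming the adapter merges cleanly): merge the LoRA into the base weights first, so the export directory contains a full model with its config.json, and only then run the llama.cpp converter on that directory.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("model")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("merged_model")  # writes config.json alongside the weights
AutoTokenizer.from_pretrained("Qwen2-0.5B").save_pretrained("merged_model")
# then: python llama.cpp/convert_hf_to_gguf.py merged_model
```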
### Your contribution
I simply reproduced it on top | https://github.com/huggingface/peft/issues/2181 | closed | [] | 2024-10-26T13:51:45Z | 2024-10-26T13:59:18Z | null | xu756 |
huggingface/diffusers | 9,772 | Support ControlNetPlus Union if not already supported | It's not clear whether ControlNetPlus (https://github.com/xinsir6/ControlNetPlus/tree/main/pipeline), which consists of a union ControlNet for SDXL, is already supported by diffusers. This seems to be the only SDXL segmentation ControlNet that I'm aware of. If it's not already supported, it should be!
https://github.com/xinsir6/ControlNetPlus/tree/main
| https://github.com/huggingface/diffusers/issues/9772 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-10-25T17:43:43Z | 2024-12-11T17:07:54Z | 5 | jloveric |
huggingface/transformers.js | 994 | Will these mistakes have an impact? | ### Question
After `AutoProcessor.from_pretrained` finishes loading, the following messages appear:
````
ort-wasm-simd-thread…jsep.wasm:0x10367e0 2024-10-25 20:11:31.705399 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
ort-wasm-simd-thread…jsep.wasm:0x10367e0 2024-10-25 20:11:31.706300 [W:onnxruntime:, session_state.cc:1170 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
```` | https://github.com/huggingface/transformers.js/issues/994 | open | [
"question"
] | 2024-10-25T12:17:03Z | 2024-11-12T11:10:11Z | null | aidscooler |
huggingface/transformers.js | 993 | How do I know the loading progress when loading .onnx file? | ### Question
Because the .onnx file is large (about 170 MB), I decided to provide a loading progress indicator. Code below:
```` typescript
const modelSettings = {
// Do not require config.json to be present in the repository
config: { model_type: "custom" },
subfolder: "",
process_callback: (progress) => {
modelLoadingProgress.value = Math.round(progress * 100);
console.log("model : " + progress)
}
};
modelSettings.device = "webgpu";
modelSettings.dtype = "fp32";
model = await AutoModel.from_pretrained('briaai/RMBG-1.4', modelSettings);
````
I found that process_callback is never called. Can anyone help?
"question"
] | 2024-10-25T05:52:12Z | 2024-10-25T17:54:30Z | null | aidscooler |
huggingface/finetrainers | 70 | How to set the resolutions when finetuning I2V model? | I want to train a video diffusion with lower resolutions. I set the height_buckets=256 and width_buckets=256 in prepare_dataset.sh and process the data. But I run into the following error while run the train_image_to_video_lora.sh script.
ValueError: It is currently not possible to generate videos at a different resolution that the defaults. This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues.
How to set the hyperparameters to train with different resolutions? | https://github.com/huggingface/finetrainers/issues/70 | closed | [] | 2024-10-25T05:36:19Z | 2024-11-11T18:27:29Z | null | TousakaNagio |
huggingface/optimum | 2,080 | "ValueError: Trying to export a codesage model" while trying to export codesage/codesage-large | ### System Info
```shell
optimum 1.23.2
MacOS 14.7
Python 3.9
```
### Who can help?
@michaelbenayoun
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
This is a PyTorch embedding model released by AWS, as described here: https://www.linkedin.com/posts/changsha-ma-9ba7a485_yes-code-needs-its-own-embedding-models-activity-7163196644258226176-bFSW
Hoping I can use it with RAG under ollama for code understanding.
```
huggingface-cli download codesage/codesage-large
optimum-cli export onnx --model codesage/codesage-large codesage-large-onnx --task default --trust-remote-code
```
The error: "ValueError: Trying to export a codesage model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type codesage to be supported natively in the ONNX export."
I am grateful for any help you can provide!
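A hedged sketch following the custom-export guide linked in the error (treating codesage as a BERT-style encoder is an assumption on my part):
```python
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import BertOnnxConfig
from transformers import AutoConfig

config = AutoConfig.from_pretrained("codesage/codesage-large", trust_remote_code=True)
onnx_config = BertOnnxConfig(config, task="feature-extraction")

main_export(
    "codesage/codesage-large",
    output="codesage-large-onnx",
    task="feature-extraction",
    trust_remote_code=True,
    custom_onnx_configs={"model": onnx_config},
)
```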
### Expected behavior
An exported ONNX file. | https://github.com/huggingface/optimum/issues/2080 | open | [
"bug"
] | 2024-10-25T05:27:22Z | 2024-10-25T05:27:22Z | 0 | TurboEncabulator9000 |
huggingface/chat-ui | 1,543 | RFC enable multimodal and tool usage at once for OAI endpoints ? | https://github.com/huggingface/chat-ui/blob/8ed1691ecff94e07d10dfb2874d3936d293f4842/src/lib/server/endpoints/openai/endpointOai.ts#L191C53-L191C65
I just played around with combining both of these.
What do you think about enabling tool calling only if no image is in the conversation?
Otherwise we need to register models twice: once for multimodal and once for tool usage.
A quick solution could be just checking whether image_url is part of one of the messages and, if it is, skipping the tools check.
I struggled with this because the upload-file button was there but didn't do anything with the uploaded image until I checked the code.
@nsarrazin wdyt ? | https://github.com/huggingface/chat-ui/issues/1543 | open | [] | 2024-10-24T17:37:50Z | 2024-10-24T17:39:14Z | 0 | flozi00 |
huggingface/transformers.js | 991 | Loading models from "non-URL" locations in the browser | ### Question
Hi! I have an application where the model files will be pre-loaded in a custom format into the browser's IndexedDB. Based on my understanding, Transformers.js currently only supports loading models by URL and then caches them in the browser cache. Getting the model files from IndexedDB instead seems a little tricky, as it would require "copying" a lot of the loading logic.
Other ideas were to use a ServiceWorker to intercept the model download and mock the response with the files from IndexedDB, or to write the files directly into the browser cache that Transformers.js uses.
Both solutions seem hacky... So, before I embark on writing my own loading logic, I wanted to ask, if you have any ideas or suggestions on how to approach this?
Thanks in advance! | https://github.com/huggingface/transformers.js/issues/991 | open | [
"question"
] | 2024-10-24T12:18:19Z | 2024-12-04T19:30:07Z | null | AKuederle |
huggingface/finetrainers | 68 | How to set the hyperparameters when finetuning I2V model with LoRA? | File "/home/shinji106/ntu/cogvideox-factory/training/dataset.py", line 411, in __iter__
self.buckets[(f, h, w)].append(data)
KeyError: (16, 320, 720)
The resolution is (13, 320, 480), so the key of self.buckets does not match the input.
How do I set the hyperparameters when running prepare_dataset.sh and train_image_to_video_lora.sh so that the keys match? | https://github.com/huggingface/finetrainers/issues/68 | closed | [] | 2024-10-24T08:06:33Z | 2025-01-10T23:40:06Z | null | TousakaNagio |
huggingface/datasets | 7,249 | How to debugging | ### Describe the bug
I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder classes (the latter contains the `_info`, `_split_generators`, and `_generate_examples` methods). Testing with simple data produced the expected processing results, but when I wanted to do more complex processing, I found I was unable to debug (even the simple samples were inaccessible). No errors are reported, and the `_info`, `_split_generators`, and `_generate_examples` messages are printed, but my breakpoints are never hit.
### Steps to reproduce the bug
```python
# my_dataset.py
import json
import datasets

class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(MyDatasetConfig, self).__init__(**kwargs)

class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        MyDatasetConfig(
            name="default",
            version=VERSION,
            description="myDATASET"
        ),
    ]

    def _info(self):
        print("info")  # breakpoints
        return datasets.DatasetInfo(
            description="myDATASET",
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["negative", "positive"]),
                }
            ),
            supervised_keys=("text", "label"),
        )

    def _split_generators(self, dl_manager):
        print("generate")  # breakpoints
        data_file = "data.json"
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file}
            ),
        ]

    def _generate_examples(self, filepath):
        print("example")  # breakpoints
        with open(filepath, encoding="utf-8") as f:
            data = json.load(f)
        for idx, sample in enumerate(data):
            yield idx, {
                "id": sample["id"],
                "text": sample["text"],
                "label": sample["label"],
            }
```

```python
# main.py
import os
os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1"
from datasets import load_dataset

dataset = load_dataset("my_dataset.py", split="train", cache_dir=None)
print(dataset[:5])
```
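A hedged note on why IDE breakpoints may never bind here: `load_dataset` copies the loader script into the datasets modules cache and imports that copy, so breakpoints set on the original file attach to a module that never runs. Calling the built-in `breakpoint()` inside the script sidesteps this:
```python
def _generate_examples(self, filepath):
    breakpoint()  # drops into pdb even though datasets runs a cached copy of this file
    ...
```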
### Expected behavior
Execution should pause at breakpoints while debugging.
### Environment info
pycharm
| https://github.com/huggingface/datasets/issues/7249 | open | [] | 2024-10-24T01:03:51Z | 2024-10-24T01:03:51Z | null | ShDdu |
huggingface/sentence-transformers | 3,015 | How to customize the dataloader? e.g. Custom Data Augmentation | Hi,
I've always been used to the old `.fit` behaviour, where I could pass in my own DataLoader, implementing the Dataset myself according to my needs.
With the new trainer interface, how am I supposed to tweak the dataloader?
Let's say I want to apply some random transformations to the input text, how can I do it right now? Of course, changing the original dataset, augmenting it statically, is a no-go.
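A hedged sketch (assuming a `datasets.Dataset` input, that `model`/`loss` already exist, and that a lazy transform is compatible with the trainer's column handling): `set_transform` runs at access time, so augmentation is re-sampled on every epoch; `randomly_perturb` is a placeholder helper.
```python
from sentence_transformers import SentenceTransformerTrainer

def augment(batch):
    # Applied lazily on every access, so each epoch sees fresh augmentations.
    batch["sentence"] = [randomly_perturb(s) for s in batch["sentence"]]  # placeholder helper
    return batch

train_dataset.set_transform(augment)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
```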
Thanks! | https://github.com/huggingface/sentence-transformers/issues/3015 | open | [] | 2024-10-23T17:11:13Z | 2024-11-15T10:32:35Z | null | msciancalepore98 |
huggingface/diffusers | 9,756 | Could not find loading_adapters.ipynb | ### Describe the bug
while reading doc [Load adapters](https://huggingface.co/docs/diffusers/using-diffusers/loading_adapters)
I tried to open in Colab to run an example on this page.
<img width="504" alt="open_colab" src="https://github.com/user-attachments/assets/0b1397f1-d266-4d83-84ab-276ea796a2a4">
It will get Notebook not found on a new page.
It can't find loading_adapters.ipynb in [huggingface/notebooks](https://github.com/huggingface/notebooks)
### Reproduction
I follow the doc and write down a Google Colab [Google Colab loading_adapters](https://colab.research.google.com/drive/1pYpvsOf6U9CAZfughY1aUltUQTFsw4OI)
Can I contribute a PR for this?
Do you know how I should do this?
Commit to the notebooks repo?
Or something different?
### Logs
_No response_
### System Info
Google Colab
### Who can help?
@stevhliu @sayakpaul | https://github.com/huggingface/diffusers/issues/9756 | closed | [
"bug"
] | 2024-10-23T13:03:11Z | 2024-11-01T15:27:56Z | 6 | thliang01 |
huggingface/accelerate | 3,190 | How to save the optimizer state while enabling Deepspeed to save the model | ### System Info
```Shell
Unrelated to configuration
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
```
unwrapped_model = accelerator.unwrap_model(transformer)
unwrapped_model.save_pretrained(
    save_directory,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(transformer),
)
```
I am using DeepSpeed ZeRO-2.
I want to save the model state and optimizer state, but the current `save_pretrained()` only supports saving the model state. How can I save the optimizer state?
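A hedged sketch (assuming full-state checkpoints are acceptable instead of `save_pretrained`): `Accelerator.save_state` writes model, optimizer, scheduler, and RNG state in one call, and under DeepSpeed it delegates to DeepSpeed's own checkpoint format.
```python
# Save everything prepared by the accelerator (model, optimizer, scheduler, RNG).
accelerator.save_state("checkpoints/step_1000")

# Later, restore in-place before resuming training.
accelerator.load_state("checkpoints/step_1000")
```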
### Expected behavior
I would like to know if it supports saving optimizer state and how to use it.
THANKS! | https://github.com/huggingface/accelerate/issues/3190 | closed | [] | 2024-10-23T11:58:08Z | 2024-11-01T02:53:38Z | null | ITerydh |
huggingface/diffusers | 9,750 | Is it possible to provide img2img code for CogView3? | Is it possible to provide img2img code for CogView3? | https://github.com/huggingface/diffusers/issues/9750 | open | [
"stale",
"contributions-welcome"
] | 2024-10-23T07:40:38Z | 2024-12-20T15:04:01Z | 3 | ChalvYongkang |
huggingface/optimum | 2,076 | Problem converting tinyllama to onnx model with optimum-cli | ### System Info
```shell
main branch newest
local pip install
```
### Who can help?
@michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
optimum-cli export onnx --model /home/wangzhiqun/TinyLlama-1.1B-Chat-v1.0 --task text-generation --batch_size 1 --sequence_length 128 tinyllama_onnx_file
### Expected behavior
To specify the batch_size and sequence_length, I use "optimum-cli export onnx --model /home/wangzhiqun/TinyLlama-1.1B-Chat-v1.0 --task text-generation --batch_size 1 --sequence_length 128 tinyllama_onnx_file". But the exported ONNX model still has the dynamic shape [batch_size, sequence_length]. How can I fix the dimensions? | https://github.com/huggingface/optimum/issues/2076 | open | [
"bug"
] | 2024-10-22T06:23:51Z | 2024-10-22T06:36:42Z | 0 | hayyaw |
huggingface/diffusers | 9,731 | How to use Playground2.5 to train lora with own dataset to generate pictures of a specific style? | ### Describe the bug
Hi,
I have been working on training models using the same dataset as "stabilityai/stable-diffusion-xl-base-1.0" with the script examples/text_to_image/train_text_to_image_lora_sdxl.py, and I achieved quite promising results.
Now, I am trying to further improve the performance by switching to DreamBooth. I am currently using playground2.5 with examples/dreambooth/train_dreambooth_lora_sdxl.py. However, after multiple parameter-tuning attempts, the performance is still not as good as with the SDXL base model.
I am unsure what might be causing this.
### Reproduction

### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17
- Running on Google Colab?: No
- Python version: 3.8.20
- PyTorch version (GPU?): 2.2.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.25.2
- Transformers version: 4.45.2
- Accelerate version: 1.0.1
- PEFT version: 0.13.2
- Bitsandbytes version: 0.44.1
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA H800, 81559 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9731 | open | [
"bug",
"stale"
] | 2024-10-21T12:10:12Z | 2024-11-20T15:03:04Z | null | hjw-0909 |
huggingface/diffusers | 9,727 | FLUX.1-dev dreambooth save problem trained on multigpu | ### Describe the bug
I tried to train Flux using Accelerate and DeepSpeed, but when using two L40s, the model could not be saved properly. What is the problem?
### Reproduction
train.sh:
```bash
accelerate launch --config_file config.yaml train_flux.py \
  --pretrained_model_name_or_path="./FLUX.1-dev" \
  --resolution=1024 \
  --train_batch_size=1 \
  --output_dir="output1" \
  --num_train_epochs=10 \
  --checkpointing_steps=5 \
  --validation_steps=500 \
  --max_train_steps=40001 \
  --learning_rate=4e-05 \
  --seed=12345 \
  --mixed_precision="fp16" \
  --revision="fp16" \
  --use_8bit_adam \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --lr_scheduler="constant_with_warmup" --lr_warmup_steps=2500
```
config.yaml:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
gpu_ids: 0,1
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Logs
```shell
Using /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.00030350685119628906 seconds
10/21/2024 02:58:18 - INFO - __main__ - ***** Running training *****
10/21/2024 02:58:18 - INFO - __main__ - Num examples = 2109730
10/21/2024 02:58:18 - INFO - __main__ - Num batches each epoch = 1054865
10/21/2024 02:58:18 - INFO - __main__ - Num Epochs = 1
10/21/2024 02:58:18 - INFO - __main__ - Instantaneous batch size per device = 1
10/21/2024 02:58:18 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
10/21/2024 02:58:18 - INFO - __main__ - Gradient Accumulation steps = 1
10/21/2024 02:58:18 - INFO - __main__ - Total optimization steps = 40001
Steps: 0%| | 0/40001 [00:00<?, ?it/s]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor
Using /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0007116794586181641 seconds
[2024-10-21 02:58:29,496] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648
Steps: 0%| | 1/40001 [00:11<127:38:44, 11.49s/it, loss=0.544, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor
[2024-10-21 02:58:36,774] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2147483648, reducing to 1073741824
Steps: 0%| | 2/40001 [00:18<100:07:40, 9.01s/it, loss=0.36, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor
[2024-10-21 02:58:44,052] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1073741824, reducing to 536870912
Steps: 0%| | 3/40001 [00:26<91:19:39, 8.22s/it, loss=0.543, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor
[2024-10-21 02:58:51,324] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 536870912, reducing to 268435456
Steps: 0%| | 4/40001 [00:33<87:10:01, 7.85s/it, loss=1.14, lr=0]Passing `txt_ids` 3d torch.Tensor is deprecated.Please remove the batch dimension and pass it as a 2d torch Tensor
[2024-10-21 02:58:58,612] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 268435456, reducing to 134217728
Steps: 0%| | https://github.com/huggingface/diffusers/issues/9727 | closed | [
"bug"
] | 2024-10-21T03:37:23Z | 2024-10-29T06:38:00Z | 1 | jyy-1998 |
huggingface/diffusers | 9,726 | FLUX.1-dev dreambooth problem trained on multigpu | ### Describe the bug
I tried to use Accelerate and DeepSpeed to train Flux; it worked fine when using two L40s, but an error occurred when using two A100s. What is the reason?
### Reproduction
train.sh:
```bash
accelerate launch --config_file config.yaml train_flux.py \
  --pretrained_model_name_or_path="./FLUX.1-dev" \
  --resolution=1024 \
  --train_batch_size=1 \
  --output_dir="output0" \
  --num_train_epochs=10 \
  --checkpointing_steps=5 \
  --validation_steps=500 \
  --max_train_steps=40001 \
  --learning_rate=4e-05 \
  --seed=12345 \
  --mixed_precision="fp16" \
  --revision="fp16" \
  --use_8bit_adam \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --lr_scheduler="constant_with_warmup" --lr_warmup_steps=2500 \
  --mask_accept_threshold=0.6 \
  --empty_prompt_prob=0.1 \
  --dilate_factor=4 \
  --crop_img \
  --mask_cover_percent=0.0 \
  --mask_cover_percent_person=0.5
```
config.yaml:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
gpu_ids: 0,1
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Logs
```shell
Installed CUDA version 11.8 does not match the version torch was compiled with 11.7 but since the APIs are compatible, accepting this combination
Installed CUDA version 11.8 does not match the version torch was compiled with 11.7 but since the APIs are compatible, accepting this combination
Using /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root...
[1/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -c /opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/common/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o
[2/3] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -L/usr/local/cuda/lib64 -lcudart -lcublas -g -march=native -fopenmp -D__AVX512__ -D__ENABLE_CUDA__ -DBF16_AVAILABLE -c /opt/conda/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o
[3/3] c++ cpu_adam.o custom_cuda_kernel.cuda.o -shared -lcurand -L/opt/conda/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o cpu_adam.so
Loading extension module cpu_adam...
Time to load cpu_adam op: 27.327727794647217 seconds
Loading extension module cpu_adam...
Time to load cpu_adam op: 21.32274580001831 seconds
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.001000, betas=(0.900000, 0.999000), weight_decay=0.000100, adam_w=1
[2024-10-21 03:05:17,566] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.3, git-hash=unknown, git-branch=unknown
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.001000, betas=(0.900000, 0.999000), weight_decay=0.000100, adam_w=1
10/21/2024 03:06:08 - INFO - torch.distributed.dis | https://github.com/huggingface/diffusers/issues/9726 | closed | [
"bug"
] | 2024-10-21T03:20:44Z | 2024-10-21T03:32:42Z | 0 | jyy-1998 |
huggingface/tokenizers | 1,661 | How to Read Information in Large Tokenizer's Vocabulary | TLDR; This is how the byte-level BPE works. Main advantages are:
- Smaller vocabularies
- No unknown token
This is totally expected behavior. The byte-level BPE converts all the Unicode code points into multiple byte-level characters:
1. Each Unicode code point is decomposed into bytes (1 byte for ASCII characters, and up to 4 bytes for UTF-8 Unicode code points)
2. Each byte value gets a "visible" character assigned to it from the beginning of the Unicode table. This is especially important because there are a lot of control characters, so we can't just have a simple mapping ASCII Table character <-> byte value. So some characters get other representations, like for example the white space `U+0020` becomes `Ġ`.
The purpose is, by doing so, you end up with an initial alphabet of 256 tokens. These 256 tokens can then be merged together to represent any other token in the vocabulary. This results in smaller vocabularies, that won't ever need an "unknown" token.
_Originally posted by @n1t0 in https://github.com/huggingface/tokenizers/issues/203#issuecomment-605105611_
@n1t0
Thank you for your previous responses. I have been working with the large tokenizer of an LLM, and I've noticed that the vocabulary contains many entries that look like these unreadable codes.
I wonder if there are any methods or tools available to help me read and interpret the information in the tokenizer's vocabulary. For example, is there a way to map these tokens back to their original words or phrases, or any other approach to make the vocabulary more interpretable?
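A hedged sketch of the usual decoding trick for this: invert the published GPT-2 byte-to-unicode table so the "unreadable" byte-level tokens map back to UTF-8 text.
```python
def bytes_to_unicode():
    # Printable bytes keep their own code point; the rest are shifted to 256+
    # so every byte value has a visible character.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

byte_decoder = {c: b for b, c in bytes_to_unicode().items()}

def readable(token: str) -> str:
    return bytes(byte_decoder[c] for c in token).decode("utf-8", errors="replace")

print(readable("Ġhello"))  # ' hello'
```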
| https://github.com/huggingface/tokenizers/issues/1661 | closed | [] | 2024-10-20T13:38:53Z | 2024-10-21T07:29:43Z | null | kaizhuanren |
huggingface/diffusers | 9,719 | `disable_progress_bar` is ignored for some models (Loading checkpoint shards) | ### Describe the bug
When loading some pipelines, `diffusers.utils.logging.disable_progress_bar()` doesn't disable all progress bars. In particular the "Loading checkpoint shards" progress bar still appears. The "Loading pipeline components..." progress bar, however, is disabled as expected. Models I found, where this occurs, are:
* [`stabilityai/stable-diffusion-3-medium-diffusers`](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers)
* [`black-forest-labs/FLUX.1-schnell`](https://huggingface.co/black-forest-labs/FLUX.1-schnell)
The image generation progress bar also doesn't respect this setting, but can be disabled with `pipe.set_progress_bar_config(disable=True)`. When files are downloaded, the progress bars are also not disabled. These two cases seem like they might be intentional. Are they?
Is there a better way to disable progress bars globally for diffusers? Can the "Loading checkpoint shards" progress bar be disabled specifically?
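A hedged workaround: the shard bar appears to come from the `transformers` side of the pipeline (the sharded text encoders), so disabling that library's progress bars as well seems to cover it.
```python
import diffusers
import transformers

diffusers.utils.logging.disable_progress_bar()
transformers.utils.logging.disable_progress_bar()  # hides "Loading checkpoint shards"
```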
### Reproduction
```python
import diffusers
diffusers.utils.logging.disable_progress_bar()
# pipe = diffusers.StableDiffusion3Pipeline.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers')
pipe = diffusers.FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-schnell')
pipe('test')
```
### Logs
```shell
>>> pipe = diffusers.FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-schnell')
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.56s/it]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
>>>
```
### System Info
Google Colab
or locally:
- 🤗 Diffusers version: 0.30.3
- Running on Google Colab?: No
- Python version: 3.12.7
- PyTorch version (GPU?): 2.5.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.0
- Transformers version: 4.45.2
- Accelerate version: 1.0.1
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
### Who can help?
@sayakpaul @DN6 | https://github.com/huggingface/diffusers/issues/9719 | closed | [
"bug"
] | 2024-10-19T17:42:37Z | 2024-10-19T19:29:12Z | 2 | JonasLoos |
huggingface/optimum | 2,069 | High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release | ### System Info
```shell
Optimum version: 1.22.0
Platform: Linux (Ubuntu 22.04.4 LTS)
Python version: 3.12.2
ONNX Runtime Version: 1.19.2
CUDA Version: 12.1
CUDA Execution Provider: Yes (CUDA 12.1)
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
```python
def load_model(self, model_name):
    session_options = ort.SessionOptions()
    session_options.add_session_config_entry('cudnn_conv_use_max_workspace', '0')
    session_options.enable_mem_pattern = False
    session_options.arena_extend_strategy = "kSameAsRequested"
    session_options.gpu_mem_limit = 10 * 1024 * 1024 * 1024
    model = ORTModelForSeq2SeqLM.from_pretrained(model_name, provider="CUDAExecutionProvider", session_options=session_options)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    return tokenizer, model

def inference(self, batch, doc_id='-1'):
    responses, status = '', False
    try:
        encodings = self.tokenizer(batch, padding=True, truncation=True, max_length=8192, return_tensors="pt").to(self.device)
        with torch.no_grad():
            generated_ids = self.model.generate(
                encodings.input_ids,
                max_new_tokens=1024
            )
        responses = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
        status = True
    except Exception as e:
        logger.error(f"Failed to do inference on LLM, error: {e}")
    torch.cuda.empty_cache()
    return status, responses
```
### Expected behavior
I expect the CUDA memory to decrease and be released after processing smaller inputs, optimizing memory usage for subsequent inputs.
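A hedged aside on the reproduction above: `gpu_mem_limit` and `arena_extend_strategy` are CUDA *provider* options rather than `SessionOptions` attributes, so as written they may be silently ignored. A sketch of passing them as provider options instead:
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_name = "your-seq2seq-model"  # placeholder
provider_options = {
    "gpu_mem_limit": 10 * 1024 * 1024 * 1024,      # 10 GiB arena cap
    "arena_extend_strategy": "kSameAsRequested",
}
model = ORTModelForSeq2SeqLM.from_pretrained(
    model_name,
    provider="CUDAExecutionProvider",
    provider_options=provider_options,
)
```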

| https://github.com/huggingface/optimum/issues/2069 | closed | [
"question",
"Stale"
] | 2024-10-19T02:45:54Z | 2024-12-25T02:02:08Z | null | niyathimariya |
huggingface/transformers.js | 981 | Any gotcha's with manually adding items to transformers-cache? | ### Question
For [papeg.ai](https://www.papeg.ai) I've implemented that the service worker caches `.wasm` files from `jsDelivir` that Transformers.js [wasn't caching itself yet](https://github.com/huggingface/transformers.js/issues/685#issuecomment-2325125036).
I've been caching those files in the 'main' Papeg.ai cache until now, but I want to switch to saving those files in the `transformers-cache` instead. That would (hopefully) make it so that the .wasm files don't have to be downloaded again if I update papeg.ai (which clears the papeg.ai cache). And vice-versa: the transformers cache could be fully cleared independently of the papeg.ai cache (ideally Transformers.js would manage all this itself).
- Is this a reasonable idea?
- Is this in line with your plans for a future improved caching system? Or do you, for example, plan to keep wasm, onnx and config files in separate caches, like WebLLM?
- Will Transformers.js even look for those .wasm files in `transformers-cache` first? With the service worker this doesn't technically matter, as requests to jsDelivir are captured anyway. But the service worker isn't always available.
Tangentially, would it be an idea to (also) store the code and wasm files on Huggingface itself? Because of EU privacy regulations, and good privacy design in general, I'd like to keep third parties that the site needs to connect to to an absolute minimum. I'd love to eliminate jsDelivir, and only rely on Github and HuggingFace. Or is there perhaps a way to tell Transformers.js where to look? Then I could host the files on Github/HuggingFace manually.
Just for fun, here's a service worker code snippet that, from now on, stores the jsDelivir files in the transformers-cache:
```
let target_cache = cacheName;
if (request.url.indexOf('https://cdn.jsdelivr.net/npm/@huggingface/transformers') != -1) {
    console.log("service_worker: saving to transformers-cache: ", request.url);
    target_cache = 'transformers-cache';
}
caches.open(target_cache)
    .then(function(cache) {
        cache.put(request, fetch_response_clone);
    })
    .catch((err) => {
        console.error("service worker: caught error adding to cache: ", err);
    })
```
| https://github.com/huggingface/transformers.js/issues/981 | open | [
"question"
] | 2024-10-18T12:53:07Z | 2024-10-18T12:56:21Z | null | flatsiedatsie |
huggingface/transformers | 34,241 | How to output token by token use transformers? | ### System Info
...
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
...
### Expected behavior
How can I output tokens one by one using transformers? | https://github.com/huggingface/transformers/issues/34241 | closed | [
"Discussion",
"bug"
] | 2024-10-18T09:45:19Z | 2024-11-26T08:04:43Z | null | xuanzhangyang |
huggingface/lerobot | 477 | Collecting human operated datasets in simulation | Hello,
Can you provide info on how human supervision was provided for the simulated datasets (e.g. `lerobot/aloha_sim_transfer_cube_human`)? I am starting to setup a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect/train on some human teleop data, but it seems like the current `control_robot.py` script and data collection examples are setup only for physical robots. Is there a branch somewhere with the code used to collect `lerobot/aloha_sim_transfer_cube_human` that I can reference?
Thanks! | https://github.com/huggingface/lerobot/issues/477 | closed | [
"question",
"dataset",
"simulation"
] | 2024-10-17T23:24:17Z | 2025-10-08T08:49:32Z | null | mmurray |
huggingface/lighteval | 365 | [FT] Using lighteval to evaluate a model on a single sample, how? | Thank you the team for the great work. I have a question. Can you please help me to use lighteval to evaluate a model on a single sample?
For example, if I have an input from mmlu I, my model generates output O, how can I use lighteval to evaluate O with using the Acc metric?
Thanks! | https://github.com/huggingface/lighteval/issues/365 | closed | [
"feature"
] | 2024-10-17T12:43:45Z | 2024-10-24T10:12:54Z | null | dxlong2000 |
huggingface/diffusers | 9,700 | Flux inversion | The current img2img is not very good. [RF Inversion](https://rf-inversion.github.io/) provides an inversion method for real-image editing with Flux; can we implement it using diffusers?
Or how can we use DDIM inversion in Flux? | https://github.com/huggingface/diffusers/issues/9700 | closed | [] | 2024-10-17T07:03:59Z | 2024-12-17T16:00:30Z | 8 | yuxu915 |
huggingface/diffusers | 9,698 | Unable to Retrieve Intermediate Gradients with CogVideoXPipeline | ### Describe the bug
When generating videos using the CogVideoXPipeline model, we need to access the gradients of intermediate tensors. However, we do not require additional training or parameter updates for the model.
We tried using register_forward_hook to capture the gradients, but this approach failed because the CogVideoXPipeline disables gradient calculations. Specifically, in pipelines/cogvideo/pipeline_cogvideox.py at line 478, gradient tracking is turned off with @torch.no_grad().
How can we resolve this issue and retrieve the gradients without modifying the model’s parameters or performing extra training?
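A hedged sketch of one possible workaround (the `__wrapped__` attribute is an assumption about how `torch.no_grad` wraps the function, not a documented pipeline API): call the undecorated `__call__` under `torch.enable_grad()` so forward hooks see gradient-tracking tensors.
```python
import torch

# Assumption: torch's no_grad decorator preserves the original function via
# functools.wraps; otherwise, copy the pipeline's __call__ body without the decorator.
undecorated_call = type(pipe).__call__.__wrapped__

with torch.enable_grad():
    frames = undecorated_call(pipe, prompt=prompt, num_inference_steps=50).frames
```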
### Reproduction
Sample code:
```python
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)
video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
```
Pipeline code reference (pipelines/cogvideo/pipeline_cogvideox.py at line 478):
```python
@torch.no_grad()
@replace_example_docstring(EXAMPLE_DOC_STRING)
def __call__(
    self,
    prompt: Optional[Union[str, List[str]]] = None,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    height: int = 480,
    width: int = 720,
```
### Logs
_No response_
### System Info
Diffusers version: 0.30.3
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9698 | closed | [
"bug"
] | 2024-10-17T04:30:56Z | 2024-10-27T10:24:41Z | 4 | lovelyczli |
huggingface/diffusers | 9,697 | train_text_to_image_sdxl training effect is very poor | I use DeepSpeed for training: train_text_to_image_sdxl.py
1. The dataset contains 231 images.
2. DeepSpeed JSON:

3. Training script:

4. After training, using the training prompts again, the generated results are as follows:

Could anyone tell me what causes this poor generation quality?
| https://github.com/huggingface/diffusers/issues/9697 | closed | [] | 2024-10-17T03:40:17Z | 2024-10-17T08:32:44Z | 2 | wzhiyuan2016 |
huggingface/finetrainers | 41 | cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value | During both I2V and T2V training, I sometimes encountered the error
```
[rank1]: File "/root/projects/cogvideox-factory/training/cogvideox_text_to_video_lora.py", line 762, in main
[rank1]: "gradient_norm_before_clip": gradient_norm_before_clip,
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: UnboundLocalError: cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value
```
This is probably [here](https://github.com/a-r-r-o-w/cogvideox-factory/blob/a6c246c29d11d78e4aa3fb4b137c5ffd8d719d94/training/cogvideox_text_to_video_lora.py#L715) in the following code
```
if accelerator.sync_gradients:
    gradient_norm_before_clip = get_gradient_norm(transformer.parameters())
    accelerator.clip_grad_norm_(transformer.parameters(), args.max_grad_norm)
    gradient_norm_after_clip = get_gradient_norm(transformer.parameters())
```
Somehow `accelerator.sync_gradients` is sometimes false.
Is there a quick fix? Is it only for logging?
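A hedged quick fix (the norms look like they are only used for logging): default the variables and only log them on steps where they were actually computed.
```python
gradient_norm_before_clip = gradient_norm_after_clip = None

if accelerator.sync_gradients:
    gradient_norm_before_clip = get_gradient_norm(transformer.parameters())
    accelerator.clip_grad_norm_(transformer.parameters(), args.max_grad_norm)
    gradient_norm_after_clip = get_gradient_norm(transformer.parameters())

logs = {}
if gradient_norm_before_clip is not None:
    logs["gradient_norm_before_clip"] = gradient_norm_before_clip
    logs["gradient_norm_after_clip"] = gradient_norm_after_clip
```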
| https://github.com/huggingface/finetrainers/issues/41 | closed | [] | 2024-10-16T18:34:19Z | 2024-12-06T08:09:46Z | null | Yuancheng-Xu |
huggingface/finetrainers | 40 | How to load the fine-tuned I2V model's LoRA module | I have successfully fine-tuned an I2V model (locally, without pushing to HF) and would like to load it for inference. I use the following code suggested in the readme
```
model_name = "THUDM/CogVideoX-5b-I2V"
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
model_name, torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("MyLocalLoRAPath", adapter_name=["cogvideox-lora"])
pipe.set_adapters(["cogvideox-lora"], [1.0])
```
However, I encounter the following error:
```
File ~/anaconda3/envs/cogvideox-i2v/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py:2451, in CogVideoXLoraLoaderMixin.load_lora_into_transformer(cls, state_dict, transformer, adapter_name, _pipeline):
    if adapter_name in getattr(transformer, "peft_config", {}):
        raise ValueError(
            f"Adapter name {adapter_name} already in use in the transformer - please select a new adapter name."
        )

TypeError: unhashable type: 'list'
```
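The `TypeError: unhashable type: 'list'` comes from `adapter_name` being passed as a list where a plain string is expected. A likely fix (a sketch, not confirmed by the maintainers; it assumes the local folder contains `pytorch_lora_weights.safetensors`, as noted below):

```python
# Hypothetical sketch: adapter_name must be a single string; only
# set_adapters takes lists of adapter names and weights.
pipe.load_lora_weights(
    "MyLocalLoRAPath",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cogvideox-lora",
)
pipe.set_adapters(["cogvideox-lora"], [1.0])
```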
Note: in the trained LoRA folder, there is only a `pytorch_lora_weights.safetensors` file. | https://github.com/huggingface/finetrainers/issues/40 | closed | [] | 2024-10-16T17:25:21Z | 2024-12-03T03:01:23Z | null | Yuancheng-Xu |
huggingface/transformers.js | 975 | Supporting Multiple Pipelines? | ### Question
First of all, thank you so much for creating transformers.js! This is a fantastic library, and I had lots of fun building with it!
I have a question regarding the pipelines API: would it be possible to start multiple pipelines? For example, instead of using just one pipeline to run inference, can we create a pool of pipelines and push jobs into it, potentially making better use of the multiple cores on modern laptops?
The goal here is really to understand whether there are ways to utilize multiple cores. No worries if not! I just want to understand where the limits are.
Thanks! | https://github.com/huggingface/transformers.js/issues/975 | closed | [
"question"
] | 2024-10-16T08:06:44Z | 2024-10-21T15:58:20Z | null | kelayamatoz |
huggingface/chat-ui | 1,525 | Standardize Chat Prompt Templates to Use Jinja Format | ## Describe your feature request
Currently, the `chatPromptTemplate` for each model that can be set in env uses **Handlebars** format. However, the `chat_prompt` in the actual model's `tokenizer_config.json` uses **Jinja** format. This inconsistency is causing significant inconvenience. Since **Jinja** is widely used and preferred, it would be beneficial to standardize on **Jinja** format for both `chatPromptTemplate` and `chat_prompt`. This will improve consistency and ease of use for developers.
## Screenshots (if relevant)
## Implementation idea
To implement this change, the following steps can be taken:
1. Update Codebase: Update the codebase to handle **Jinja** templates for `chatPromptTemplate`.
2. Documentation: Update the documentation to reflect this change and provide examples of how to use **Jinja** templates.
3. Testing: Thoroughly test the changes to ensure compatibility and that all existing templates work correctly with the new format. | https://github.com/huggingface/chat-ui/issues/1525 | open | [
"enhancement"
] | 2024-10-16T05:26:12Z | 2024-11-20T00:44:16Z | 8 | calycekr |
huggingface/alignment-handbook | 201 | Full parameter fine-tuning keeps consuming system RAM and lead to crash | I am using the alignment handbook to perform full-parameter fine-tuning of Llama 3 models with DeepSpeed stage 2 on my own, relatively large dataset (400k+ records).
The training was performed on a Slurm cluster with two nodes (each with 4 H100 GPUs).
I have noticed that system memory utilization keeps increasing during training, even though I set torch_empty_cache_steps=500.
Is there something wrong with the HF Trainer? Any suggestions on how to fix or debug this?
There is also a similar issue at https://github.com/huggingface/transformers/issues/30119
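One way to localize the growth (a sketch on my part, not from the original report; it assumes `psutil` is installed) is to log the trainer process's resident memory every N steps with a `TrainerCallback`:

```python
# Hypothetical debugging aid: print the RSS of the training process
# periodically so the step range where host memory grows can be identified.
import psutil
from transformers import TrainerCallback

class RSSLoggerCallback(TrainerCallback):
    def __init__(self, every_n_steps: int = 50):
        self.every_n_steps = every_n_steps
        self.process = psutil.Process()

    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step % self.every_n_steps == 0:
            rss_gb = self.process.memory_info().rss / 1024**3
            print(f"step {state.global_step}: RSS = {rss_gb:.2f} GiB")

# usage: trainer.add_callback(RSSLoggerCallback())
```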
- Below is the system ram usage report from wandb:



- my config:
```yaml
# Model arguments
model_name_or_path: ~/models/Meta-Llama-3-8B
model_revision: main
torch_dtype: bfloat16
attn_implementation: flash_attention_2
# Data training arguments
chat_template: "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set system_message = '### System Instruction: ' + messages[0]['content'] | trim + '' %}{% set messages = messages[1:] %}{% else %}{% set system_message = '' %}{% endif %}{{ bos_token + system_message }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '### Context: ' + message['content'] | trim + '' }}{% elif message['role'] == 'assistant' %}{{ '### Result: ' + message['content'] | trim + ' ' + eos_token }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '### Result: ' }}{% endif %}"
dataset_mixer:
~/data/processed_data_open_sourced_xml_to_text/merged_open_sourced_xml_to_text_dataset: 1.0
dataset_splits:
- train_sft
- test_sft
preprocessing_num_workers: 4
dataloader_num_workers: 2
# SFT trainer config
bf16: true
do_eval: true
# evaluation_strategy: epoch
eval_strategy: epoch
max_grad_norm: 1.0
# gradient_accumulation_steps: 16
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: False
log_level: info
logging_steps: 5
logging_strategy: steps
learning_rate: 2.0e-05
lr_scheduler_type: cosine_with_min_lr # cosine_with_min_lr
lr_scheduler_kwargs:
min_lr: 5e-6
optim: adamw_torch # adamw_torch paged_adamw_32bit galore_adamw lion_32bit
optim_target_modules: all-linear
weight_decay: 0.01
max_seq_length: 12800
packing: false
dataset_num_proc: 16
max_steps: -1
num_train_epochs: 1
output_dir: /~/alignment-handbook/experiments/models/llama3
overwrite_output_dir: true
per_device_eval_batch_size: 1
per_device_train_batch_size: 1 # per device; the global batch size is per_device * gradient_accumulation_steps * gpus_per_node * num_nodes
gradient_accumulation_steps: 8
push_to_hub: false
remove_unused_columns: true
report_to:
- wandb # - tensorboard
save_strategy: "steps"
save_steps: 500
torch_empty_cache_steps: 500
save_total_limit: 30
seed: 42
warmup_ratio: 0.1
```
- training launch script (brief version)
```sh
#!/bin/bash
#SBATCH --job-name=train
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4
#SBATCH --gpus-per-task=4
#SBATCH --cpus-per-task=32
#SBATCH --mem=512gb
#SBATCH --time=96:00:00
#SBATCH --output=output
#SBATCH --partition=batch
# apptainer
CONTAINER=pt2402.sif
TRAIN_CONF=config.yaml
DEEPSPEED_CONF=deepspeed_zs2.json
CMD="torchrun \
    --nproc_per_node=$SLURM_GPUS_ON_NODE \
    --nnode=$SLURM_JOB_NUM_NODES \
    --node_rank=$SLURM_NODEID \
    --master_addr=$PRIMARY \
    --master_port=$PRIMARY_PORT \
    ${ROOT}/scripts/run_sft.py \
    $TRAIN_CONF \
    --deepspeed=$DEEPSPEED_CONF \
    --tee=3"

srun --jobid $SLURM_JOB_ID apptainer exec --nv $CONTAINER bash -c "$CMD"
```
- deepspeed config:
```json
{
"fp16": {
"enabled": false,
"loss_scale": 0,
"auto_cast": false,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"consecutive_hysteresis": false,
"min_loss_scale": 1
},
"bf16": {
"enabled": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"betas": "auto",
"eps": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": 1e-8,
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
} | https://github.com/huggingface/alignment-handbook/issues/201 | closed | [] | 2024-10-15T15:04:18Z | 2024-10-17T18:56:53Z | 2 | xiyang-aads-lilly |
huggingface/chat-ui | 1,522 | Add example prompt field to tools | ## Describe your feature request
This lets the user specify a prompt that would call the tool. It can be shown as a demo if you're not sure how to use a tool.
We should show it somewhere in the UI so the user can easily start a conversation from that demo.
It can also be used to validate that a tool works: run the example server-side, and if the tool does not get called or does not return an output, something is wrong and users should not be allowed to publish it.
## Implementation idea
Storing the prompt itself is straightforward since you can just store it as a string. Most tools use file inputs though so we should ideally also support that, which means storing example files in the DB. | https://github.com/huggingface/chat-ui/issues/1522 | open | [
"enhancement",
"front",
"back",
"tools"
] | 2024-10-15T12:42:42Z | 2024-10-15T12:42:43Z | 0 | nsarrazin |
huggingface/optimum | 2,060 | Support int8 tinyllama tflite export. | ### Feature request
A TFLite exporter for decoder-only LLMs such as TinyLlama.
### Motivation
Some platforms only support full-int8 ops, so only full-int8 TFLite models can be deployed there. Is there a plan to support this? Looking forward to your reply, thank you.
### Your contribution
no | https://github.com/huggingface/optimum/issues/2060 | closed | [
"feature-request",
"Stale"
] | 2024-10-15T03:25:54Z | 2024-12-09T02:11:36Z | 1 | hayyaw |
huggingface/diffusers | 9,673 | high cpu usage when loading multiple loras at once. | ### Describe the bug
Hi, I was building a synthesis system using Celery and diffusers,
and I found that CPU usage spikes when loading LoRAs.
It is fine with just one worker, but becomes a problem with 8 workers running at once.
It happens the first time a LoRA is loaded, and I suspect PEFT, because I had no such trouble before PEFT support was added.
Is there a way to lower CPU usage when loading LoRAs, or a way to avoid PEFT when loading SDXL LoRAs?
### Reproduction
```python
# test lora downloaded from https://civitai.com/models/150986/blueprintify-sd-xl-10
from diffusers import AutoPipelineForText2Image
import torch
from uuid import uuid4
from tqdm import tqdm
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
num_of_iterations = 10
for _ in tqdm(range(num_of_iterations)):
lora_name = str(uuid4().hex)
pipeline.load_lora_weights(
"./test",
weight_name="lora.safetensors",
adapter_name=lora_name,
low_cpu_mem_usage=True,
)
pipeline.set_adapters([lora_name], adapter_weights=[1.0])
```
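One mitigation that may be worth trying (an assumption on my part, not something verified against this exact workload): when many workers convert LoRA weights at once, each process may spin up a full set of BLAS/torch threads, so capping per-process thread counts can reduce CPU contention:

```python
# Hypothetical mitigation: limit CPU threads per worker process.
# The environment variable must be set before torch is imported.
import os
os.environ["OMP_NUM_THREADS"] = "1"

import torch
torch.set_num_threads(1)
```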
### Logs
_No response_
### System Info
torch==2.1.1+cu121
diffusers==0.30.3
accelerate==0.32.1
peft==0.13.0
transformers==4.42.3
python==3.9.5
### Who can help?
@sayakpaul | https://github.com/huggingface/diffusers/issues/9673 | closed | [
"bug"
] | 2024-10-15T01:49:37Z | 2024-10-15T05:07:40Z | 5 | gudwns1215 |
huggingface/datasets | 7,226 | Add R as a How to use from the Polars (R) Library as an option | ### Feature request
The boiler plate code to access a dataset via the hugging face file system is very useful. Please addd
## Add Polars (R) option
The equivailent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has hugging faces funcitonaliy as well.
```r
library(polars)
df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet")
```
## Polars (python) option

## Libraries Currently

### Motivation
There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars) and having this included could really help in bringing this community into the hugging faces ecosystem.
**This is a small/low-hanging-fruit front end change but would make a big impact expanding the community**
### Your contribution
I am not sure which repositroy this should be in, but I have experience in R, Python and JS and happy to submit a PR in the appropriate repository. | https://github.com/huggingface/datasets/issues/7226 | open | [
"enhancement"
] | 2024-10-14T19:56:07Z | 2024-10-14T19:57:13Z | null | ran-codes |
huggingface/lerobot | 472 | How to resume training with a higher offline steps than initial set up? | ### System Info
```Shell
- `lerobot` version: unknown
- Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.25.2
- Dataset version: 3.0.1
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.4.1 (True)
- Cuda version: 11080
- Using GPU in script?: <fill in>
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [X] My own task or dataset (give details below)
### Reproduction
1. python lerobot/scripts/train.py \
hydra.run.dir=outputs/train/pusht \
device=cuda \
env=pusht_act \
env.task=pusht-v0 \
dataset_repo_id=takuzennn/pusht_v0 \
policy=act_pusht \
training.eval_freq=2000 \
training.log_freq=250 \
training.offline_steps=300000 \
training.save_model=true \
training.save_freq=2000 \
eval.n_episodes=30 \
eval.batch_size=12 \
wandb.enable=true
2. python lerobot/scripts/train.py \
hydra.run.dir=outputs/train/pusht \
training.offline_steps=800000 \
resume=true
### Expected behavior
I expect it to stop at 800000 steps, but it still stops at 300000 steps. | https://github.com/huggingface/lerobot/issues/472 | closed | [] | 2024-10-13T19:28:04Z | 2024-10-22T05:51:42Z | null | Takuzenn |
huggingface/transformers.js | 973 | I would like to help | ### Question
Hi, I would like to help with the project. Is there anything that needs to be done?
Currently I found an issue, probably in ONNXRuntime. I will look into it next week.
Here is an example of WebGPU Whisper that works on mobile platforms, including iPhone and Android: https://github.com/FL33TW00D/whisper-turbo
The current Transformers.js solution has some bugs: it crashes after model loading, and the page restarts on mobile devices. I tried to connect remote debugging to desktop Chrome via an iOS remote-debugging bridge, but the page just restarts and I cannot get any logs. Any help on how to get logs would be appreciated, as I don't have much experience with iOS Safari debugging and I only have a Windows PC.
Here is a photo from Safari on iPhone; you can see it does not support float32, only float16. I suspect this is the issue, and there are about 3 separate pull requests in ONNX that fix something around float16 support. But I have not had time to merge all the current ONNX PRs and build it yet. First I would like to see a log with the actual error.

This is what I will be working on next weekend.
If there is something else I should look into or help with testing, let me know.
Thank you for the great project and great work! :-)
| https://github.com/huggingface/transformers.js/issues/973 | open | [
"question"
] | 2024-10-12T20:29:07Z | 2024-10-14T19:37:51Z | null | cyberluke |
huggingface/diffusers | 9,661 | from_pretrained: filename argument removed? | **What API design would you like to have changed or added to the library? Why?**
I do believe there was a `filename` argument in the past to load a specific checkpoint in a huggingface repository. It appears that this has been removed with no replacement.
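A current workaround (a sketch on my part, not from the issue): download the specific file with `huggingface_hub` and load it via `from_single_file`:

```python
# Hypothetical workaround: fetch one named checkpoint from the repo, then
# load it as a single-file checkpoint.
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionPipeline

ckpt_path = hf_hub_download(
    repo_id="SG161222/Realistic_Vision_V6.0_B1_noVAE",
    filename="Realistic_Vision_V6.0_NV_B1_fp16.safetensors",
)
pipe = StableDiffusionPipeline.from_single_file(ckpt_path)
```

This still amounts to fetching the file by hand, which is what the reporter wants to avoid, but it at least automates the download step.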
**What use case would this enable or better enable? Can you give us a code example?**
It's impossible to use any of the checkpoints here https://huggingface.co/SG161222/Realistic_Vision_V6.0_B1_noVAE/tree/main without manually downloading and using `from_single_file`. The checkpoint I want to load is called `Realistic_Vision_V6.0_NV_B1_fp16.safetensors`, but it seems that the procedure in `from_pretrained` tries to force and impose a specific name on the user. I understand the need for standards, but many have not respected the standards in the past and now these models cannot be used without additional work. | https://github.com/huggingface/diffusers/issues/9661 | closed | [
"stale"
] | 2024-10-12T20:02:31Z | 2024-11-13T00:37:52Z | 4 | oxysoft |
huggingface/transformers | 34,107 | How to specific customized force_token_ids in whisper | ```
ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'> with values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f4230cfac50> has been passed to `.generate()`, but it has already been created with the values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f422829c510>. <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f422829c510> has been created by passing the corresponding arguments to generate or by the model's config default values. If you just want to change the default values of logits processor consider passing them as arguments to `.generate()` instead of using a custom logits processor
```
This way doesn't work:
```
inputs = inputs.to(self.model.dtype)
with torch.no_grad():
    if forced_decoder_ids is not None:
        generated_ids = self.model.generate(
            inputs, forced_decoder_ids=forced_decoder_ids
        )
    else:
        generated_ids = self.model.generate(inputs)
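# Not part of the original report - a sketch of what the error message
# suggests: pass the language/task to generate() instead of building
# forced_decoder_ids (assumes a recent transformers version; "zh" is just
# a placeholder language code):
#
#     generated_ids = self.model.generate(inputs, language="zh", task="transcribe")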
``` | https://github.com/huggingface/transformers/issues/34107 | closed | [
"Generation",
"Audio"
] | 2024-10-12T07:34:38Z | 2024-12-28T08:06:48Z | null | MonolithFoundation |