| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 10,004 | how to use kohya sd-scripts flux loras with text encoder keys in diffusers? | The resulting LoRA weights from setting train text encoder to true are incompatible with diffusers' load_lora_weights. The script networks/convert_flux_lora.py does not convert the text encoder keys either. | https://github.com/huggingface/diffusers/issues/10004 | open | [
"contributions-welcome"
] | 2024-11-23T20:54:30Z | 2025-03-16T15:39:25Z | null | neuron-party |
huggingface/transformers.js | 1,050 | How to lengthen the Whisper max audio length? | ### Question
I'm working from the [webgpu-whisper](https://github.com/huggingface/transformers.js/tree/main/examples/webgpu-whisper) demo, and I'm having a hard time lengthening the maximum audio input allowed. I made the following changes:
```js
-const MAX_AUDIO_LENGTH = 30; // seconds
+const MAX_AUDIO_LENGTH = 12... | https://github.com/huggingface/transformers.js/issues/1050 | closed | [
"question"
] | 2024-11-22T17:50:50Z | 2024-11-26T03:59:03Z | null | stinoga |
huggingface/diffusers | 9,996 | Flux.1 cannot load standard transformer in nf4 | ### Describe the bug
loading different flux transformer models is fine except for nf4.
it works for 1% of fine-tunes provided on Huggingface, but it doesn't work for 99% of standard fine-tunes available on CivitAI.
example of such model: <https://civitai.com/models/118111?modelVersionId=1009051>
*note* i'm using `... | https://github.com/huggingface/diffusers/issues/9996 | open | [
"bug",
"wip"
] | 2024-11-22T16:55:11Z | 2024-12-28T19:56:54Z | 16 | vladmandic |
huggingface/diffusers | 9,990 | How to diagnose problems in training custom inpaint model | ### Discussed in https://github.com/huggingface/diffusers/discussions/9989
<div type='discussions-op-text'>
<sup>Originally posted by **Marquess98** November 22, 2024</sup>
What I want to do is to perform image inpainting when the input is a set of multimodal images, using sdxl as the pre trained model. But the... | https://github.com/huggingface/diffusers/issues/9990 | closed | [] | 2024-11-22T03:16:50Z | 2024-11-23T13:37:53Z | null | Marquess98 |
huggingface/Google-Cloud-Containers | 123 | Querying PaliGemma VLMs | My collaborators and I are trying to use your very useful containers to deploy and use Google's PaliGemma models on GCS/Vertex. I was wondering what is the best way to query the model with images, especially if the images are stored locally? I see that there is an [example showing this for Llama Vision](https://github.... | https://github.com/huggingface/Google-Cloud-Containers/issues/123 | closed | [
"question"
] | 2024-11-21T14:52:41Z | 2024-12-04T16:31:01Z | null | kanishkamisra |
huggingface/diffusers | 9,983 | Using StableDiffusionControlNetImg2ImgPipeline with enable_vae_tiling(), the tile size seems fixed at 512 x 512; where should I set the relevant parameters? | ```
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
``` | https://github.com/huggingface/diffusers/issues/9983 | closed | [] | 2024-11-21T09:21:24Z | 2024-12-02T08:32:52Z | null | reaper19991110 |
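In most diffusers releases the tile size is not an argument to `enable_vae_tiling()`; it comes from the VAE config, and (an assumption to verify against your diffusers version) can usually be overridden via attributes such as `pipe.vae.tile_sample_min_size` and `pipe.vae.tile_overlap_factor` before decoding. The grid arithmetic behind the apparently fixed 512 x 512 patches can be sketched in plain Python:

```python
import math

# Sketch of the bookkeeping behind tiled VAE decoding: the tile size and
# overlap (512 px and 0.25 here, mirroring common diffusers defaults)
# determine how many tiles cover an image. The attribute names mentioned
# above are assumptions to check against your installed version.
def num_tiles(image_size: int, tile: int = 512, overlap: float = 0.25) -> int:
    stride = int(tile * (1 - overlap))  # tiles advance by tile * (1 - overlap)
    if image_size <= tile:
        return 1
    return math.ceil((image_size - tile) / stride) + 1

print(num_tiles(512))    # 1: the image fits in a single tile
print(num_tiles(1024))   # 3: multiple tiles, seams blended over the overlap
```

Lowering the tile size reduces peak memory per decode at the cost of more tiles and more seam blending.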
huggingface/datatrove | 305 | How to read text files | Hey all is there any text reader in the repo?
I have text files where each line is a document/data sample.
Are there any readers which can read these kind of files directly? | https://github.com/huggingface/datatrove/issues/305 | open | [] | 2024-11-21T06:55:21Z | 2025-05-16T10:51:33Z | null | srinjoym-cerebras |
huggingface/diffusers | 9,979 | flux img2img controlnet channels error | ### Describe the bug
When I use flux's img2img controlnet for inference, a channel error occurs.
### Reproduction
```python
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers.utils import load_image
from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetPipeline
fr... | https://github.com/huggingface/diffusers/issues/9979 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-11-21T03:39:12Z | 2025-04-23T20:43:51Z | 10 | wen020 |
huggingface/diffusers | 9,976 | ControlNet broken from_single_file | ### Describe the bug
controlnet loader from_single_file was originally added via #4084
and method `ControlNet.from_single_file()` works for non-converted controlnets.
but for controlnets in safetensors format that contain already converted state_dict, it errors out.
its not reasonable to expect from user to k... | https://github.com/huggingface/diffusers/issues/9976 | closed | [
"bug"
] | 2024-11-20T13:46:14Z | 2024-11-22T12:22:53Z | 7 | vladmandic |
huggingface/lerobot | 515 | ACT is working, but not Diffusion | Hello Team,
your work is so good, I am currently working on creating some nice policies with the Lerobot repo, architecture and software. I tried ACT on my robot, it is working fine, able to execute the tasks it learnt in the evaluation.
I tried training Diffusion policy, multiple times with different params and ... | https://github.com/huggingface/lerobot/issues/515 | closed | [
"question",
"policies",
"stale"
] | 2024-11-19T18:58:28Z | 2025-11-30T02:37:09Z | null | Kacchan16 |
huggingface/transformers.js | 1,042 | how can i pass embeddings or context to a text2text-generation model | ### Question
I downloaded the model to local. I found that there doesn't seem to be an API that allows me to pass embeddings. How can I make this model understand the context?
Then I tried to pass the context content to this model, but the model didn't seem to accept it and output the following words.
The code i... | https://github.com/huggingface/transformers.js/issues/1042 | closed | [
"question"
] | 2024-11-19T18:32:45Z | 2024-11-20T05:34:45Z | null | electroluxcode |
huggingface/transformers.js | 1,041 | Full preload example | ### Question
Hello!
I'm looking for a full "preload model" nodejs example.
Say I do this:
```ts
import { env } from '@huggingface/transformers';
env.allowRemoteModels = false;
env.localModelPath = '/path/to/local/models/';
```
how do I "get" the model to that path? I want to download it when building... | https://github.com/huggingface/transformers.js/issues/1041 | closed | [
"question"
] | 2024-11-19T12:34:04Z | 2024-11-26T12:44:55Z | null | benjick |
huggingface/transformers.js | 1,038 | script.convert tfjs model to onnx support | ### Question
I'm using tfjs-node to create an image-classifier model;
but I'm stuck with how to convert model.json to a format that can be used by optimum or script.convert to convert it to a onnx file.
I'm able to convert to a graph model using
```
tensorflowjs_converter --input_format=tfjs_layers_model \ --... | https://github.com/huggingface/transformers.js/issues/1038 | open | [
"question"
] | 2024-11-18T15:42:46Z | 2024-11-19T10:08:28Z | null | JohnRSim |
huggingface/chat-ui | 1,573 | Include chat-ui in an existing React application | Hello,
Is it possible to integrate / embed chat-ui in an existing application, like a React component?
For example, to add a chat module to an existing website with the UI of chat-ui.
As is the case with Chainlit : https://docs-prerelease.chainlit.io/customisation/react-frontend | https://github.com/huggingface/chat-ui/issues/1573 | open | [
"enhancement"
] | 2024-11-18T14:11:58Z | 2024-11-18T14:15:17Z | 0 | martin-prillard |
huggingface/optimum | 2,097 | TFJS support model.json to ONNX conversion | ### Feature request
Currently using node to create an image-classifier model.json with tfjs
- I don't think Optimum support this format to convert to onnx?
It would be nice to just use optimum and point to model.json.
### Motivation
Currently I'm creating the model converting it to graph and then converting t... | https://github.com/huggingface/optimum/issues/2097 | open | [
"exporters",
"tflite"
] | 2024-11-18T12:55:05Z | 2024-11-19T10:22:35Z | 0 | JohnRSim |
huggingface/optimum-benchmark | 294 | How to Use a Local Model When Calling the Python API | 
| https://github.com/huggingface/optimum-benchmark/issues/294 | closed | [] | 2024-11-18T06:36:24Z | 2024-12-09T12:23:30Z | null | WCSY-YG |
huggingface/lerobot | 511 | Minimum Requirements - Running Policies in production/ Training Policies | I was wondering what types of hardware can policies trained using lerobot can run on. Lets say I wanted to run policies in production on say a raspberry pi. Is it possible to run training on beefier hardware and then deploy policies to lower-end hardware to run? Is it better to record with various cameras or just use t... | https://github.com/huggingface/lerobot/issues/511 | closed | [
"question"
] | 2024-11-17T17:34:50Z | 2025-04-07T16:23:41Z | null | rkeshwani |
huggingface/transformers.js | 1,035 | How can I implement partial output in the react demo? | ### Question
Hello! I am reading the Transformers.js documentation for "[Building a react application](https://huggingface.co/docs/transformers.js/tutorials/react)", but I encountered an issue at [step 4](https://huggingface.co/docs/transformers.js/tutorials/react#step-4-connecting-everything-together).
I don't kn... | https://github.com/huggingface/transformers.js/issues/1035 | open | [
"question"
] | 2024-11-17T11:29:22Z | 2024-12-02T23:00:13Z | null | DikkooXie |
huggingface/lerobot | 510 | Do we have to compulsory use trossen robotics robots for this repo? | Or any robot will work fine?
Also one more question.
Do we have to use depth camera or simple camera will work fine? | https://github.com/huggingface/lerobot/issues/510 | closed | [
"question",
"robots"
] | 2024-11-17T11:14:52Z | 2025-04-07T16:27:40Z | null | hemangjoshi37a |
huggingface/diffusers | 9,942 | Unable to install pip install diffusers>=0.32.0dev | ### Describe the bug
I am installing the following version
pip install diffusers>=0.32.0dev
However it does nothing
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip install diffusers>=0.32.0dev
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>
```
I even uninstalled the previous version
```... | https://github.com/huggingface/diffusers/issues/9942 | closed | [
"bug"
] | 2024-11-17T10:26:19Z | 2024-11-17T12:27:23Z | 0 | nitinmukesh |
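The command above silently "does nothing" for a shell reason, not a pip reason: the unquoted `>` is output redirection, so pip only ever receives `diffusers` while stdout goes to a file literally named `=0.32.0dev`. A minimal sketch, using `echo` as a stand-in for pip:

```python
import os
import subprocess
import tempfile

# Demonstrate why `pip install diffusers>=0.32.0dev` appears to do nothing:
# the shell parses the unquoted '>' as output redirection, creating a file
# named '=0.32.0dev' and stripping the version specifier from the command.
workdir = tempfile.mkdtemp()
subprocess.run("echo pip install diffusers>=0.32.0dev",
               shell=True, cwd=workdir, check=True)
print(os.listdir(workdir))  # ['=0.32.0dev'] -- the redirection target

# The fix is to quote the requirement so the shell passes it through intact:
#   pip install "diffusers>=0.32.0dev"
```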
huggingface/candle | 2,622 | How to compute `Atan2` for tensors? | I am trying to implement DeepPhase in candle but I am struggling figuring out how to calculate the phase angles from two tensors using `atan2` operation. | https://github.com/huggingface/candle/issues/2622 | open | [] | 2024-11-16T16:45:36Z | 2024-11-17T14:21:50Z | null | cryscan |
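Candle had no built-in `atan2` at the time of this issue; the standard workaround is to build it from `atan` plus quadrant corrections, each of which maps onto elementwise tensor ops (division, comparison masks, sign selection). A scalar Python sketch of that reduction:

```python
import math

# Scalar sketch of atan2 built from atan plus quadrant corrections; every
# branch below corresponds to an elementwise tensor operation (division,
# comparison mask, copysign), which is how atan2 can be emulated in a
# library that only provides atan.
def atan2_from_atan(y: float, x: float) -> float:
    if x > 0:
        return math.atan(y / x)
    if x < 0:
        return math.atan(y / x) + math.copysign(math.pi, y)
    # x == 0: the angle is +/- pi/2 (or 0 when y is also 0)
    return math.copysign(math.pi / 2, y) if y != 0 else 0.0

# Sanity-check against the stdlib reference implementation:
for y, x in [(1.0, 1.0), (1.0, -1.0), (-1.0, -1.0), (-2.0, 3.0), (1.0, 0.0)]:
    assert math.isclose(atan2_from_atan(y, x), math.atan2(y, x))
```

On tensors, the branches become masks (for example `x.gt(0)`) combined with `where`-style selection rather than Python `if` statements.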
huggingface/transformers.js | 1,032 | How to identify which models will work with transformers.js? | ### Question
I've tried multiple models from MTEB dashboard (e.g. `jinaai/jina-embeddings-v3`, `jinaai/jina-embeddings-v2`, `dunzhang/stella_en_400M_v5`), but none of them work.
It's not clear which models will work?
```ts
const generateGteSmallEmbedding = await pipeline(
'feature-extraction',
'dunzhang/s... | https://github.com/huggingface/transformers.js/issues/1032 | open | [
"question"
] | 2024-11-15T22:13:00Z | 2024-12-22T02:41:43Z | null | punkpeye |
huggingface/datasets | 7,291 | Why return_tensors='pt' doesn't work? | ### Describe the bug
I tried to add input_ids to dataset with map(), and I used the return_tensors='pt', but why I got the callback with the type of List?

### Steps to reproduce the bug
... |
huggingface/safetensors | 541 | … | ...
,
        daemon=True
    )
... | https://github.com/huggingface/safetensors/issues/541 | open | [] | 2024-11-15T00:37:55Z | 2025-02-26T09:51:23Z | 4 | vedantroy |
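The `return_tensors='pt'` behaviour in `Dataset.map()` has a simple cause: map() persists its results through Arrow-backed columnar storage, and anything written there is read back as plain nested lists, so the tensor type set inside the mapped function does not survive. A minimal stand-in for that round-trip, with `json` playing the role of the storage layer:

```python
import json

# map() effectively round-trips every returned row through a columnar
# store (Arrow); this json round-trip mimics that write-then-read cycle.
row = {"input_ids": [101, 2023, 102]}        # tokenizer output, tensor-like
stored = json.loads(json.dumps(row))         # what map() effectively persists
assert isinstance(stored["input_ids"], list) # tensor type did not survive

# With the datasets library, the usual fix (verify against your version)
# is to convert on read instead of on write:
#   dataset = dataset.map(tokenize)
#   dataset.set_format("torch", columns=["input_ids"])
```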
huggingface/peft | 2,216 | How to specify the coefficients of loading lora during inference? | | https://github.com/huggingface/peft/issues/2216 | closed | [] | 2024-11-14T11:47:00Z | 2024-11-18T11:30:03Z | null | laolongboy |
huggingface/chat-ui | 1,565 | Is there any place that uses this environment variable? | https://github.com/huggingface/chat-ui/blob/ab349d0634ec4cf68a781fd7afc5e7fdd6bb362f/.env#L59-L65
It seems like it can be deleted. | https://github.com/huggingface/chat-ui/issues/1565 | closed | [] | 2024-11-14T11:12:49Z | 2024-11-14T11:17:04Z | 2 | calycekr |
huggingface/diffusers | 9,927 | HeaderTooLarge when train controlnet with sdv3 | ### Describe the bug
Hello, I tried diffuser to train controlnet with sdv3 but it didn't start training and send `safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge` feedback. I don't know how to handle it.
### Reproduction
Follow the README_v3 guide.
### Logs
```shell
(diffusers) [... | https://github.com/huggingface/diffusers/issues/9927 | closed | [
"bug"
] | 2024-11-14T07:28:03Z | 2024-11-21T13:02:05Z | 3 | Viola-Siemens |
huggingface/datasets | 7,290 | `Dataset.save_to_disk` hangs when using num_proc > 1 | ### Describe the bug
Hi, I'm encountered a small issue when saving datasets that led to the saving taking up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than... | https://github.com/huggingface/datasets/issues/7290 | open | [] | 2024-11-14T05:25:13Z | 2025-11-24T09:43:03Z | 4 | JohannesAck |
huggingface/trl | 2,356 | How to train from scratch? Can you provide the code | ### System Info
train from scratch
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
train from scratch
### Expected behavior
train from scr... | https://github.com/huggingface/trl/issues/2356 | closed | [
"❓ question"
] | 2024-11-14T02:39:41Z | 2024-12-13T23:00:20Z | null | sankexin |
huggingface/sentence-transformers | 3,054 | 'scale' hyperparameter in MultipleNegativesRankingLoss | I am looking through the MultipleNegativesRankingLoss.py code and I have question about the 'scale' hyperparameter. Also known as the 'temperature', the scale is used to stretch or compress the range of output values from the similarity function. A larger scale creates greater distinction between positive and negative ... | https://github.com/huggingface/sentence-transformers/issues/3054 | closed | [
"question"
] | 2024-11-14T00:11:23Z | 2025-01-16T13:54:45Z | null | gnatesan |
huggingface/diffusers | 9,924 | Can we get more schedulers for flow based models such as SD3, SD3.5, and flux | It seems advanced schedulers such as DDIM and DPM++ 2M do not work with flow-based models such as SD3, SD3.5, and Flux.
However, I only see 2 flow-based schedulers in the diffusers codebase:
FlowMatchEulerDiscreteScheduler, and
FlowMatchHeunDiscreteScheduler.
I tried to use DPMSolverMultistepScheduler, but it do... | https://github.com/huggingface/diffusers/issues/9924 | open | [
"wip",
"scheduler"
] | 2024-11-14T00:07:56Z | 2025-01-14T18:31:12Z | 40 | linjiapro |
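Part of the answer to why DDIM/DPM++ are not drop-in replacements: flow-matching checkpoints predict a velocity and are sampled by integrating an ODE over a sigma schedule, whereas DDIM/DPM++ assume an epsilon/x0 parameterization. A toy scalar sketch of the Euler update used by schedulers like FlowMatchEulerDiscreteScheduler (illustrative values only, not the diffusers implementation):

```python
# Flow-matching Euler update: the model predicts a velocity v, and the
# sample moves by v * (sigma_next - sigma) along the probability-flow ODE.
def flow_match_euler_step(x: float, v: float, sigma: float, sigma_next: float) -> float:
    return x + v * (sigma_next - sigma)

# With the linear interpolation x_sigma = (1 - sigma) * x0 + sigma * x1,
# the true velocity dx/dsigma is the constant x1 - x0, so Euler steps from
# sigma = 1 (noise) down to 0 recover the data endpoint exactly:
x0, x1 = 2.0, -1.0            # data and noise endpoints (toy scalars)
x, v = x1, x1 - x0            # start at noise; constant true velocity
sigmas = [1.0, 0.75, 0.5, 0.25, 0.0]
for s, s_next in zip(sigmas, sigmas[1:]):
    x = flow_match_euler_step(x, v, s, s_next)
print(x)  # 2.0: recovered the data endpoint
```

Porting a DDIM-style scheduler therefore requires reworking its update rule around this velocity parameterization, not just swapping the class.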
huggingface/pytorch-image-models | 2,332 | [BUG] How to customize the number of classification heads | **Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
from timm.models import create_model
checkpoint_path = "/nas_mm_2/yinxiaofei.yxf/open_source_model/InternViT-300M-448px/tmp/timm__vit_intern300m_patch14_448.ogvl_dist/model.safetensors"
model = create_model('vit_intern300m_patch14_448',chec... | https://github.com/huggingface/pytorch-image-models/issues/2332 | closed | [
"bug"
] | 2024-11-12T08:08:50Z | 2024-11-12T15:28:42Z | null | JarvisFei |
huggingface/unity-api | 30 | [QUESTION] | I have a simple game built in unity and I'm using this Hugging face API client for voice parsing. I'm trying to understand when I build and run the game, and want to distribute it to many users, how do I keep the same api key every time so that users can install and run voice control it without any issue? | https://github.com/huggingface/unity-api/issues/30 | closed | [
"question"
] | 2024-11-12T02:35:52Z | 2024-11-20T01:46:16Z | null | harshal-14 |
huggingface/swift-transformers | 140 | How to use customized tokenizer? | Hello. I am writing this post because I have a question about loading the tokenizer model. I am trying to use a pre-trained tokenizer in a Swift environment. After training, how do I apply the byproduct .model and .vocab files so that I can use the tokenizer I trained in Swift while using the swift-transformer API? I w... | https://github.com/huggingface/swift-transformers/issues/140 | open | [
"tokenization"
] | 2024-11-11T09:36:14Z | 2025-09-10T13:19:10Z | null | cch1219 |
huggingface/diffusers | 9,900 | Potential bug in repaint? | https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322
According to line5 of algorithm 1 in the paper, the second part in line 322 should remove the `**0.5`?
thanks! | https://github.com/huggingface/diffusers/issues/9900 | closed | [] | 2024-11-10T10:41:26Z | 2024-12-16T19:38:22Z | 3 | jingweiz |
huggingface/finetrainers | 82 | [question] what is the difference between cofgvideo scheduler and normal diffuers scheduler | ### Feature request / 功能建议
CogVideoXDPMScheduler VS DPMSCheduler
CogVideoXDDIMScheduler VS DDIM Scheduler
Hi Aryan, is there any sampling difference between these two sampler?
@a-r-r-o-w
### Motivation / 动机
/
### Your contribution / 您的贡献
/ | https://github.com/huggingface/finetrainers/issues/82 | closed | [] | 2024-11-09T17:15:57Z | 2024-12-19T14:43:23Z | null | foreverpiano |
huggingface/optimum | 2,092 | Add support for RemBERT in the ONNX export | ### Feature request
Add RemBERT to supported architectures for ONNX export.
### Motivation
The support for [RemBert](https://huggingface.co/docs/transformers/model_doc/rembert) was previously available in Transformers see [here](https://github.com/huggingface/transformers/issues/16308). However, now it seems that R... | https://github.com/huggingface/optimum/issues/2092 | closed | [
"onnx"
] | 2024-11-08T15:12:34Z | 2024-12-02T13:54:10Z | 1 | mlynatom |
huggingface/lerobot | 502 | Low accuracy for diffusion policy+aloha env+sim_transfer_cude_human dataset | I'm trying to use diffusion model and aloha env to train on sim_transfer_cude_human dataset. But after 60000 training step, the evaluation accuracy is only 2%-6%. Idont know why? If I load pre-trained act policy, the accuracy can reach 80%. | https://github.com/huggingface/lerobot/issues/502 | open | [
"question",
"simulation"
] | 2024-11-08T02:20:14Z | 2025-11-29T02:48:27Z | null | Kimho666 |
huggingface/local-gemma | 41 | How to load from file? | How to load model from file, eg. .h5 file, instead of downloading the model?
Especially the model saved by keras_nlp. | https://github.com/huggingface/local-gemma/issues/41 | open | [] | 2024-11-07T03:01:25Z | 2024-11-07T03:03:31Z | null | datdq-abivin |
huggingface/diffusers | 9,876 | Why isn’t VRAM being released after training LoRA? | ### Describe the bug
When I use train_dreambooth_lora_sdxl.py, the VRAM is not released after training. How can I fix this?
### Reproduction
Not used.
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17
- Running on G... | https://github.com/huggingface/diffusers/issues/9876 | open | [
"bug",
"stale"
] | 2024-11-06T11:58:59Z | 2024-12-13T15:03:25Z | 14 | hjw-0909 |
huggingface/diffusers | 9,866 | Flux controlnet can't be trained, does this script really work? | ### Describe the bug
run with one num processes, the code broke down and returns:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by ... | https://github.com/huggingface/diffusers/issues/9866 | closed | [
"bug",
"stale"
] | 2024-11-05T08:51:57Z | 2024-12-05T15:19:12Z | 4 | liuyu19970607 |
huggingface/optimum-quanto | 346 | How to support activation 4bit quantization? | As mentioned in title. | https://github.com/huggingface/optimum-quanto/issues/346 | closed | [
"Stale"
] | 2024-11-04T09:59:21Z | 2024-12-10T02:10:31Z | null | Ther-nullptr |
huggingface/transformers | 34,591 | How to retrain the GLIP model on the Object365 dataset | Since I made some modifications to the GLIP model, I need to perform some pre-training again to improve performance. I replaced `_base_ = [../_base_/datasets/coco_detection.py]` with `_base_ = [../_base_/datasets/objects365v1_detection.py]` in `glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py` to train on Obje... | https://github.com/huggingface/transformers/issues/34591 | closed | [] | 2024-11-04T03:54:17Z | 2024-11-04T06:46:17Z | null | Polarisamoon |
huggingface/diffusers | 9,847 | Merge Lora weights into base model | I have finetuned the stable diffusion model and would like to merge the lora weights into the model itself. Currently I think in PEFT this is supported using `merge_and_unload` function but I seem to not find this option in diffusers. So is there any way to get a base model but with finetuned weights and If i am not wr... | https://github.com/huggingface/diffusers/issues/9847 | closed | [] | 2024-11-02T18:00:28Z | 2024-11-03T03:03:45Z | 1 | yaswanth19 |
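For reference, the merge this issue asks about is just `W' = W + scale * (B @ A)` applied per adapted layer; in recent diffusers versions `pipe.fuse_lora()` performs this fold-in (verify against your release). The arithmetic, shown with tiny plain-Python matrices so no framework is needed:

```python
# LoRA merge arithmetic: fold the low-rank update into the base weight.
# The shapes and values below are illustrative toys, not a real layer.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

W = [[1.0, 0.0], [0.0, 1.0]]   # base weight (2x2)
B = [[1.0], [2.0]]             # LoRA "up" matrix (2x1)
A = [[0.5, 0.5]]               # LoRA "down" matrix (1x2)
scale = 2.0

delta = matmul(B, A)           # rank-1 update B @ A
W_merged = [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]
print(W_merged)  # [[2.0, 1.0], [2.0, 3.0]]
```

After the fold-in the adapter matrices can be discarded, which is exactly what PEFT's `merge_and_unload` does on the transformers side.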
huggingface/chat-ui | 1,550 | Add full-text search in chat history | ## Describe your feature request
Allow users to search for specific keywords or phrases within the chat history, making it easier to find and recall previous conversations.
## Screenshots (if relevant)
An example of the search bar placement could be found in #1079
## Implementation idea
One possible impl... | https://github.com/huggingface/chat-ui/issues/1550 | closed | [
"enhancement"
] | 2024-11-01T19:27:41Z | 2025-05-28T15:03:19Z | 5 | kadykov |
huggingface/diffusers | 9,837 | [Feature] Is it possible to customize latents.shape / prepare_latent for context parallel case? | **Is your feature request related to a problem? Please describe.**
One may need to extend the code to context parallel case and the latent sequence length needs to get divided.
Instead of copying all the code of pipeline.py, the minimum modification is just adding few lines about dividing the latent shape and all_gat... | https://github.com/huggingface/diffusers/issues/9837 | closed | [
"stale"
] | 2024-11-01T14:32:05Z | 2024-12-01T15:07:36Z | 3 | foreverpiano |
huggingface/diffusers | 9,836 | [Feature] Can we record layer_id for DiT model? | **Is your feature request related to a problem? Please describe.**
Some layerwise algorithm may be based on layer-id.
just need some simple modification for transformer2Dmodel and its inner module like attention part, batch_norm part. just pass the layer_id as an extra parameter.
| https://github.com/huggingface/diffusers/issues/9836 | closed | [
"stale"
] | 2024-11-01T14:26:31Z | 2025-01-27T01:31:21Z | 9 | foreverpiano |
huggingface/diffusers | 9,835 | unused parameters lead to error when training contrlnet_sd3 | ### Discussed in https://github.com/huggingface/diffusers/discussions/9834
<div type='discussions-op-text'>
<sup>Originally posted by **Zheng-Fang-CH** November 1, 2024</sup>

Is there someone mee... | https://github.com/huggingface/diffusers/issues/9835 | closed | [] | 2024-11-01T13:57:03Z | 2024-11-17T07:33:25Z | 6 | Daryu-Fan |
huggingface/diffusers | 9,833 | SD3.5-large. Why is it OK when calling with a single thread, but not with multiple threads? | ### Describe the bug
First, I created a SD3.5-large service:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
import uuid
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, DDIMScheduler, DDPMParallelScheduler
from diffusers import StableDiffusion3Pipeline
import torch
from transf... | https://github.com/huggingface/diffusers/issues/9833 | closed | [
"bug"
] | 2024-11-01T08:00:04Z | 2024-11-02T02:14:50Z | 1 | EvanSong77 |
huggingface/diffusers | 9,825 | Support IPAdapters for FLUX pipelines | ### Model/Pipeline/Scheduler description
IPAdapter for FLUX is available now, do you have any plans to add IPAdapter to FLUX pipelines?
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links f... | https://github.com/huggingface/diffusers/issues/9825 | closed | [
"help wanted",
"wip",
"contributions-welcome",
"IPAdapter"
] | 2024-10-31T23:07:32Z | 2024-12-21T17:49:59Z | 10 | chenxiao111222 |
huggingface/diffusers | 9,822 | Loading SDXL loras into Flux | ### Describe the bug
Currently it's possible to load SDXL loras without warning into Flux.
### Reproduction
Is it possible for you to implement a raise a warning (and an error when a boolean is active) when the list of layers here is zero:
https://github.com/huggingface/diffusers/blob/41e4779d988ead99e7acd78dc8e7... | https://github.com/huggingface/diffusers/issues/9822 | closed | [
"bug"
] | 2024-10-31T18:01:29Z | 2024-12-10T14:37:32Z | 8 | christopher5106 |
huggingface/datasets | 7,268 | load_from_disk | ### Describe the bug
I have data saved with save_to_disk. The data is big (700Gb). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that?
### Steps to reproduce the bug
when trying ... | https://github.com/huggingface/datasets/issues/7268 | open | [] | 2024-10-31T11:51:56Z | 2025-07-01T08:42:17Z | 3 | ghaith-mq |
huggingface/peft | 2,188 | How to change 'modules_to_save' setting when reloading a lora finetuned model | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.19
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)... | https://github.com/huggingface/peft/issues/2188 | closed | [] | 2024-10-30T12:26:37Z | 2024-12-08T15:03:37Z | null | dengchengxifrank |
huggingface/huggingface.js | 996 | @huggingface/hub: how to use `modelInfo` with proper typing | THe `modelInfo` method is allowing the caller to define which field will be provided, it has been added in https://github.com/huggingface/huggingface.js/pull/946
https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L9-L11
Here is an example... | https://github.com/huggingface/huggingface.js/issues/996 | closed | [] | 2024-10-30T10:41:36Z | 2024-10-30T12:02:47Z | null | axel7083 |
huggingface/diffusers | 9,802 | Multidiffusion (panorama pipeline) is missing segmentation inputs? | I'm looking at the multidiffusion panorama pipeline page (https://huggingface.co/docs/diffusers/en/api/pipelines/panorama). It looks like there is no way to specify the segmentation and associated prompts as in the original paper https://multidiffusion.github.io/ . If the code only has the panorama capability and not t... | https://github.com/huggingface/diffusers/issues/9802 | open | [
"stale"
] | 2024-10-29T20:15:15Z | 2024-12-24T15:03:30Z | 5 | jloveric |
huggingface/transformers.js | 1,000 | Error while converting LLama-3.1:8b to ONNX | ### Question
Hey @xenova,
Thanks a lot for this library! I tried converting [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) to ONNX using the following command (on `main`):
```bash
python -m scripts.convert --quantize --model_id "meta-llama/Llama-3.1-8B-Instruct"
`... | https://github.com/huggingface/transformers.js/issues/1000 | open | [
"question"
] | 2024-10-29T09:40:14Z | 2024-10-29T09:40:14Z | null | charlesbvll |
huggingface/chat-ui | 1,545 | Support markdown & code blocks in text input | ## Describe your feature request
Would be nice to support code block in the text input bar, that would make it easier to input code. we could also support basic markdown features like bold or italic, maybe not headings tho.
## Screenshots (if relevant)
Try https://claude.ai/new to see an example of how this co... | https://github.com/huggingface/chat-ui/issues/1545 | open | [
"enhancement",
"front"
] | 2024-10-28T08:42:58Z | 2024-11-11T20:26:32Z | 2 | nsarrazin |
huggingface/peft | 2,181 | How can I do to export mode format as gguf | ### Feature request
This is a good project,I just got it today and encountered some problems.
my any code
``` python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Qwen2-0.5B")
model = AutoModelForCausalL... | https://github.com/huggingface/peft/issues/2181 | closed | [] | 2024-10-26T13:51:45Z | 2024-10-26T13:59:18Z | null | xu756 |
huggingface/diffusers | 9,772 | Support ControlNetPlus Union if not already supported | It's not clear if ControlNetPlus is already supported by diffusers https://github.com/xinsir6/ControlNetPlus/tree/main/pipeline which consists of union controlnet for SDXL. This model seems to support the only SDXL segmentation that I'm aware of. If not already supported, it should be!
https://github.com/xinsir6/Con... | https://github.com/huggingface/diffusers/issues/9772 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-10-25T17:43:43Z | 2024-12-11T17:07:54Z | 5 | jloveric |
huggingface/transformers.js | 994 | Will these mistakes have an impact? | ### Question
After AutoProcessor.from_pretrained is loaded, an error occurred, and the error message is as follows:
````typescript
ort-wasm-simd-thread…jsep.wasm:0x10367e0 2024-10-25 20:11:31.705399 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred e... | https://github.com/huggingface/transformers.js/issues/994 | open | [
"question"
] | 2024-10-25T12:17:03Z | 2024-11-12T11:10:11Z | null | aidscooler |
huggingface/transformers.js | 993 | How do I know the loading progress when loading .onnx file? | ### Question
Because the .onnx file is large(about 170M),I decided to provide a loading progress. Code as below:
```` typescript
const modelSettings = {
// Do not require config.json to be present in the repository
config: { model_type: "custom" },
subfolder: "",
proces... | https://github.com/huggingface/transformers.js/issues/993 | open | [
"question"
] | 2024-10-25T05:52:12Z | 2024-10-25T17:54:30Z | null | aidscooler |
huggingface/finetrainers | 70 | How to set the resolutions when finetuning I2V model? | I want to train a video diffusion model with lower resolutions. I set height_buckets=256 and width_buckets=256 in prepare_dataset.sh and processed the data. But I ran into the following error while running the train_image_to_video_lora.sh script.
ValueError: It is currently not possible to generate videos at a different res... | https://github.com/huggingface/finetrainers/issues/70 | closed | [] | 2024-10-25T05:36:19Z | 2024-11-11T18:27:29Z | null | TousakaNagio |
huggingface/optimum | 2,080 | "ValueError: Trying to export a codesage model" while trying to export codesage/codesage-large | ### System Info
```shell
optimum 1.23.2
MacOS 14.7
Python 3.9
```
### Who can help?
@michaelbenayoun
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (g... | https://github.com/huggingface/optimum/issues/2080 | open | [
"bug"
] | 2024-10-25T05:27:22Z | 2024-10-25T05:27:22Z | 0 | TurboEncabulator9000 |
huggingface/chat-ui | 1,543 | RFC enable multimodal and tool usage at once for OAI endpoints ? | https://github.com/huggingface/chat-ui/blob/8ed1691ecff94e07d10dfb2874d3936d293f4842/src/lib/server/endpoints/openai/endpointOai.ts#L191C53-L191C65
Just played around with combining both of these
What do you think about making tool calling available only if no image is in the conversation?
Otherwise we need to insert models twi... | https://github.com/huggingface/chat-ui/issues/1543 | open | [] | 2024-10-24T17:37:50Z | 2024-10-24T17:39:14Z | 0 | flozi00 |
huggingface/transformers.js | 991 | Loading models from "non-URL" locations in the browser | ### Question
Hi! I have an application where the model files will be pre-loaded in a custom format into the browser's IndexedDB. Based on my understanding, transformers.js currently only supports loading models by URL and then caches them in the browser cache. Getting the model files from the IndexedDB instead seems a li... | https://github.com/huggingface/transformers.js/issues/991 | open | [
"question"
] | 2024-10-24T12:18:19Z | 2024-12-04T19:30:07Z | null | AKuederle |
huggingface/finetrainers | 68 | How to set the hyperparameters when finetuning I2V model with LoRA? | File "/home/shinji106/ntu/cogvideox-factory/training/dataset.py", line 411, in __iter__
self.buckets[(f, h, w)].append(data)
KeyError: (16, 320, 720)
The resolution is (13, 320, ... | https://github.com/huggingface/finetrainers/issues/68 | closed | [] | 2024-10-24T08:06:33Z | 2025-01-10T23:40:06Z | null | TousakaNagio |
huggingface/datasets | 7,249 | How to debug | ### Describe the bug
I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder (which contains the _info, _split_generators and _generate_examples methods) classes. Testing with simple data was able to output the results of the ... | https://github.com/huggingface/datasets/issues/7249 | open | [] | 2024-10-24T01:03:51Z | 2024-10-24T01:03:51Z | null | ShDdu |
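The builder methods named in the issue above (`_info`, `_split_generators`, `_generate_examples`) are ordinary Python; in particular, `_generate_examples` is a plain generator, so one way to debug it is to run a standalone version on a small hand-made sample before wiring it into the builder class. A minimal sketch — the function name and sample data here are illustrative, not taken from the issue:

```python
# Sketch: a builder's _generate_examples yields (key, example) pairs, so a
# standalone copy can be debugged directly without loading any real data.
def generate_examples(rows):
    """Yield (key, example) pairs the way a builder's _generate_examples would."""
    for idx, line in enumerate(rows):
        yield idx, {"text": line.strip()}

# Inspect the first few examples by hand before moving the logic into the builder:
sample = ["hello world\n", "second line\n"]
for key, example in generate_examples(sample):
    print(key, example)  # 0 {'text': 'hello world'} / 1 {'text': 'second line'}
```

Once the generator behaves as expected on hand-made rows, the same body can be moved back into the builder class unchanged.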
huggingface/sentence-transformers | 3,015 | How to customize the dataloader? e.g. Custom Data Augmentation | Hi,
I've always been used to the old .fit behaviour, where I could pass in a DataLoader of my own, implementing the Dataset myself according to my needs.
With the new trainer interface, how am I supposed to tweak the dataloader?
Let's say I want to apply some random transformations to the input text, how can I d... | https://github.com/huggingface/sentence-transformers/issues/3015 | open | [] | 2024-10-23T17:11:13Z | 2024-11-15T10:32:35Z | null | msciancalepore98 |
huggingface/diffusers | 9,756 | Could not find loading_adapters.ipynb | ### Describe the bug
while reading doc [Load adapters](https://huggingface.co/docs/diffusers/using-diffusers/loading_adapters)
I tried to open in Colab to run an example on this page.
<img width="504" alt="open_colab" src="https://github.com/user-attachments/assets/0b1397f1-d266-4d83-84ab-276ea796a2a4">
I... | https://github.com/huggingface/diffusers/issues/9756 | closed | [
"bug"
] | 2024-10-23T13:03:11Z | 2024-11-01T15:27:56Z | 6 | thliang01 |
huggingface/accelerate | 3,190 | How to save the optimizer state while enabling Deepspeed to save the model | ### System Info
```Shell
Unrelated to configuration
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such... | https://github.com/huggingface/accelerate/issues/3190 | closed | [] | 2024-10-23T11:58:08Z | 2024-11-01T02:53:38Z | null | ITerydh |
huggingface/diffusers | 9,750 | Is it possible to provide img2img code for CogView3? | Is it possible to provide img2img code for CogView3? | https://github.com/huggingface/diffusers/issues/9750 | open | [
"stale",
"contributions-welcome"
] | 2024-10-23T07:40:38Z | 2024-12-20T15:04:01Z | 3 | ChalvYongkang |
huggingface/optimum | 2,076 | Problem converting tinyllama to onnx model with optimum-cli | ### System Info
```shell
main branch newest
local pip install
```
### Who can help?
@michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (g... | https://github.com/huggingface/optimum/issues/2076 | open | [
"bug"
] | 2024-10-22T06:23:51Z | 2024-10-22T06:36:42Z | 0 | hayyaw |
huggingface/diffusers | 9,731 | How to use Playground2.5 to train lora with own dataset to generate pictures of a specific style? | ### Describe the bug
Hi,
I have been working on training models using the same dataset as "stabilityai/stable-diffusion-xl-base-1.0" with the script examples/text_to_image/train_text_to_image_lora_sdxl.py, and I achieved quite promising results.
Now, I am trying to further improve the performance by switching to... | https://github.com/huggingface/diffusers/issues/9731 | open | [
"bug",
"stale"
] | 2024-10-21T12:10:12Z | 2024-11-20T15:03:04Z | null | hjw-0909 |
huggingface/diffusers | 9,727 | FLUX.1-dev dreambooth save problem trained on multigpu | ### Describe the bug
I tried to train flux using accelerate and deepspeed, but when using two L40s, the model could not be saved properly. What is the problem?
### Reproduction
train.sh:
accelerate launch --config_file config.yaml train_flux.py \
--pretrained_model_name_or_path="./FLUX.1-dev" \
--resolution=1... | https://github.com/huggingface/diffusers/issues/9727 | closed | [
"bug"
] | 2024-10-21T03:37:23Z | 2024-10-29T06:38:00Z | 1 | jyy-1998 |
huggingface/diffusers | 9,726 | FLUX.1-dev dreambooth problem trained on multigpu | ### Describe the bug
I tried to use accelerate and deepspeed to train flux, and it worked fine when using two L40s, but an error occurred when using two a100s. What is the reason?
### Reproduction
train.sh:
accelerate launch --config_file config.yaml train_flux.py \
--pretrained_model_name_or_path="./FLUX.1-dev"... | https://github.com/huggingface/diffusers/issues/9726 | closed | [
"bug"
] | 2024-10-21T03:20:44Z | 2024-10-21T03:32:42Z | 0 | jyy-1998 |
huggingface/tokenizers | 1,661 | How to Read Information in Large Tokenizer's Vocabulary | TLDR; This is how the byte-level BPE works. Main advantages are:
- Smaller vocabularies
- No unknown token
This is totally expected behavior. The byte-level BPE converts all the Unicode code points into multiple byte-level characters:
1. Each Unicode code point is decomposed into bytes (1 byte for ASCII characte... | https://github.com/huggingface/tokenizers/issues/1661 | closed | [] | 2024-10-20T13:38:53Z | 2024-10-21T07:29:43Z | null | kaizhuanren |
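The byte decomposition described above can be illustrated with plain UTF-8 encoding — a sketch only, not the actual byte-to-unicode alphabet the tokenizers library uses internally:

```python
# Sketch of the decomposition step described above: every Unicode code point
# becomes one or more bytes, which is the alphabet byte-level BPE starts from.
def to_bytes(text):
    """Return the UTF-8 byte values a byte-level BPE would start from."""
    return list(text.encode("utf-8"))

print(to_bytes("A"))   # ASCII, 1 byte  -> [65]
print(to_bytes("é"))   # 2 bytes        -> [195, 169]
print(to_bytes("好"))  # 3 bytes        -> [229, 165, 189]
```

Because every possible input decomposes into bytes from this fixed 256-symbol alphabet, the merged vocabulary stays small and no unknown token is ever needed.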
huggingface/diffusers | 9,719 | `disable_progress_bar` is ignored for some models (Loading checkpoint shards) | ### Describe the bug
When loading some pipelines, `diffusers.utils.logging.disable_progress_bar()` doesn't disable all progress bars. In particular the "Loading checkpoint shards" progress bar still appears. The "Loading pipeline components..." progress bar, however, is disabled as expected. Models I found, where this... | https://github.com/huggingface/diffusers/issues/9719 | closed | [
"bug"
] | 2024-10-19T17:42:37Z | 2024-10-19T19:29:12Z | 2 | JonasLoos |
huggingface/optimum | 2,069 | High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release | ### System Info
```shell
Optimum version: 1.22.0
Platform: Linux (Ubuntu 22.04.4 LTS)
Python version: 3.12.2
ONNX Runtime Version: 1.19.2
CUDA Version: 12.1
CUDA Execution Provider: Yes (CUDA 12.1)
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
... | https://github.com/huggingface/optimum/issues/2069 | closed | [
"question",
"Stale"
] | 2024-10-19T02:45:54Z | 2024-12-25T02:02:08Z | null | niyathimariya |
huggingface/transformers.js | 981 | Any gotcha's with manually adding items to transformers-cache? | ### Question
For [papeg.ai](https://www.papeg.ai) I've implemented that the service worker caches `.wasm` files from `jsDelivr` that Transformers.js [wasn't caching itself yet](https://github.com/huggingface/transformers.js/issues/685#issuecomment-2325125036).
I've been caching those files in the 'main' Papeg.ai... | https://github.com/huggingface/transformers.js/issues/981 | open | [
"question"
] | 2024-10-18T12:53:07Z | 2024-10-18T12:56:21Z | null | flatsiedatsie |
huggingface/transformers | 34,241 | How to output token by token using transformers? | ### System Info
...
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
...
### Expect... | https://github.com/huggingface/transformers/issues/34241 | closed | [
"Discussion",
"bug"
] | 2024-10-18T09:45:19Z | 2024-11-26T08:04:43Z | null | xuanzhangyang |
huggingface/lerobot | 477 | Collecting human operated datasets in simulation | Hello,
Can you provide info on how human supervision was provided for the simulated datasets (e.g. `lerobot/aloha_sim_transfer_cube_human`)? I am starting to setup a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect/train on some human teleop data, bu... | https://github.com/huggingface/lerobot/issues/477 | closed | [
"question",
"dataset",
"simulation"
] | 2024-10-17T23:24:17Z | 2025-10-08T08:49:32Z | null | mmurray |
huggingface/lighteval | 365 | [FT] Using lighteval to evaluate a model on a single sample, how? | Thank you to the team for the great work. I have a question. Can you please help me use lighteval to evaluate a model on a single sample?
For example, if I have an input I from MMLU and my model generates output O, how can I use lighteval to evaluate O with the Acc metric?
Thanks! | https://github.com/huggingface/lighteval/issues/365 | closed | [
"feature"
] | 2024-10-17T12:43:45Z | 2024-10-24T10:12:54Z | null | dxlong2000 |
huggingface/diffusers | 9,700 | Flux inversion | current img2img is not so well, [RF Inversion](https://rf-inversion.github.io/)) provides an inverse method for Flux real image editing, can we implement it using diffusers?
or how can we use DDIM inversion in Flux? | https://github.com/huggingface/diffusers/issues/9700 | closed | [] | 2024-10-17T07:03:59Z | 2024-12-17T16:00:30Z | 8 | yuxu915 |
huggingface/diffusers | 9,698 | Unable to Retrieve Intermediate Gradients with CogVideoXPipeline | ### Describe the bug
When generating videos using the CogVideoXPipeline model, we need to access the gradients of intermediate tensors. However, we do not require additional training or parameter updates for the model.
We tried using register_forward_hook to capture the gradients, but this approach failed because t... | https://github.com/huggingface/diffusers/issues/9698 | closed | [
"bug"
] | 2024-10-17T04:30:56Z | 2024-10-27T10:24:41Z | 4 | lovelyczli |
huggingface/diffusers | 9,697 | train_text_to_image_sdxl training effect is very poor | I use DeepSpeed for training: train_text_to_image_sdxl.py
1. The data volume is 231 pieces
2. deepspeed json
(screenshot: sdxl_deepspeed)
3. Training Script
(screenshot: sdxl_script)
and would like to load it for inference. I use the following code suggested in the readme
```
model_name = "THUDM/CogVideoX-5b-I2V"
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
model_name, torch_dtype=torch.bfloat16
).to("... | https://github.com/huggingface/finetrainers/issues/40 | closed | [] | 2024-10-16T17:25:21Z | 2024-12-03T03:01:23Z | null | Yuancheng-Xu |
huggingface/transformers.js | 975 | Supporting Multiple Pipelines? | ### Question
First of all, thank you so much for creating transformers.js! This is a fantastic library, and I had lots of fun building with it!
I have a question regarding using pipelines API: Would it be possible to start multiple pipelines? For example, instead of using just one pipeline to run inference, can we ... | https://github.com/huggingface/transformers.js/issues/975 | closed | [
"question"
] | 2024-10-16T08:06:44Z | 2024-10-21T15:58:20Z | null | kelayamatoz |
huggingface/chat-ui | 1,525 | Standardize Chat Prompt Templates to Use Jinja Format | ## Describe your feature request
Currently, the `chatPromptTemplate` for each model that can be set in env uses **Handlebars** format. However, the `chat_prompt` in the actual model's `tokenizer_config.json` uses **Jinja** format. This inconsistency is causing significant inconvenience. Since **Jinja** is widely use... | https://github.com/huggingface/chat-ui/issues/1525 | open | [
"enhancement"
] | 2024-10-16T05:26:12Z | 2024-11-20T00:44:16Z | 8 | calycekr |
huggingface/alignment-handbook | 201 | Full parameter fine-tuning keeps consuming system RAM and lead to crash | I am using alignment handbook to perform a full parameter fine-tuning of llama3 models with Deepspeed stage 2 on my own dataset which is relatively large (400k+ records).
The training was performed on a slurm cluster with two nodes (each has 4 H100 GPUs).
I have noticed that during the training, the system memory ut... | https://github.com/huggingface/alignment-handbook/issues/201 | closed | [] | 2024-10-15T15:04:18Z | 2024-10-17T18:56:53Z | 2 | xiyang-aads-lilly |
huggingface/chat-ui | 1,522 | Add example prompt field to tools | ## Describe your feature request
This lets the user specify a prompt that would call the tool. It can be shown as a demo if you're not sure how to use a tool.
We should show it somewhere in the UI so the user can easily start a conversation from that demo.
It can also be used for validating that a tool works... | https://github.com/huggingface/chat-ui/issues/1522 | open | [
"enhancement",
"front",
"back",
"tools"
] | 2024-10-15T12:42:42Z | 2024-10-15T12:42:43Z | 0 | nsarrazin |
huggingface/optimum | 2,060 | Support int8 tinyllama tflite export. | ### Feature request
A tflite exporter for decoder-only LLMs such as TinyLlama.
### Motivation
Some platforms only support full int8 ops, so only full int8 tflite models can be deployed. Is there a support plan? Looking forward to your reply, thank you.
### Your contribution
no | https://github.com/huggingface/optimum/issues/2060 | closed | [
"feature-request",
"Stale"
] | 2024-10-15T03:25:54Z | 2024-12-09T02:11:36Z | 1 | hayyaw |
huggingface/diffusers | 9,673 | high cpu usage when loading multiple loras at once. | ### Describe the bug
Hi, I was building a synthesis system using celery and diffusers,
and I found that the CPU usage of the program goes high when loading LoRAs.
It is okay when I use just one worker, but it becomes a problem when using 8 workers at once.
It happens when a LoRA is loaded for the first time, and I think it is because of p... | https://github.com/huggingface/diffusers/issues/9673 | closed | [
"bug"
] | 2024-10-15T01:49:37Z | 2024-10-15T05:07:40Z | 5 | gudwns1215 |
huggingface/datasets | 7,226 | Add R as a How to use from the Polars (R) Library as an option | ### Feature request
The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add
## Add Polars (R) option
The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has Hugging Face functionality as well.
```r
library(polars)
... | https://github.com/huggingface/datasets/issues/7226 | open | [
"enhancement"
] | 2024-10-14T19:56:07Z | 2024-10-14T19:57:13Z | null | ran-codes |
huggingface/lerobot | 472 | How to resume training with more offline steps than initially set up? | ### System Info
```Shell
- `lerobot` version: unknown
- Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.25.2
- Dataset version: 3.0.1
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.4.1 (True)
- Cuda version: 11080
- Using GPU in script?: <fill ... | https://github.com/huggingface/lerobot/issues/472 | closed | [] | 2024-10-13T19:28:04Z | 2024-10-22T05:51:42Z | null | Takuzenn |
huggingface/transformers.js | 973 | I would like to help | ### Question
Hi, I would like to help with the project. Is there anything that needs to be done?
Currently I found an issue, probably in ONNXRuntime. I will look into it next week.
Here is an example of WebGPU Whisper that works on mobile platforms including iPhone and Android: https://github.com/FL33TW00D/whi... | https://github.com/huggingface/transformers.js/issues/973 | open | [
"question"
] | 2024-10-12T20:29:07Z | 2024-10-14T19:37:51Z | null | cyberluke |
huggingface/diffusers | 9,661 | from_pretrained: filename argument removed? | **What API design would you like to have changed or added to the library? Why?**
I do believe there was a `filename` argument in the past to load a specific checkpoint in a huggingface repository. It appears that this has been removed with no replacement.
**What use case would this enable or better enable? Can yo... | https://github.com/huggingface/diffusers/issues/9661 | closed | [
"stale"
] | 2024-10-12T20:02:31Z | 2024-11-13T00:37:52Z | 4 | oxysoft |
huggingface/transformers | 34,107 | How to specific customized force_token_ids in whisper | ```
ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'> with values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f4230cfac50> has been passed to `.generate()`, but it has already been created with the values <trans... | https://github.com/huggingface/transformers/issues/34107 | closed | [
"Generation",
"Audio"
] | 2024-10-12T07:34:38Z | 2024-12-28T08:06:48Z | null | MonolithFoundation |