Column schema (GitHub issues dataset; parenthesized values are the viewer's summary statistics):
repo: string (147 distinct values)
number: int64 (range 1 to 172k)
title: string (length 2 to 476)
body: string (length 0 to 5k)
url: string (length 39 to 70)
state: string (2 distinct values)
labels: list (length 0 to 9)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64, nullable (0 to 58)
user: string (length 2 to 28)
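The rows below all share this schema. As a minimal sketch of what one record looks like in code (the `Issue` dataclass name is ours, not part of the dataset; the two sample rows are copied from the first two records below, with bodies shortened):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Issue:
    """One row of the issues dataset, following the column schema above."""
    repo: str
    number: int
    title: str
    body: str
    url: str
    state: str               # one of the 2 classes: "open" or "closed"
    labels: list[str]        # 0 to 9 label strings
    created_at: str          # ISO-8601 UTC timestamp
    updated_at: str
    comments: Optional[int]  # nullable in the dump
    user: str

# Two sample rows taken from the dump (bodies abbreviated).
rows = [
    Issue("huggingface/chat-ui", 594,
          "TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed",
          "i use the lasted main version and i have error when make chat...",
          "https://github.com/huggingface/chat-ui/issues/594",
          "closed", ["support"],
          "2023-11-29T04:28:27Z", "2024-06-17T12:48:45Z", 18, "AlexBlack2202"),
    Issue("huggingface/chat-ui", 593,
          "Show image in chat box",
          "Can I show a image by http link on chat box?",
          "https://github.com/huggingface/chat-ui/issues/593",
          "open", ["support"],
          "2023-11-29T03:17:17Z", "2023-11-30T17:57:32Z", 3, "ntqnhanguyen"),
]

# Example query: numbers of open issues carrying the "support" label.
open_support = [r.number for r in rows if r.state == "open" and "support" in r.labels]
print(open_support)  # -> [593]
```

The same filter expressed over the full dataset (e.g. via `datasets.Dataset.filter`) would follow the identical field names.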
huggingface/chat-ui #594: TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
i use the lasted main version and i have error when make chat, and in GUI , it show "Sorry, something went wrong. Please try again." TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed at new NodeError (node:internal/errors:405:5) at ReadableStreamDefaultController.enqueue (node:inte...
https://github.com/huggingface/chat-ui/issues/594
state: closed | labels: ["support"] | created: 2023-11-29T04:28:27Z | updated: 2024-06-17T12:48:45Z | comments: 18 | user: AlexBlack2202

huggingface/chat-ui #593: Show image in chat box
Can I show a image by http link on chat box?
https://github.com/huggingface/chat-ui/issues/593
state: open | labels: ["support"] | created: 2023-11-29T03:17:17Z | updated: 2023-11-30T17:57:32Z | comments: 3 | user: ntqnhanguyen

huggingface/optimum #1554: ORT Models Failing because of the latest fsdp changes on transformers Trainer.
### System Info ```shell optimum from source transformers from source ``` ### Who can help? @JingyaHuang ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset...
https://github.com/huggingface/optimum/issues/1554
state: closed | labels: ["bug"] | created: 2023-11-28T20:22:40Z | updated: 2023-12-26T18:15:02Z | comments: 6 | user: AdamLouly

huggingface/chat-ui #592: Authentication Doc and Code may be out-of-date/not working
## Description Hello, Following the doc in the `README`: https://github.com/huggingface/chat-ui#basic-and-bearer. The UI should support (if setup in the `.env.local` file) `Basic` and `Bearer` authentication, however, what I noticed since the requests have been moved to the `huggingface` module is that the author...
https://github.com/huggingface/chat-ui/issues/592
state: open | labels: ["bug", "documentation", "back"] | created: 2023-11-28T18:50:15Z | updated: 2023-11-29T13:29:22Z | comments: 1 | user: muscionig

huggingface/transformers.js #421: [Question] FeatureExtractionPipeline input length
@xenova : First of all thank you so much for your amazing work with this open source library. It opens up many possibilities. One thing that caught my attention which is [FeatureExtractionPipeline](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline) can accept any am...
https://github.com/huggingface/transformers.js/issues/421
state: closed | labels: ["question"] | created: 2023-11-28T17:28:28Z | updated: 2023-12-02T11:20:52Z | comments: null | user: devfacet

huggingface/sentence-transformers #2361: How to divide long texts into chunks using sentence-transformers?
Hello, I encounter the issue of my texts exceeding the maximum lengths allowed by pretrained models. So I intend to divide my texts into smaller chunks and then calculate the average embeddings over them. However, I find this process is not as straightforward as I initially thought. In order to properly chunk th...
https://github.com/huggingface/sentence-transformers/issues/2361
state: closed | labels: [] | created: 2023-11-28T16:35:44Z | updated: 2023-12-25T12:38:42Z | comments: null | user: srhouyu

huggingface/alignment-handbook #56: Why does the alignment-handbook account for user & system Inputs in loss calculation
I noticed that the alignment-handbook doesn't ignore the loss calculated from both the user and system inputs Based on my knowledge, many SFT choose to ignore these. I'm curious about the reasoning behind this difference.
https://github.com/huggingface/alignment-handbook/issues/56
state: open | labels: [] | created: 2023-11-28T06:03:53Z | updated: 2024-05-30T07:45:29Z | comments: 3 | user: xffxff

huggingface/transformers #27737: How to save the generated output of BarkModel to an npz file?
Hello there! I'm using the BarkModel from Hugging Face Transformers and I'm wondering how to save the generated results to an npz file. I'd like to use these saved results as history prompts for the next generation. In the [suno-ai/bark](https://github.com/suno-ai/bark) , when using the [`semantic_to_waveform`](h...
https://github.com/huggingface/transformers/issues/27737
state: closed | labels: [] | created: 2023-11-28T03:55:19Z | updated: 2024-01-10T08:03:57Z | comments: null | user: chet-chen

huggingface/alignment-handbook #55: Running on single GPU(16GB)
Hi, What is the best way to run this on my high performance laptop? Should this somehow work? Can i calculate how many days/weeks it will run? Thanks in advance Specs: > OS: Win 11 (WSL2) > CPU: Intel Core i7 12850HX > Make: Lenovo Thinkpad P16 gen 1 > Memory: 128GB DDR5-4800 (2400MHz) > GPU: Nvidia ...
https://github.com/huggingface/alignment-handbook/issues/55
state: open | labels: [] | created: 2023-11-27T19:50:12Z | updated: 2023-12-13T14:58:31Z | comments: 1 | user: patchie

huggingface/chat-ui #588: Hallucinations when using web search
I have tried to run a mistral model with the search api but the web results don't seem to be making it to the model. I'm hosting the model through text-gen-webui and encountering the exact same issue as #571. I've given it a go with [openhermes-2.5-mistral-7b.Q5_K_M.gguf](https://imgur.com/a/HQV1lGD), [it seems ...
https://github.com/huggingface/chat-ui/issues/588
state: open | labels: ["support", "websearch"] | created: 2023-11-27T17:12:22Z | updated: 2023-12-27T21:25:42Z | comments: 2 | user: NasonZ
huggingface/chat-ui #587: How do I format the ChatPromptTemplate ?
I currently have a working setup with llamacpp+mistral 7b instruct with the following loca.env : ``` MODELS=`[ { "name": "Mistral", "chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#i...
https://github.com/huggingface/chat-ui/issues/587
state: open | labels: ["support", "models"] | created: 2023-11-27T15:21:17Z | updated: 2023-12-19T07:21:50Z | comments: 5 | user: iChristGit

huggingface/candle #1379: Help request: How to compile CUDA kernels with `cc-rs`?
Hello everybody, In the process of adding PagedAttention to candle-vllm, I need to compile some CUDA kernels. I am currently trying to use `cc-rs` in a `build.rs` to automatically build the kernels. However, I am not making much progress as I have run into issues that seem to be tied to the build stage. I would r...
https://github.com/huggingface/candle/issues/1379
state: closed | labels: [] | created: 2023-11-27T14:32:10Z | updated: 2023-11-27T20:57:11Z | comments: null | user: EricLBuehler

huggingface/transformers #27726: How to load PixArtAlphaPipeline in 8bit?
I know there is example but I couldn't make it work. I am trying to make an auto installer and gradio interface for Pix Art Alpha Pipeline so common people can install and use on their Windows PCs Currently my below code working and I want to make it load in 8 bit is that possible? ``` if torch.cuda.is_available...
https://github.com/huggingface/transformers/issues/27726
state: closed | labels: [] | created: 2023-11-27T11:36:44Z | updated: 2024-01-05T08:03:56Z | comments: null | user: FurkanGozukara

huggingface/diffusers #5942: How to prepare dataset for text-guided image to image generation
As the title suggests, I want to use stable diffusion to fine-tune my own dataset. How should I build it? I have tried: --input_image --xx.jpg --xx.jpg --output_image --yy.jpg --yy.jpg metadata.csv but it did't work ,can anybody help?
https://github.com/huggingface/diffusers/issues/5942
state: closed | labels: ["stale"] | created: 2023-11-27T06:58:57Z | updated: 2024-01-09T15:06:12Z | comments: null | user: feelme0461

huggingface/alignment-handbook #52: What about the system prompt?
It seems that the system prompt is left to be `\n` or rather blank. Inspecting UltraChat (https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k?row=5), seems that no system prompt is added to the dataset. There must be something that I missed in regards to addition of system prompts to the dataset for tra...
https://github.com/huggingface/alignment-handbook/issues/52
state: open | labels: [] | created: 2023-11-27T02:55:38Z | updated: 2023-11-27T02:55:38Z | comments: 0 | user: timothylimyl

huggingface/alignment-handbook #50: What is the expected "global batch size"?
In the recipes README there is this statement: > If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant (and thus replicate our results). Q: What is the expected "global batch size"? For ex...
https://github.com/huggingface/alignment-handbook/issues/50
state: closed | labels: [] | created: 2023-11-26T21:47:41Z | updated: 2023-11-27T04:14:22Z | comments: null | user: ohmeow

huggingface/transformers.js #417: [Question] Any examples of processing video frames of a user uploaded video (specifically for depth estimation)?
Hi there, I'm wondering if there are any examples of processing video frames of a user uploaded video? I'm specifically looking to run depth estimation on each frame of a short video, but any similar example would be useful. If not, does this approach seem correct? * Use one of the approaches described [here](https...
https://github.com/huggingface/transformers.js/issues/417
state: open | labels: ["question"] | created: 2023-11-26T09:18:04Z | updated: 2023-12-10T22:51:18Z | comments: null | user: jparismorgan

huggingface/chat-ui #583: Option to share the web interface locally/online ?
I wish we could make the ui available on phone/mac or even outside the local network. For example in SillyTavern (https://github.com/SillyTavern/SillyTavern) You can either open it up to all devices in the local network or open a cloudflare tunnel to access it through a link. Is that possible to add?
https://github.com/huggingface/chat-ui/issues/583
state: open | labels: ["enhancement", "back"] | created: 2023-11-26T00:44:08Z | updated: 2024-04-22T16:45:44Z | comments: 2 | user: iChristGit

huggingface/candle #1375: Question: How to interface a C++ API `torch::Tensor` with `candle_core::Tensor`?
I was wondering if there is a way to use a C++ API that accepts a Pytorch `torch::Tensor` with a Candle `candle_core::Tensor`? For reference, I want to use [this](https://github.com/vllm-project/vllm/blob/main/csrc/ops.h) C++ API. Can I convert between tensor types? @LaurentMazare, would it be possible to use [tch-r...
https://github.com/huggingface/candle/issues/1375
state: closed | labels: [] | created: 2023-11-25T19:05:27Z | updated: 2023-11-25T23:04:03Z | comments: null | user: EricLBuehler

huggingface/accelerate #2187: how to collect outputs(not tensor dtype) on multi gpus
As the toy example below, ``` val_dataset = ['a', 'b', 'c', 'd', 'e'] val_dataloader = DataLoader( val_dataset, batch_size=2 ) accelerator = Accelerator() val_dataloader = accelerator.prepare(val_dataloader) for step, batch in enumerate(val_dataloader): print(batch, accelerator.device) ``` ...
https://github.com/huggingface/accelerate/issues/2187
state: closed | labels: [] | created: 2023-11-25T02:51:21Z | updated: 2023-11-27T06:07:19Z | comments: null | user: shliu0
huggingface/chat-ui #581: Trying to set up with TGI
I have installed TGI using docker, I can see the api docs at http://127.0.0.1:8080/docs/ But still cannot set up the env.local file, I have tried to set it up with the example, but always failing. ![image](https://github.com/huggingface/chat-ui/assets/20077386/032a02c0-9d3b-473e-9c1b-a3c948eb06d3) ![image](https://g...
https://github.com/huggingface/chat-ui/issues/581
state: open | labels: ["support"] | created: 2023-11-24T19:20:27Z | updated: 2023-12-19T06:02:25Z | comments: 2 | user: iChristGit

huggingface/transformers.js #412: [Question] Does any version support Node 14
Hi, I have tried downgrading the library to version 2, and even to 1, but that one was missing types. Is there some way to be able to use it with Node 14? I have seen that mostly the issues are with nullish coalescing characters, so wanted to make sure if there could be other issues that tie it to Node 18+, and a...
https://github.com/huggingface/transformers.js/issues/412
state: closed | labels: ["question"] | created: 2023-11-24T16:01:54Z | updated: 2023-12-04T13:16:26Z | comments: null | user: Ncifra

huggingface/hf_transfer #20: [Usage] How to enable the progress bar?
I've installed `hf_transfer-0.1.4`. But when I use `huggingface-cli download`, the progress bar mentioned [here](https://huggingface.co/docs/huggingface_hub/guides/download#faster-downloads) seems to be disabled at default. And I failed to figure out how to enable it. Could anyone be kind enough to provide some guid...
https://github.com/huggingface/hf_transfer/issues/20
state: closed | labels: [] | created: 2023-11-24T08:13:00Z | updated: 2023-11-27T12:15:10Z | comments: null | user: tongyx361

huggingface/gsplat.js #39: How to implement point clouds render?
Hi, great work! I see that this library is upon [antimatter15/splat](https://github.com/antimatter15/splat), but this library does not have the same render which is very similar to point clouds like that lib. I want to know how to implement this function base on your gsplat library? By the way, do you have any documen...
https://github.com/huggingface/gsplat.js/issues/39
state: open | labels: [] | created: 2023-11-24T07:27:33Z | updated: 2024-01-22T21:12:06Z | comments: null | user: xinnai

huggingface/alignment-handbook #46: Weird DPO loss
Hi, I would like to raise some attention to issue #38. It seems that the DPO-Lora training loss (red line) drops abruptly at the beginning of each epoch, which seems weird. (I tried Lora model global batch size 64, multi_gpu acceleration, 8GPUs, learning rate 1e-4, others same suggested) In the mean time, the f...
https://github.com/huggingface/alignment-handbook/issues/46
state: open | labels: [] | created: 2023-11-24T03:07:46Z | updated: 2024-05-28T07:09:10Z | comments: 1 | user: ChenDRAG

huggingface/diffusers #5912: How to set config in VaeImageProcessor?
I created a `StableDiffusionControlNetImg2ImgPipeline` and I want to manually set the config `do_normalize` in `VaeImageProcessor`. I wonder how can I set? I look for it in the pipe.vae.config and see nothing about it.
https://github.com/huggingface/diffusers/issues/5912
state: closed | labels: ["stale"] | created: 2023-11-23T12:54:22Z | updated: 2023-12-26T21:29:17Z | comments: null | user: youyuge34

huggingface/chat-ui #576: Cannot build using latest Chat UI Space template
Using the Dockerfile created from the ChatUI-Space template, but cloning it to a local machine and trying to build it fails at `npm run build` > #18 [chatui-builder 12/12] RUN npm run build #0 0.673 #0 0.673 > chat-ui@0.6.0 build #0 0.673 > vite build #0 0.673 #0 1.678 vite v4.3.9 building SSR bundle for produc...
https://github.com/huggingface/chat-ui/issues/576
state: open | labels: ["support", "spaces"] | created: 2023-11-23T12:23:06Z | updated: 2023-11-30T14:11:32Z | comments: 1 | user: simon376

huggingface/transformers #27666: how to remove punctuation marks.
### System Info i trained t5-large for translation. the result of train was good But when i input some sentence, the result is like that "What are you doing now?.??....." [?.??......] <- how to delete that punctuation marks. i put some parameter like max_length. But i can not solve that situation ### Who ...
https://github.com/huggingface/transformers/issues/27666
state: closed | labels: [] | created: 2023-11-23T07:21:33Z | updated: 2023-12-31T08:03:43Z | comments: null | user: chanyong-owl

huggingface/blog #1655: how to scale fine-tuning whisper in English?
I'm attempting to fine-tune whisper using the excellent hugging face tut: https://huggingface.co/blog/fine-tune-whisper. The delta between the tut's case and my case is that I am using English which has 1M more test cases (and also I'm using big GPUs so I am using `whisper-large-v3`). No matter how much compute I th...
https://github.com/huggingface/blog/issues/1655
state: open | labels: [] | created: 2023-11-22T22:45:29Z | updated: 2024-03-10T06:55:47Z | comments: null | user: jsteinberg-rbi

huggingface/datasets #6446: Speech Commands v2 dataset doesn't match AST-v2 config
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover,...
https://github.com/huggingface/datasets/issues/6446
state: closed | labels: [] | created: 2023-11-22T20:46:36Z | updated: 2023-11-28T14:46:08Z | comments: 3 | user: vymao
huggingface/alignment-handbook #45: Reproducing of Lora Model Result on MT-Bench
Recently, I attempted to fit the DPO on my own dataset. Initially, I tried to reproduce the results of your LORA model( 7.43 on MT-Bench). However, I encountered some issues. Despite using all your parameters and data, here are my results on MT-Bench: | Model | MT-Bench | |--------|--------| | Zephyr-SFT-Lora-Ow...
https://github.com/huggingface/alignment-handbook/issues/45
state: open | labels: [] | created: 2023-11-22T03:42:32Z | updated: 2023-12-11T17:09:32Z | comments: 27 | user: wlhgtc

huggingface/optimum #1551: Running llama-2-13b resulted in `Killed`
### System Info ```shell This is my run.py code: import torch import transformers import requests print(torch.cuda.is_available()) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Load model and adapter weights from local directory model = transformers.AutoMo...
https://github.com/huggingface/optimum/issues/1551
state: closed | labels: ["bug"] | created: 2023-11-21T13:11:40Z | updated: 2024-01-09T15:58:09Z | comments: 1 | user: maxloopinmok

huggingface/optimum-quanto #32: Are threre some exmples show how to export onnx model ? torch.onnx.export
https://github.com/huggingface/optimum-quanto/issues/32
state: closed | labels: [] | created: 2023-11-21T11:33:37Z | updated: 2024-03-13T08:15:51Z | comments: null | user: youkiwang

huggingface/transformers #27615: How to get the number of trainable parameters for a hf model
### Feature request ' peft_parameters = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=8, bias="none", task_type="CAUSAL_LM" ) train_params = TrainingArguments( output_dir="./results_modified", num_train_epochs=1, per_device_train_batch_size=4, gradient_accumulation_step...
https://github.com/huggingface/transformers/issues/27615
state: closed | labels: [] | created: 2023-11-21T00:37:01Z | updated: 2023-11-21T19:28:32Z | comments: null | user: mathmax12

huggingface/chat-ui #571: trying to replicate the api search with the local search option
When I try searching for information on the site (huggingface.co/chat) it works fine and gives correct information, but when doing the same thing using the same model I get hallucinations.. Ive tried all sorts of temperature settings and models. This is the result locally: ![image](https://github.com/huggingface/ch...
https://github.com/huggingface/chat-ui/issues/571
state: closed | labels: ["support"] | created: 2023-11-20T20:57:23Z | updated: 2023-12-05T15:19:49Z | comments: 29 | user: iChristGit

huggingface/trl #1014: How to avoid training radomness?
I’m using the `trl.SFTTrainer` to fine-tune Vicuna, and I’m using the same data and parameters for fine-tuning. However, I’ve noticed that even after setting: ``` def set_seed(seed=42): # set seed for all possible avenues of stochasticity numpy.random.seed(seed=seed) random.seed(seed) torch.manu...
https://github.com/huggingface/trl/issues/1014
state: closed | labels: [] | created: 2023-11-20T16:47:28Z | updated: 2024-01-03T15:05:11Z | comments: null | user: zhaochenyang20

huggingface/candle #1349: How to pass bounding box instead of points in the segment-anything example?
Is it possible to pass a bounding box instead of points when using the segment-anything model? Is this just 4 points?
https://github.com/huggingface/candle/issues/1349
state: open | labels: [] | created: 2023-11-20T15:44:22Z | updated: 2023-11-20T15:44:22Z | comments: null | user: svelterust

huggingface/alignment-handbook #43: Did you use RMSprop or AdamW as the optimizer?
Hi to whoever is reading this πŸ€— ## Question After reading the Zephyr pre-printed paper https://arxiv.org/pdf/2310.16944.pdf and going through the configuration files here, I saw that there was a mismatch between the optimizer used in https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-...
https://github.com/huggingface/alignment-handbook/issues/43
state: closed | labels: [] | created: 2023-11-20T15:23:03Z | updated: 2024-03-07T06:55:07Z | comments: 3 | user: alvarobartt

huggingface/sentence-transformers #2359: How to evaluate the result of dataset that does not have any labels
Hi, I was trying to look at the different evaluation metrics that are provided to SentenceTransformers. I have a column of text in my dataset that I compare against a query and get the top k similarity using cosine similarity. I do not know if there is any method to evaluate the result. Should I consider the cosine ...
https://github.com/huggingface/sentence-transformers/issues/2359
state: open | labels: [] | created: 2023-11-20T14:52:21Z | updated: 2023-11-20T14:52:21Z | comments: null | user: Yarmohamadshr

huggingface/alignment-handbook #42: How to QLoRA training with ZeRO-3 on two or more GPUs?
I added a 4-bit load after the command LoRA training with ZeRO-3 on two or more GPUs to achieve a mix of QLoRA and ZeRO-3. But the program encountered the following error: RuntimeError: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<loc...
https://github.com/huggingface/alignment-handbook/issues/42
state: open | labels: [] | created: 2023-11-20T14:13:36Z | updated: 2024-05-17T00:27:27Z | comments: null | user: Di-Zayn
huggingface/transformers #27600: How to get input sentence embedding from Llama or Llama2?
I'm trying to get the sentence embedding that I input, I checked some common practice to do it, but I'm not sure I'm doing the it right. Who may be help? @gante Thanks if you can be help. my code is as below: ``` model = LlamaForCausalLM.from_pretrained( args.pretrained_name_or_path, torch_dtype=torch...
https://github.com/huggingface/transformers/issues/27600
state: closed | labels: [] | created: 2023-11-20T13:18:08Z | updated: 2023-11-22T14:32:26Z | comments: null | user: waterluck

huggingface/transformers #27592: How to always use initial prompt in Whisper?
I checked this PR (#22496 ) but still can't figure out how to always use the initial prompt. is it possible to provide a use case?
https://github.com/huggingface/transformers/issues/27592
state: closed | labels: [] | created: 2023-11-19T18:35:23Z | updated: 2023-11-20T08:29:41Z | comments: null | user: GanymedeNil

huggingface/pytorch-image-models #2038: how to run the efficientmit.py
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternat...
https://github.com/huggingface/pytorch-image-models/issues/2038
state: closed | labels: ["enhancement"] | created: 2023-11-19T02:50:59Z | updated: 2023-11-19T17:16:48Z | comments: null | user: 1377534928

huggingface/chat-ui #566: Is Chat-UI gonna support the new Assistant API?
They store the threads, and there's also multi-modal support
https://github.com/huggingface/chat-ui/issues/566
state: open | labels: ["enhancement", "models"] | created: 2023-11-19T02:06:44Z | updated: 2023-11-20T08:42:49Z | comments: 1 | user: wayliums

huggingface/alignment-handbook #40: How do I get the training scrips to utilize all my GPUs?
Hello there, I'm running this script: ``` ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml ``` ... but on my machine with 2x3090s ... only GPU 0 is being utilized. What do I ...
https://github.com/huggingface/alignment-handbook/issues/40
state: closed | labels: [] | created: 2023-11-19T00:11:24Z | updated: 2023-11-19T01:20:21Z | comments: null | user: ohmeow

huggingface/transformers.js #401: [Question | Bug] What am I doing wrong while using the `question-answering` model?
## The Problem I'm trying to use `question-answering` model to answer simple questions in a given context. But I always get a TypeError about floats. I guess that's an internal issue, because at top level of code I am not using floating point numbers. But maybe I am doing something wrong. By the way, I'm using Ty...
https://github.com/huggingface/transformers.js/issues/401
state: closed | labels: ["question"] | created: 2023-11-18T12:58:50Z | updated: 2023-11-19T12:44:00Z | comments: null | user: AyresMonteiro

huggingface/transformers.js #399: [Question] Is it possible to encode and decode with `AutoTokenizer.from_pretrained` and keep spaces?
I'm trying to build a pure JS online tokenizer, visually similar to https://github.com/1rgs/tokenwiz (but without the Python backend) I'm doing something like: ```js const model = await AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1') const textInput = `[INST] <<SYS>> You are a friendly Llama. <</SY...
https://github.com/huggingface/transformers.js/issues/399
state: closed | labels: ["question"] | created: 2023-11-17T18:46:05Z | updated: 2023-11-17T20:18:02Z | comments: null | user: daaain

huggingface/alignment-handbook #39: Why zephyr-7b-dpo-lora is finetuned from mistralai/Mistral-7B-v0.1 instead of zepher-7b-sft model?
There is a misalignment between zephyr-7b-dpo-lora and zephyr-7b-dpo-full. The former one is finetuned from mistralai/Mistral-7B-v0.1. The latter is finetuned from zephyr-7b-dpo-full. I wonder what causes this misalignment ? Also, have you benchmarked performance improvement of the lora finetunning script? In m...
https://github.com/huggingface/alignment-handbook/issues/39
state: open | labels: [] | created: 2023-11-17T18:11:59Z | updated: 2024-03-21T19:18:08Z | comments: 2 | user: ChenDRAG

huggingface/optimum #1545: Add support to export facebook encodec models to ONNX
### Feature request When I try to use optimum-cli to export the facebook/encodec_32khz model I get this error: ``` % optimum-cli export onnx --model facebook/encodec_32khz encodec.onnx Framework not specified. Using pt to export to ONNX. /Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-pack...
https://github.com/huggingface/optimum/issues/1545
state: open | labels: ["feature-request", "onnx"] | created: 2023-11-17T11:16:01Z | updated: 2025-12-12T06:23:33Z | comments: 6 | user: giamic

huggingface/peft #1142: How to do Gradient Checkpoint + LoRA
### System Info <img width="570" alt="image" src="https://github.com/huggingface/peft/assets/18441985/9b3ae040-d78a-477b-a9ec-6ab26b687a68"> ### Who can help? I need help with using LoRA + gradient checkpointing. Using the reentrant option appears to be the solution, but it slows down training a lot, for LLam...
https://github.com/huggingface/peft/issues/1142
state: closed | labels: [] | created: 2023-11-17T09:34:16Z | updated: 2025-10-06T10:22:58Z | comments: null | user: tcapelle
huggingface/accelerate #2164: how to get same timestamp in different subprocesses while using accelerate launch
I would like to get a unique timestamp to name my result folder like below ``` def get_time_string() -> str: x = datetime.datetime.now() return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}" ``` , however, it sometimes will get a different timestamp in different...
https://github.com/huggingface/accelerate/issues/2164
state: closed | labels: [] | created: 2023-11-17T06:36:00Z | updated: 2023-11-29T07:30:04Z | comments: null | user: shliu0

huggingface/open_asr_leaderboard #14: How to run calc_rtf.py? Cannot reproduce rtf results.
There is no guide on how to execute calc_rtf.py. For example, this one https://github.com/huggingface/open_asr_leaderboard/blob/main/transformers/calc_rtf.py references 4469669.mp3. But there is no such file in the repo from what I see. So the results are not reproducible. Same for https://github.com/huggingface/...
https://github.com/huggingface/open_asr_leaderboard/issues/14
state: open | labels: [] | created: 2023-11-16T21:14:31Z | updated: 2023-11-16T21:14:31Z | comments: null | user: galv

huggingface/transformers.js #397: [Question] Tokenizing a base64 for string is very slow?
Hi! I happened to be encoding some files using transformers.js and one of the files happened to have some base64 in it. What I noticed is that base64 takes an enormously long time, relative to the number of tokens produced. Tokenizing a string of english text to the same number of tokens is far quicker. For example: ...
https://github.com/huggingface/transformers.js/issues/397
state: closed | labels: ["question"] | created: 2023-11-16T20:27:51Z | updated: 2023-11-17T19:48:57Z | comments: null | user: samlhuillier

huggingface/transformers.js #396: [Question] How to use transformer.js in langchain
Hi all, I'm writing a custom LLM to use transformer.js with langchain. Does a structure like this make sense? Any advice for optimizing it or best practices to apply? Any suggestions or feedback would be greatly appreciated 😊 πŸš€ ``` import { pipeline } from "@xenova/transformers"; import { LLM } from "langcha...
https://github.com/huggingface/transformers.js/issues/396
state: open | labels: ["question"] | created: 2023-11-16T17:27:52Z | updated: 2023-12-21T16:27:28Z | comments: null | user: mrddter

huggingface/autotrain-advanced #349: How to reload the checkpoints for LLM finetuning?
May I ask how to resume from the latest checkpoint using `autotrain llm` if it crashed. I only found one from the `dreambooth` trainers, but I cannot find the `resume_from_checkpoint` anywhere else. I was wondering if it has currently not fully supported this feature yet or I was missing something? It would be supe...
https://github.com/huggingface/autotrain-advanced/issues/349
state: closed | labels: ["stale"] | created: 2023-11-16T11:51:25Z | updated: 2024-02-02T08:58:47Z | comments: null | user: xihajun

huggingface/trl #1004: Guidance on how to fix the scheduler and ConstantLengthDataset
Hello, I want to fix the issue related to the `ConstantLengthDataset` not knowing the dataset's length in advance. Besides having a broken progressbar and a wrong epoch count, the only problem I see is related to the scheduler, as most of us are training using cosine with warmup; if we want a complete cycle, the ...
https://github.com/huggingface/trl/issues/1004
state: closed | labels: [] | created: 2023-11-16T10:58:30Z | updated: 2024-01-05T15:05:18Z | comments: null | user: tcapelle

huggingface/diffusers #5816: low attention to prompt in SDXL
Hi, One of the difference between DALLE3 and SDXL is that SDXL pay less attention to prompt, Is there a way to solve this problem? I don't Know. for example changing the text encoder to other can help to solve this problem ? Thanks
https://github.com/huggingface/diffusers/issues/5816
state: closed | labels: ["question", "stale"] | created: 2023-11-16T07:24:15Z | updated: 2024-01-09T15:06:55Z | comments: null | user: saeedkhanehgir

huggingface/transformers #27526: How to preupgrade transformer cache and build the upgraded into docker image?
### System Info Linux ubuntu 22.04 Docker 24.05 I am not sure if this is the right place for this issue. Apology if it isn't and please direct me to the right place. I have been using transformer in docker images that are deployed at runpod/replicate. The containers of the images could go cold and be relaunch...
https://github.com/huggingface/transformers/issues/27526
state: closed | labels: [] | created: 2023-11-16T02:53:54Z | updated: 2023-12-24T08:03:44Z | comments: null | user: lanyusan

huggingface/optimum #1538: Optimum supports AMDGPUγ€€οΌŸ
### Feature request Onnxruntime supports AMD-ROCM , how to compile on optimum ### Motivation Our company is currently testing amdgpu and has learned that optim can accelerate inference on CUDA. We are not sure if it will support ROCM in the future? ### Your contribution none
https://github.com/huggingface/optimum/issues/1538
state: closed | labels: [] | created: 2023-11-15T04:15:21Z | updated: 2024-01-09T16:10:39Z | comments: 1 | user: taikai-zz

huggingface/tokenizers #1391: How to split special token in encode?
i have converted a slow tokenizer into PreTrainedTokenizerFast, and get a tokenizer.json file.But i found that this tokenizer did not split special tokens.Here is my add_tokens in tokenizer.json: ` tokenizer.add_special_tokens( [ AddedToken("[gMASK]", normalized=True, single_word=...
https://github.com/huggingface/tokenizers/issues/1391
state: closed | labels: [] | created: 2023-11-15T03:41:22Z | updated: 2024-01-04T06:26:38Z | comments: null | user: leizhao1234
huggingface/diffusers
5,786
How to load a precomputed dataset in the cache folder on a different machine?
**Is your feature request related to a problem? Please describe.** Some slurm cluster may have a limit on time allocation, so I'd like to precompute the dataset on my local machine then move it to a location on the cluster to directly reuse it. **Describe the solution you'd like** I saw load dataset automatica...
https://github.com/huggingface/diffusers/issues/5786
closed
[ "question", "stale" ]
2023-11-14T02:26:00Z
2024-01-09T15:07:14Z
null
linnanwang
huggingface/alignment-handbook
22
How to perform full parameter finetuning without A100 GPUs
Hi, thank you for your great work! I'd like to reproduce full parameter fine-tuning of dpo training. However I only have 10 * Nvidia A40 GPUs (46 Gbs memory each). I tried the command `CUDA_VISIBLE_DEVICES=2,3,4,5,6,7,8,9 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deeps...
https://github.com/huggingface/alignment-handbook/issues/22
open
[]
2023-11-14T01:33:41Z
2024-02-14T13:47:16Z
null
ChenDRAG
huggingface/controlnet_aux
83
How to get keypoints output .json file like original OpenPose ?
https://github.com/huggingface/controlnet_aux/issues/83
open
[]
2023-11-13T21:55:35Z
2023-11-17T21:04:49Z
null
mayank64ce
huggingface/chat-ui
550
Can this ui be run on a colab?
I am wondering if this ui can be used inside a colab.
https://github.com/huggingface/chat-ui/issues/550
closed
[ "question" ]
2023-11-13T16:58:35Z
2023-11-15T16:17:10Z
null
amida47
huggingface/text-generation-inference
1,258
How to deal with bias=True Model
### Feature request How to deploy model within bias=True. Example: vinai/PhoGPT-7B5-Instruct ### Motivation . ### Your contribution .
https://github.com/huggingface/text-generation-inference/issues/1258
closed
[ "Stale" ]
2023-11-13T09:20:08Z
2024-01-20T01:46:38Z
null
anhnh2002
huggingface/trl
985
how to setup epoch number in SFTTrainer?
Here is my example code:

```python
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("IMDB", split="train")
trainer = SFTTrainer(
    "sshleifer/tiny-gpt2",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```
https://github.com/huggingface/trl/issues/985
closed
[]
2023-11-12T20:02:31Z
2023-11-14T18:29:53Z
null
KlausikPL
huggingface/diffusers
5,774
How to fine tune Stable Diffusion on custom dataset {caption, image}?
I need to fine-tune SD on a custom dataset {caption, image} with a custom size. Could you please give me a tutorial for this task?
https://github.com/huggingface/diffusers/issues/5774
closed
[ "stale" ]
2023-11-12T14:52:23Z
2024-01-09T15:07:21Z
null
npk7264
huggingface/diffusers
5,772
Is webdataset faster than the default huggingface datasets?
### Describe the bug Hi, I see there is a large scale training example https://github.com/huggingface/diffusers/blob/controlnet_webdatasets/examples/controlnet/train_controlnet_webdatasets.py using webdatasets, which suggests that webdatasets may have better data loading performance than huggingface datasets that is o...
https://github.com/huggingface/diffusers/issues/5772
closed
[ "question", "stale" ]
2023-11-12T08:40:22Z
2024-01-09T15:07:23Z
null
Luciennnnnnn
huggingface/chat-ui
549
How can I use this offline with local models?
I really like the web_search feature; can I somehow use it with local models? I tried, but I don't see any bat files to launch it.
https://github.com/huggingface/chat-ui/issues/549
closed
[ "support" ]
2023-11-11T23:59:09Z
2023-11-20T21:38:27Z
9
iChristGit
huggingface/diffusers
5,766
Image+Image+Text to Image
Maybe a dumb question, but I can't seem to find good ways to do multiple-image-to-image modeling. I looked into Multi-ControlNet but I can't tell how to use it. I'm trying to train a model that takes in 2 images and a prompt: 1. a template base image (e.g. a photo of a room in someone's house with a painting on the...
https://github.com/huggingface/diffusers/issues/5766
closed
[ "question", "stale" ]
2023-11-11T20:15:27Z
2024-01-09T15:07:25Z
null
tval2
huggingface/optimum
1,531
Pytorch + TensorRT support
### Feature request Is it possible to start supporting Pytorch and TensorRT inference optimizations? There are a lot of use cases where it could be useful, and optimum seems to already have a lot of good tooling to enable this. ### Motivation Using Pytorch or TensorRT in production is painful today, and requires a l...
https://github.com/huggingface/optimum/issues/1531
closed
[ "feature-request", "Stale" ]
2023-11-11T17:27:47Z
2025-02-27T02:04:37Z
2
youssefadr
huggingface/optimum
1,530
AnimateDiff support?
### Feature request Hi! Can you please support AnimateDiff for ONNX in the future? It would be great for both GPU DirectML and CPU users. Kind regards ### Motivation Not a bug, just a feature that I really would like to see for us DirectML and CPU users for ONNX ### Your contribution i would but i don't know an...
https://github.com/huggingface/optimum/issues/1530
closed
[ "feature-request", "Stale" ]
2023-11-11T14:21:25Z
2025-03-01T02:08:38Z
1
Amin456789
huggingface/autotrain-advanced
338
How to
I successfully trained the Mistral 7B sharded model on Google Colab using AutoTrain. Now, how can I do inference? I am unable to merge the adapter with the base model. Can someone please share the code for inference with me? Please help.
https://github.com/huggingface/autotrain-advanced/issues/338
closed
[ "stale" ]
2023-11-11T12:58:24Z
2024-05-06T13:35:52Z
null
eviIgenius
huggingface/diffusers
5,761
The cost of consistency decoder
### Describe the bug I replace the original VAE decoder of a stable diffusion model with the Consistency Decoder, and then CUDA out of memory occurs. My question is: how large is the Consistency Decoder compared to the original VAE decoder? - `diffusers` version: 0.23.0 - Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35...
https://github.com/huggingface/diffusers/issues/5761
closed
[ "question", "stale" ]
2023-11-11T03:54:20Z
2024-01-09T15:07:30Z
null
Luciennnnnnn
huggingface/candle
1,319
Question: How to edit specific indices of a tensor?
Hello everybody, While developing beam search for candle-sampling, I have run into a small issue where it appears there is no way to edit specific indices of a tensor after creation. For example, in Python the following works for lists (and very similar for pytorch tensors): ```python values = [[1,2,3],[4,5,6]] ...
https://github.com/huggingface/candle/issues/1319
closed
[]
2023-11-11T01:10:42Z
2023-11-26T15:53:19Z
null
EricLBuehler
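The pattern usually suggested for this situation (immutable tensors) is to build a 0/1 mask and combine old and new values arithmetically: `new = old * (1 - mask) + update * mask`. A framework-free sketch in plain Python — nested lists stand in for tensors, so no candle or torch API is assumed here:

```python
# Editing specific indices of an immutable 2-D "tensor" via the mask trick:
# new = old * (1 - mask) + update * mask.
# Plain Python lists stand in for tensors, so the idea stays framework-free.

def masked_update(values, mask, updates):
    """Return a new matrix where cells with mask == 1 take values from `updates`."""
    return [
        [v * (1 - m) + u * m for v, m, u in zip(vrow, mrow, urow)]
        for vrow, mrow, urow in zip(values, mask, updates)
    ]

values  = [[1, 2, 3], [4, 5, 6]]
mask    = [[0, 1, 0], [0, 0, 1]]   # edit positions (0, 1) and (1, 2)
updates = [[0, 9, 0], [0, 0, 7]]

print(masked_update(values, mask, updates))  # [[1, 9, 3], [4, 5, 7]]
```

In a tensor library the same expression maps to elementwise multiply and add, which every framework supports even when in-place indexing is unavailable.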
huggingface/datasets
6,400
Safely load datasets by disabling execution of dataset loading script
### Feature request Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation This is a security vulnerability that could lead to arbitrary code e...
https://github.com/huggingface/datasets/issues/6400
closed
[ "enhancement" ]
2023-11-10T23:48:29Z
2024-06-13T15:56:13Z
4
irenedea
huggingface/diffusers
5,758
how to run huggingface model in replicate
### Describe the bug I am trying to run the code from https://medium.com/ai-artistry/streamlining-ai-agent-development-with-autogen-and-llava-b84fb0d25262 by adding https://huggingface.co/LLaVA-VL/llava_plus_v0_7b instead of the replicate code. My question is: what are the challenges of running the huggingface model using replicate? somet...
https://github.com/huggingface/diffusers/issues/5758
closed
[ "bug" ]
2023-11-10T20:31:04Z
2023-11-11T03:33:51Z
null
andysingal
huggingface/diffusers
5,756
How do we generate an LCM LoRA of an existing model?
I generated a DreamBooth model from SDXL base 1.0. To get the speed boost of LCM, I need to generate an LCM LoRA from this model. How do we do it? I don't see documentation.
https://github.com/huggingface/diffusers/issues/5756
closed
[ "stale" ]
2023-11-10T15:44:52Z
2023-12-27T13:28:38Z
null
FurkanGozukara
huggingface/chat-ui
548
MaxListenersExceededWarning: Possible EventEmitter memory leak detected.
Running dev, and no errors until i try to write into the chat interface on the website locally hosted in WSL2 (win11). Worked before i updated to version v.0.6.0 error message in web ui: ![image](https://github.com/huggingface/chat-ui/assets/1792727/adc2f421-6cb7-400d-b559-1240b13ff349) Error message in ter...
https://github.com/huggingface/chat-ui/issues/548
closed
[ "support" ]
2023-11-10T13:56:03Z
2023-11-16T20:02:07Z
7
patchie
huggingface/sentence-transformers
2,355
How to Finetune a Clip Model with Custom Data
I want to train on my custom data to get high-accuracy embeddings of my image data. Are there any scripts or documentation that would be helpful? Thank you.
https://github.com/huggingface/sentence-transformers/issues/2355
closed
[]
2023-11-10T07:27:23Z
2023-12-25T03:23:20Z
null
unmo
huggingface/diffusers
5,742
where is the Parameter Description?
https://github.com/huggingface/diffusers/issues/5742
closed
[]
2023-11-10T07:07:03Z
2023-11-13T18:01:56Z
null
MRG-DOT
huggingface/setfit
436
【question】could you tell me the latest embedding model usable by setfit?
Hi! This is not a bug report but a question. From my understanding, when we use SetFit, we have to choose an embedding model from sentence-transformers. But now, I feel those models are kind of old, and I would like to know the latest embedding models that can be used by SetFit. Thank you in advance.
https://github.com/huggingface/setfit/issues/436
closed
[ "question" ]
2023-11-10T02:10:01Z
2023-11-12T01:02:24Z
null
Yongtae723
huggingface/datasets
6,394
TorchFormatter images (H, W, C) instead of (C, H, W) format
### Describe the bug Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy. However, pytorch normally uses (C, H, W) format. Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways. If not using the format it is possible to ...
https://github.com/huggingface/datasets/issues/6394
closed
[]
2023-11-09T16:02:15Z
2024-04-11T12:40:16Z
9
Modexus
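The reordering the issue above asks about is `image.permute(2, 0, 1)` in torch. A dependency-free sketch of the same (H, W, C) → (C, H, W) shuffle over nested lists, just to make the index gymnastics concrete (torch itself is not assumed here):

```python
# Convert an (H, W, C) nested-list "image" to (C, H, W) —
# the same reordering torch performs with image.permute(2, 0, 1).

def hwc_to_chw(img):
    H, W, C = len(img), len(img[0]), len(img[0][0])
    return [
        [[img[h][w][c] for w in range(W)] for h in range(H)]
        for c in range(C)
    ]

# A 1x2 "image" with 3 channels per pixel.
img_hwc = [[[1, 2, 3], [4, 5, 6]]]   # shape (1, 2, 3)
print(hwc_to_chw(img_hwc))           # [[[1, 4]], [[2, 5]], [[3, 6]]]  -> shape (3, 1, 2)
```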
huggingface/transformers.js
386
[Question] Any plan to rewrite js in typescript ?
I'm doing it for my own usage, although I'm losing the benefit of upgrades. Typings are useful, you know :) While doing it I found this in models.js, line 1027:

```javascript
let sampledTokens = sampler(logits);
```

should be

```javascript
let sampledTokens = sampler.sample(logits);
```
https://github.com/huggingface/transformers.js/issues/386
closed
[ "question" ]
2023-11-09T13:41:10Z
2023-11-15T18:18:39Z
null
pnocera
huggingface/candle
1,304
How to repeat_interleave on Tensor?
There is a [repeat_interleave](https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html) function in PyTorch, but I can't find an analog in candle. I need to convert `tensor([[6110, 1]])` to `tensor([[6110, 1], [6110, 1], [6110, 1]])`. I found some examples [like](https://github.com/huggingface/candle/blob/...
https://github.com/huggingface/candle/issues/1304
closed
[]
2023-11-09T06:31:04Z
2023-11-09T08:16:19Z
null
bragovo
huggingface/diffusers
5,709
How to run stable diffusion pipeline using multithreading in fastapi ?
Hi, I have created a stable diffusion API using FastAPI, and it works perfectly fine if sequential requests are made. I have tried to implement multithreading in the API to run multiple requests concurrently, but the problem is that every request's output generation time depends on the total number of requests that a...
https://github.com/huggingface/diffusers/issues/5709
closed
[ "stale" ]
2023-11-08T16:19:45Z
2024-01-09T15:07:46Z
null
minkvirparia
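The usual remedy for the situation above is to accept requests concurrently but serialize access to the single GPU pipeline with a lock, so each request's latency depends on its queue position rather than on total load. A sketch of just that concurrency pattern — no FastAPI or diffusers is assumed; `fake_pipeline` is a stand-in for the real pipeline call:

```python
# Serialize a shared "GPU pipeline" behind a lock while handling
# requests from a thread pool: one generation runs at a time,
# instead of all of them contending for the device.
import threading
from concurrent.futures import ThreadPoolExecutor

gpu_lock = threading.Lock()

def fake_pipeline(prompt):          # stand-in for a diffusers pipeline call
    return f"image for {prompt!r}"

def handle_request(prompt):
    with gpu_lock:                  # only one generation at a time
        return fake_pipeline(prompt)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, ["cat", "dog", "fox"]))
print(results)
```

With a real pipeline, the same shape works: move the locked call into a worker (or a queue consumer) so the web framework's event loop is never blocked by generation.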
huggingface/gsplat.js
23
How do you set up initial camera position?
When loading a splat file, I'd like to set the initial camera position to a specific location. How can this be achieved?
https://github.com/huggingface/gsplat.js/issues/23
closed
[ "enhancement", "question" ]
2023-11-08T16:04:04Z
2023-11-11T16:35:57Z
null
reconlabs-chris
huggingface/safetensors
381
Would a CLI to perform convert operation be useful?
### Feature request Would it be possible to add to this repo a CLI tool that uses the library to take files stored in different formats and convert them to safetensors? It would also be useful to have a way, from the command line, to introspect a model and find some properties about it (layers, metadata, ...). ###...
https://github.com/huggingface/safetensors/issues/381
closed
[ "Stale" ]
2023-11-08T15:39:02Z
2024-01-02T01:48:28Z
2
remyleone
huggingface/transformers
27,361
Add how to preprocess mask for finetuning with SAM
### Feature request The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size expected as input for the SAM model. For inference, th...
https://github.com/huggingface/transformers/issues/27361
closed
[ "Feature request", "Vision" ]
2023-11-08T11:53:31Z
2024-01-08T16:40:38Z
null
rwood-97
huggingface/chat-ui
546
Custom Theme
I want to change the UI layout yet still be able to update the code in order to enjoy the new features as they are released. Is there a way to add my changes in a way that would be similar to a theme? or an outside addon?
https://github.com/huggingface/chat-ui/issues/546
closed
[]
2023-11-08T08:26:43Z
2023-11-15T09:32:22Z
2
kaplanyaniv
huggingface/datasets
6,388
How to create a 3d medical image dataset?
### Feature request I am new to huggingface; after looking through the `datasets` docs, I can't find how to create a dataset that contains 3D medical images (ending with '.mhd', '.dcm', '.nii'). ### Motivation Help us to upload 3D medical datasets to huggingface! ### Your contribution I'll submit a PR if I find a way to...
https://github.com/huggingface/datasets/issues/6388
open
[ "enhancement" ]
2023-11-07T11:27:36Z
2023-11-07T11:28:53Z
null
QingYunA
huggingface/datasets
6,387
How to load existing downloaded dataset ?
Hi @mariosasko @lhoestq @katielink Thanks for your contribution and hard work. ### Feature request First, I download a dataset as normal by: ``` from datasets import load_dataset dataset = load_dataset('username/data_name', cache_dir='data') ``` The dataset format in `data` directory will be: ``` ...
https://github.com/huggingface/datasets/issues/6387
closed
[ "enhancement" ]
2023-11-06T22:51:44Z
2023-11-16T18:07:01Z
null
liming-ai
huggingface/gsplat.js
15
Does it work with polycam models?
Hello! Thank you for your work, it looks very promising. Got it working with the README file... Just tried it with a .ply object out of polycam and got error ``` Uncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4 at new Float32Array (<anonymous>) at R.setData (Scene.ts...
https://github.com/huggingface/gsplat.js/issues/15
closed
[ "question" ]
2023-11-06T21:15:51Z
2023-11-10T18:26:55Z
null
karen-pal
huggingface/chat-ui
545
Chat-UI throws an 403 forbidden when access settings
When viewing the settings page after first setup, the settings page gives the error: ```Failed to load resource: the server responded with a status of 403 (Forbidden) settings:1``` in the console, without any explanation of what or why. Setup: ```yaml services: # Chat ui webserver chat-ui: container_nam...
https://github.com/huggingface/chat-ui/issues/545
closed
[ "support" ]
2023-11-06T15:09:33Z
2024-02-15T21:03:04Z
5
IT-Guy007
huggingface/alignment-handbook
9
How to finetune or lora on custom dataset
How to finetune or lora on custom dataset
https://github.com/huggingface/alignment-handbook/issues/9
open
[]
2023-11-05T02:38:33Z
2024-11-11T07:52:57Z
null
universewill
huggingface/peft
1,080
Add docs on how to merge adapters after 4bit QLoRA with PEFT 0.6
### Feature request There has been some controversy on how to correctly **merge the adapters with the base model after 4bit LoRA** training. To me it seems there are two ways to merge and save: - ChrisHayduk https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930 - TheBloke https://github.com/Th...
https://github.com/huggingface/peft/issues/1080
closed
[]
2023-11-04T10:07:16Z
2023-11-17T22:22:06Z
null
geronimi73
huggingface/huggingface_hub
1,801
Entire operation gets cancelled when 1 file fails when using api.upload_folder - how to make it iterative
I am using the code below. I uploaded around 80 GB of files and the entire operation failed just because 1 png failed to upload for some reason. I see the uploaded repo has 0 changes. How can I make it iterative, so that after each file upload it is committed to the repo? I don't need commit or file history, just upload the newer file...
https://github.com/huggingface/huggingface_hub/issues/1801
closed
[ "bug" ]
2023-11-04T00:20:00Z
2023-11-26T09:09:35Z
null
FurkanGozukara
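The incremental behaviour asked for above amounts to a per-file loop with per-file error handling, so one bad file never cancels the rest. A logic-only sketch — `upload` is a stand-in callable; with huggingface_hub the real call would be `HfApi().upload_file(...)`, which commits each file individually, and real failures would not necessarily be `OSError`:

```python
# Upload files one at a time, retrying each file independently and
# collecting the ones that still fail, instead of aborting the batch.

def upload_all(paths, upload, retries=2):
    failed = []
    for path in paths:
        for attempt in range(retries + 1):
            try:
                upload(path)        # stand-in for HfApi().upload_file(...)
                break               # this file is committed; move on
            except OSError:         # illustrative; catch the real client errors
                if attempt == retries:
                    failed.append(path)
    return failed                   # re-run later with just these paths

# Demo with a flaky uploader that fails once for "b.png".
calls = {"b.png": 0}
def flaky_upload(path):
    if path == "b.png" and calls["b.png"] == 0:
        calls["b.png"] += 1
        raise OSError("network hiccup")

print(upload_all(["a.png", "b.png", "c.png"], flaky_upload))  # []
```

The trade-off is one commit per file (more commits, slower for many small files) in exchange for never losing already-uploaded work.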
huggingface/transformers.js
378
Security issue - content security policy - script unsafe-eval
Context: I use the @xenova/transformers 2.6.2 npm package from a web application to do image classifications. Here is the gist of my setup: ```js const modelPath = 'own-domain/models-and-wasm/' env.localModelPath = "/"; env.useBrowserCache = true; env.backends.onnx.wasm.wasmPaths = modelPath; const classifier =...
https://github.com/huggingface/transformers.js/issues/378
open
[ "question" ]
2023-11-03T13:50:30Z
2023-11-06T13:44:57Z
null
stiano
huggingface/diffusers
5,643
How to use the ip adapter controlnet?
Hi, I can't use this specific controlnet because it's from here: https://huggingface.co/lllyasviel/sd_control_collection/tree/main and the format doesn't allow from_pretrained. When I use from_single_file, I get: ``` stable_diffusion/convert_from_ckpt.py", line 422, in convert_ldm_unet_checkpoint new_checkp...
https://github.com/huggingface/diffusers/issues/5643
closed
[]
2023-11-03T13:34:44Z
2023-11-13T15:12:29Z
null
alexblattner
huggingface/dataset-viewer
2,050
Should we support video datasets?
Like https://huggingface.co/datasets/commaai/commavq There was a previous attempt in datasets: https://github.com/huggingface/datasets/pull/5339
https://github.com/huggingface/dataset-viewer/issues/2050
closed
[ "question", "feature request" ]
2023-11-03T13:33:00Z
2023-12-11T15:04:08Z
null
severo