| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 482 | How to get the same output as the Python library for the ResNet model? | ### Question
Hi,
I am trying to translate a Python script to use it in my Node server. Currently, I spawn a process to execute the Python code, but I would like to improve response time by using the transformers.js version.
My problem is that I don't get the same output from the two implementations.
The python output... | https://github.com/huggingface/transformers.js/issues/482 | closed | [
"question"
] | 2023-12-28T11:38:20Z | 2024-01-10T15:04:22Z | null | Spoutnik97 |
huggingface/diffusers | 6,370 | How to use diffusers LoRA in AUTOMATIC1111 | Thanks for your great work. I used train_text_to_image_lora_sdxl.py to train on my custom dataset, got the output below, and the results are good. But I want to use the LoRA weights in AUTOMATIC1111, so I moved pytorch_lora_weights into the AUTOMATIC1111 lora folder, but I get the error: `AssertionError: conver... | https://github.com/huggingface/diffusers/issues/6370 | closed | [] | 2023-12-28T06:17:19Z | 2024-01-02T13:38:26Z | null | chongxian |
huggingface/computer-vision-course | 163 | How to include "What you'll learn" section for this course? | Hello everyone,
Our PR for Fundamentals of Computer Vision was merged a few days back. After that, one thing we still need to acknowledge based on your [feedback](https://github.com/johko/computer-vision-course/issues/38#issuecomment-1764502604) on our chapter outline is building a demo using Gradio to give learners ... | https://github.com/huggingface/computer-vision-course/issues/163 | closed | [] | 2023-12-27T12:41:26Z | 2024-04-26T13:36:59Z | null | seshupavan |
huggingface/transformers | 28,260 | How to set the pad_token of Llava for batched generation and training? | Hello @younesbelkada, I'm trying to use Llava for batched generation, using the default pad_token. Here is the script:
```python
import json
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer
from torch.utils.data import Dataset,DataLoader
import torch
impor... | https://github.com/huggingface/transformers/issues/28260 | closed | [] | 2023-12-27T12:17:02Z | 2024-02-05T02:43:32Z | null | TideDra |
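A minimal sketch of the usual fix for this question, assuming the `llava-hf/llava-1.5-7b-hf` checkpoint (not necessarily the poster's): Llama-style tokenizers ship without a pad token, so set one explicitly and left-pad for batched generation.
```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # illustrative checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# No pad token by default: reuse EOS (or add a dedicated one and resize embeddings).
processor.tokenizer.pad_token = processor.tokenizer.eos_token
processor.tokenizer.padding_side = "left"  # left padding for decoder-only generation

prompts = ["USER: <image>\nWhat is in this picture? ASSISTANT:"] * 2
images = [Image.new("RGB", (336, 336)) for _ in prompts]  # placeholder images
inputs = processor(text=prompts, images=images, padding=True, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```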
huggingface/transformers | 28,259 | How to add new merge rules in AutoTokenizer | ### Model description
I'm training a new tokenizer from llama2; however, it seems that the BPE tokenizer clears the original "vocab" and "merges" dicts, and the training result is highly biased on my own dataset (about 6M C functions), with some ugly tokens.
I wonder whether it is possible to train a tokenizer from llama2 with... | https://github.com/huggingface/transformers/issues/28259 | open | [
"New model"
] | 2023-12-27T12:15:26Z | 2023-12-27T12:15:26Z | null | Sandspeare |
huggingface/accelerate | 2,289 | [QUESTION] why stage3_gather_16bit_weights_on_model_save is set to false no matter what value of it in deepspeed config | [`accelerator._prepare_deepspeed()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L1464C13-L1464C82) looks to force the `stage3_gather_16bit_weights_on_model_save` to `false`, which should raise an exception in [`accelerator.get_state_dict()`](htt... | https://github.com/huggingface/accelerate/issues/2289 | closed | [] | 2023-12-27T10:04:28Z | 2024-01-05T06:59:16Z | null | LaniakeaS |
huggingface/diffusers | 6,352 | How to choose the save precision for a LoRA file in training | I'm confused about my LoRA precision (fp16, bf16, float) and whether I can choose the precision of my LoRA weights. I searched the params of the **StableDiffusionXLPipeline.save_lora_weights** function used to save LoRA in the SDXL text2img training script and didn't find a param like 'save_precision' or similar.
anyone ca... | https://github.com/huggingface/diffusers/issues/6352 | closed | [] | 2023-12-27T09:02:47Z | 2023-12-28T08:21:29Z | null | DoctorTar |
huggingface/transformers.js | 481 | Why do certain models not load? | ### Question
I was keen to try:
https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0
I tried:
```ts
import {
AutoModelForCausalLM,
AutoTokenizer,
} from '@xenova/transformers';
const autoTokenizer = await AutoTokenizer.from_pretrained(
'Upstage/SOLAR-10.7B-Instruct-v1.0',
);
const model ... | https://github.com/huggingface/transformers.js/issues/481 | open | [
"question"
] | 2023-12-27T01:44:52Z | 2024-05-10T18:21:57Z | null | adaboese |
huggingface/peft | 1,298 | [Question] What is the main difference between "modules_to_save" and "target_modules"? | Hi, in my work I need to add some special token to LLAMA, so I need to train the parameter of ["embed_tokens", "lm_head"] for both layers, what confuses me is that should I add this parameter to LoraConfig's "modules_to_save " or "target_modules"? Looking forward to your reply! | https://github.com/huggingface/peft/issues/1298 | closed | [] | 2023-12-26T07:37:05Z | 2024-02-03T15:03:27Z | null | SatireY |
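For reference, the usual distinction behind this question: `target_modules` names the layers that receive injected low-rank LoRA adapters, while `modules_to_save` names layers whose full weights are unfrozen, trained, and stored alongside the adapter — the right place for `embed_tokens` and `lm_head` when new tokens change their shapes. A hedged sketch:
```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=32,
    # Low-rank LoRA adapters are injected into these layers.
    target_modules=["q_proj", "v_proj"],
    # These layers are trained in full and saved with the adapter checkpoint,
    # which is what you want after adding special tokens to the vocabulary.
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
```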
huggingface/datasets | 6,534 | How to configure multiple folders in the same zip package | How should I write the "configs" section in the README when all the data, such as train and test, is in a single zip file?
The train folder and test folder are both inside data.zip | https://github.com/huggingface/datasets/issues/6534 | open | [] | 2023-12-26T03:56:20Z | 2023-12-26T06:31:16Z | null | d710055071 |
huggingface/trl | 1,140 | How to further fine-tune with new data from a previous adapter? | Hi all, I have a question about fine-tuning. Currently I use SFTTrainer to fine-tune the Llama2-7b-chat model and save it in adapter format. The question is: in case I want to fine-tune further with new data starting from the previous adapter, how can I do that? Normally I fine-tune further by merging the adapter with the base model before f... | https://github.com/huggingface/trl/issues/1140 | closed | [] | 2023-12-25T04:19:34Z | 2024-02-01T15:05:24Z | null | SiraHaruethaipree |
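One commonly suggested pattern for this situation (a sketch, not an official answer from the maintainers; the paths and base checkpoint are placeholders): either reload the saved adapter as trainable and continue training it, or merge it into the base weights first and start a fresh adapter on top.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16
)

# Option A: keep training the existing adapter on the new data.
model = PeftModel.from_pretrained(base, "./old-adapter", is_trainable=True)

# Option B (pick one): fold the old adapter into the base, then attach a new adapter.
# model = PeftModel.from_pretrained(base, "./old-adapter").merge_and_unload()
```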
huggingface/optimum | 1,613 | Convert opus translation to onnx and run inference from it | To convert I use this snippet
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.models.marian import MarianOnnxConfig
import onnxruntime as ort
model_ckpt = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
ref_model = AutoModelForSeq2SeqLM.from_... | https://github.com/huggingface/optimum/issues/1613 | closed | [] | 2023-12-25T04:04:47Z | 2025-04-29T01:45:20Z | 5 | x4080 |
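For comparison, a hedged sketch of the higher-level route via `optimum.onnxruntime`, which exports the Marian encoder/decoder to ONNX and runs generation without hand-written `onnxruntime` session code:
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model_ckpt = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSeq2SeqLM.from_pretrained(model_ckpt, export=True)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```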
huggingface/chat-ui | 658 | chat-ui do not support TGI http url when deploy publicly | hi @nsarrazin, the chat-ui works well locally
~~~
# .env.local
endpoints: [{"type":"tgi","url":"http://127.0.0.1:8080/generate_stream"}]
~~~
but if I deploy it publicly, when chatting from an external browser I get the 403 error:
~~~
403
You don't have access to this conversation. If someone gave you this link, ask... | https://github.com/huggingface/chat-ui/issues/658 | closed | [] | 2023-12-25T03:08:10Z | 2024-04-25T16:27:52Z | 1 | walkacross |
huggingface/transformers.js | 475 | How to use your own models | ### Question
Hey I really appreciate your work here!
I'm very interested in setting up a perfect RAG pipeline / flow and therefore I need a good document extraction with table-transformers and layout detection.
Example :
https://github.com/deepdoctection/deepdoctection
Where I'd use
https://huggingface.c... | https://github.com/huggingface/transformers.js/issues/475 | closed | [
"question"
] | 2023-12-24T21:38:02Z | 2024-05-15T09:32:26Z | null | DomEscobar |
huggingface/datasets | 6,530 | Impossible to save a mapped dataset to disk | ### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After... | https://github.com/huggingface/datasets/issues/6530 | open | [] | 2023-12-23T15:18:27Z | 2023-12-24T09:40:30Z | 1 | kopyl |
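For context, the API involved (a sketch; it does not diagnose whatever made saving fail in this report): a mapped `Dataset` can normally be persisted with `save_to_disk` and reloaded with `load_from_disk`, so expensive preprocessing runs only once.
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("imdb", split="train")
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})

ds.save_to_disk("./mapped_imdb")      # persist the mapped Arrow tables
ds = load_from_disk("./mapped_imdb")  # later runs skip the map entirely
```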
huggingface/sentence-transformers | 2,392 | util.paraphrase_mining returning scores only above 0.98 | Hey,
I'm using util.paraphrase_mining (sentence-transformers v2.2.2) to get similarity scores (cosine) in a corpus of ~20k texts with the encoder model being all-MiniLM-L6-v2 and with the parameters query_chunk_size=500, corpus_chunk_size=1000, top_k=500000, max_pairs=5000000.
The returned list of triplets contain s... | https://github.com/huggingface/sentence-transformers/issues/2392 | closed | [
"question"
] | 2023-12-23T13:00:27Z | 2024-01-29T14:20:33Z | null | sinangokce |
huggingface/chat-ui | 656 | Web Search failed with "Invalid URL" | 
Why is this happening? It seems to happen regardless of whether I have USE_LOCAL_WEBSEARCH set to true or false.
```
SERPAPI_KEY=<my key>
USE_LOCAL_WEBSEARCH=true
MODELS=`[
{
"name": "mistralai/Mix... | https://github.com/huggingface/chat-ui/issues/656 | closed | [] | 2023-12-22T19:19:34Z | 2024-01-09T05:45:13Z | 5 | gururise |
huggingface/chat-ui | 655 | Generation failed (Module.summarize) when using TogetherAI openai compatible endpoint | TogetherAI offers an [OpenAI compatible endpoint](https://docs.together.ai/docs/openai-api-compatibility). When using this endpoint with the model setup as follows:
```
MODELS=`[
{
"name": "mistralai/Mixtral-8x7b-Instruct-v0.1",
"displayName": "Mixtral-8x7b",
"endpoints" : [{
"ty... | https://github.com/huggingface/chat-ui/issues/655 | open | [] | 2023-12-22T17:34:59Z | 2024-01-23T05:14:26Z | 1 | gururise |
huggingface/datasets | 6,529 | Impossible to only download a test split | I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.
Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed b... | https://github.com/huggingface/datasets/issues/6529 | open | [] | 2023-12-22T16:56:32Z | 2024-02-02T00:05:04Z | 2 | ysig |
huggingface/transformers.js | 470 | How to convert a model with a .pt extension | ### Question
I'm new to this area. I'm wondering how to convert a model with a .pt extension? Thanks a lot | https://github.com/huggingface/transformers.js/issues/470 | open | [
"question"
] | 2023-12-22T10:20:16Z | 2023-12-23T20:46:37Z | null | Bzayyz |
huggingface/transformers.js | 469 | How to convert a model with a .pt extension | ### Question
I'm new to this area. I'm wondering how to convert a model with a .pt extension? Thanks a lot | https://github.com/huggingface/transformers.js/issues/469 | closed | [
"question"
] | 2023-12-22T10:20:05Z | 2023-12-22T10:20:54Z | null | Bzayyz |
huggingface/chat-ui | 650 | chat-ui docker image failed to connect the mongo docker contrainer | step 1: build the chat-ui image
~~~
docker build -t chat-ui -f ./Dockerfile.local .
~~~
step 2:
~~~
# bind the 27016
docker run -d -p 27016:27017 --name mongo-chatui mongo:latest
~~~
step 3: run a contrainer
~~~
# add a .env.local config
MONGODB_URL=mongodb://localhost:27016
HF_TOKEN=<your access tok... | https://github.com/huggingface/chat-ui/issues/650 | open | [
"support",
"docker"
] | 2023-12-22T08:34:52Z | 2025-05-25T20:37:17Z | 6 | walkacross |
huggingface/chat-ui | 649 | Formatting is incorrect when using LiteLLM (Together.ai) | I'm using Mixtral-7b-Instruct-v0.1 via [LiteLLM](https://github.com/BerriAI/litellm) to provide an OpenAI-compatible API to together.ai, where the model is hosted.
Everything works fine, including streaming; however, the formatting is messed up as shown. Any ideas why?
 | https://github.com/huggingface/chat-ui/issues/649 |  |  |  |  |  |  |
huggingface/candle | 1,463 |  |  | https://github.com/huggingface/candle/issues/1463 | open | [] | 2023-12-21T18:42:38Z | 2024-01-01T11:56:29Z | null | tyfeng1997 |
huggingface/transformers | 28,179 | How to fine tune facebook/esm2_t33_650M_UR50D | ### System Info
How to fine-tune facebook/esm2_t33_650M_UR50D? It's too big and model.half() couldn't work. Besides, I always get the error CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`. Is it possible that the model in the hug... | https://github.com/huggingface/transformers/issues/28179 | closed | [] | 2023-12-21T09:50:27Z | 2024-01-30T08:03:39Z | null | Admire7494 |
huggingface/alignment-handbook | 81 | Why do we use a lower batch size when comparing SFT LoRA with SFT full fine-tuning? | https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_lora.yaml
| https://github.com/huggingface/alignment-handbook/issues/81 | closed | [] | 2023-12-20T21:09:33Z | 2024-01-07T21:03:14Z | 2 | shamanez |
huggingface/trl | 1,115 | How to prepare multi-turn dialogue dataset for dpo? | the single-turn dialogue dataset is like:
dpo_dataset_dict = {
"prompt": [
"hello",
"how are you",
"What is your name?",
"What is your name?",
"Which is the best programming language?",
"Which is the best programming language?",
"Which is the best pro... | https://github.com/huggingface/trl/issues/1115 | closed | [
"🏋 DPO"
] | 2023-12-20T09:14:45Z | 2024-10-03T14:12:48Z | null | chloefresh |
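One common convention for the multi-turn case (an assumption-laden sketch, not an official TRL recipe): fold the whole conversation history into `prompt` — for example via the tokenizer's chat template — and keep only the final responses in `chosen`/`rejected`.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

history = [
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Which is the best programming language?"},
]

dpo_dataset_dict = {
    # The full history rendered as a single prompt string.
    "prompt": [tokenizer.apply_chat_template(
        history, tokenize=False, add_generation_prompt=True
    )],
    "chosen": ["It depends on the task; Python is a good default."],
    "rejected": ["C++."],
}
```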
huggingface/transformers | 28,155 | What is the minimum video card memory required to run the Mixtral-8x7B model? | I mean the model that just came out: mistralai/Mixtral-8x7B-Instruct-v0.1. It looks like a lot of parameter files; what is the minimum NVIDIA graphics card video memory required? | https://github.com/huggingface/transformers/issues/28155 | closed | [] | 2023-12-20T01:54:45Z | 2024-01-28T08:04:44Z | null | zysNLP |
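Back-of-the-envelope arithmetic for this question, assuming the commonly cited ~46.7B total parameters for Mixtral-8x7B: the weights alone need roughly parameters × bytes-per-parameter, before KV cache and activations.
```python
params = 46.7e9  # approximate total parameter count of Mixtral-8x7B

for precision, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{precision}: ~{gib:.0f} GiB of VRAM for weights alone")
# fp16/bf16: ~87 GiB, int8: ~43 GiB, int4: ~22 GiB (plus cache and activations)
```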
huggingface/dataset-viewer | 2,218 | JobManagerCrashedError jobs are never retried | Currently, we have 7768 jobs with error_code `JobManagerCrashedError`. Some of them were set to crashed by the zombie killer.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.aggregate([{$match:{error_code:"JobManagerCrashedError","details.copied_from_artifact":{$exists:false}}}... | https://github.com/huggingface/dataset-viewer/issues/2218 | closed | [
"question"
] | 2023-12-19T15:22:30Z | 2024-01-09T20:32:58Z | null | AndreaFrancis |
huggingface/optimum | 1,608 | XENOVA conversion issues | ### System Info
```shell
using the requirements.txt in Xenova for environment.
https://github.com/xenova/transformers.js/blob/main/scripts/requirements.txt
```
### Who can help?
@xenova
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An off... | https://github.com/huggingface/optimum/issues/1608 | closed | [
"bug"
] | 2023-12-19T02:11:58Z | 2023-12-19T04:54:00Z | 3 | gidzr |
huggingface/safetensors | 409 | Doesn't work with versions of torch where "meta" dtype is not supported. | ### System Info
This is on my mac where I was just testing the interface. It seems like this could easily be fixed.
```
...
>>> from safetensors.torch import save_file
>>> x
{'a': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])}
>>> x['a'].device
device(type='cpu')
>>> save_file(x, filename='foo')
Traceback... | https://github.com/huggingface/safetensors/issues/409 | closed | [
"Stale"
] | 2023-12-18T15:51:28Z | 2024-01-23T01:49:25Z | null | danpovey |
huggingface/candle | 1,457 | How to do to quantize manually a phi-2 version, starting from safetensors file | Hi
I have fine-tuned a phi-2 model using LoRA.
I merged the adapter with the base model to get a trained one.
I now have a bunch of safetensors files.
How is it possible to convert these files into a gguf file (the llama.cpp converter does not support phi)?
In other words, how is it possible to achieve the same as: mo... | https://github.com/huggingface/candle/issues/1457 | closed | [] | 2023-12-18T15:14:37Z | 2023-12-18T15:58:12Z | null | ghost |
huggingface/optimum | 1,605 | Static Quantization - Token classification | Hi,
I am following the code [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for doing static quantization on my token classification model.
The inference time for the quantized model (static) is almost the same as the non-quantized one. I have tried dynamic q... | https://github.com/huggingface/optimum/issues/1605 | open | [
"quantization"
] | 2023-12-18T13:31:33Z | 2024-10-09T09:21:22Z | 0 | akshay-babbar |
huggingface/diffusers | 6,211 | [Examples] When will you support training scripts for text-to-video in diffusers? | I want to train SVD in diffusers; can you support this feature in the examples?
Thanks for your contributions. | https://github.com/huggingface/diffusers/issues/6211 | closed | [
"stale"
] | 2023-12-18T08:26:57Z | 2024-01-26T15:05:32Z | null | jiaxiangc |
huggingface/optimum | 1,604 | Table Transformer to ONNX | ### Feature request
Hi all,
I am trying to convert the Table Transformer model from transformers (pretrained) to ONNX. The error reads something like "'table-transformer' is not a supported format."
Is there any way to convert table-transformer (TATR) to ONNX model. Any help would be cherished.
Thanks.
### Motivation
M... | https://github.com/huggingface/optimum/issues/1604 | closed | [
"feature-request",
"onnx"
] | 2023-12-18T07:18:21Z | 2024-02-28T08:52:49Z | 3 | balajiChundi |
huggingface/safetensors | 407 | Does safetensors save the model's hierarchical structure? Is it similar to ONNX? | If safetensors saves the model's hierarchical structure, how can one access this structure? Is it possible to read it directly like with ONNX? Can I directly load a model from safetensors?
If the hierarchical structure of the model is not preserved, does it mean that the original model must be read from config.json? | https://github.com/huggingface/safetensors/issues/407 | closed | [
"Stale"
] | 2023-12-17T15:04:55Z | 2024-02-24T01:45:09Z | 3 | ZDragonX |
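For context, a sketch of what a safetensors file actually contains: a flat mapping from tensor names to tensors plus optional string metadata — no graph or module hierarchy, unlike ONNX — so reconstructing a model still requires the architecture from `config.json` or code (e.g. via `from_pretrained`).
```python
from safetensors import safe_open

# A safetensors file is a flat {name: tensor} store, not a computation graph.
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():        # e.g. "model.layers.0.self_attn.q_proj.weight"
        tensor = f.get_tensor(name)
    print(f.metadata())          # optional free-form string metadata, or None
```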
huggingface/datasets | 6,507 | where is glue_metric.py> @Frankie123421 what was the resolution to this? | > @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
| https://github.com/huggingface/datasets/issues/6507 | closed | [] | 2023-12-17T09:58:25Z | 2023-12-18T11:42:49Z | null | Mcccccc1024 |
huggingface/peft | 1,278 | How to add trainable parameters? (bugs in 'modules_to_save') | ### System Info
Hi,
How can I train other weights in the model rather than keeping them fixed during LoRA training?
### Who can help?
@BenjaminBossan Hi, I find you are active recently so I @ you here..
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially su... | https://github.com/huggingface/peft/issues/1278 | closed | [] | 2023-12-17T05:34:09Z | 2024-01-29T15:03:39Z | null | shawnricecake |
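One workaround that is often suggested for this (a hedged sketch; `modules_to_save` in `LoraConfig` is the supported route): unfreeze the extra parameters by hand after wrapping the model, so they train alongside the LoRA weights. Note that weights unfrozen this way are not included in the adapter checkpoint unless they are also listed in `modules_to_save`.
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))

# Unfreeze selected non-LoRA weights (names are illustrative) so they also train.
for name, param in model.named_parameters():
    if "embed_tokens" in name or "lm_head" in name:
        param.requires_grad = True
```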
huggingface/accelerate | 2,262 | When I trained with two processes, the gradients of the parameters could not be shared and I ended up with two different models. How to solve this problem? | When I trained with two processes, the gradients of the parameters could not be shared and I ended up with two different models. Did anyone meet this problem before? How can it be solved? | https://github.com/huggingface/accelerate/issues/2262 | closed | [] | 2023-12-15T13:48:34Z | 2024-06-11T12:26:07Z | null | zypsjtu |
huggingface/datasets | 6,501 | OverflowError: value too large to convert to int32_t | ### Describe the bug

### Steps to reproduce the bug
just loading datasets
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3... | https://github.com/huggingface/datasets/issues/6501 | open | [] | 2023-12-15T10:10:21Z | 2025-06-27T04:27:14Z | 1 | zhangfan-algo |
huggingface/diffusers | 6,178 | How to train Stable Diffusion with DDPM? | I want to train Stable Diffusion with DDPM, but I can't find the code in this project. I found a lot of training code elsewhere on the internet, but most of it is distillation code on pre-trained models, not the original DDPM training code. I also tried to implement the original training code myself, but I couldn't get... | https://github.com/huggingface/diffusers/issues/6178 | closed | [] | 2023-12-15T02:43:07Z | 2023-12-15T02:54:06Z | null | MenSanYan |
huggingface/dataset-viewer | 2,208 | Add a collection with datasets infos | While working on enabling private datasets (#39) under conditions (isPro, isEnterprise), I thought we missed a place where we control the access to the dataset.
I think the first step in the DAG, instead of dataset-config-names, should be more about the dataset characteristics: if it's private or public, maybe if it... | https://github.com/huggingface/dataset-viewer/issues/2208 | closed | [
"question",
"refactoring / architecture",
"P2"
] | 2023-12-14T13:59:42Z | 2024-01-11T14:30:03Z | null | severo |
huggingface/dataset-viewer | 2,207 | Backfill job processes datasets with disabled viewer? | If I read the code correctly, the backfill cronjob does not check if the dataset viewer is disabled (`viewer: false` in the README).
If we want to implement the dataset viewer for private datasets, under conditions (isPro, isEnterprise), we will have to check these conditions before adding jobs. | https://github.com/huggingface/dataset-viewer/issues/2207 | closed | [
"bug",
"question",
"P2"
] | 2023-12-14T13:01:53Z | 2024-02-06T16:03:10Z | null | severo |
huggingface/huggingface_hub | 1,907 | How to fix "VBox(children=(HTML(value='<center> <img..." error? When trying login() | ### Describe the bug
Hello. I am doing as below, but it doesn't show the enter-token panel as it is supposed to.
What could be the reason?

Pip freeze is as below
```
alembic @ file:///home/conda/feedstoc... | https://github.com/huggingface/huggingface_hub/issues/1907 | closed | [
"bug"
] | 2023-12-14T11:45:44Z | 2025-03-15T08:03:44Z | null | FurkanGozukara |
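A common workaround when the notebook widget fails to render (a sketch; it sidesteps rather than fixes the widget): pass the token programmatically instead of through the interactive panel.
```python
from huggingface_hub import login

# Bypasses the Jupyter widget entirely; the token below is a placeholder —
# create one at https://huggingface.co/settings/tokens
login(token="hf_xxx")
```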
huggingface/unity-api | 17 | Android support | Great repo! My question is - does it work on Android?
I did some research but couldn't find much - except for some comments on [YouTube](https://www.youtube.com/watch?v=Ngmb7l7tO0I) that speech recognition doesn't really work on Android ("_when i export to an a Android Device the text always is "you", no matter what... | https://github.com/huggingface/unity-api/issues/17 | open | [
"question"
] | 2023-12-14T11:15:56Z | 2024-01-18T10:56:45Z | null | dogadogan |
huggingface/alignment-handbook | 76 | Can we run inference with the LoRA adapter after running SFT? | I trained the model using SFT on a custom dataset with a LoRA config, which produced a LoRA adapter. Can we run inference with it, i.e. having a base model and this adapter on top of it, or should we merge it? | https://github.com/huggingface/alignment-handbook/issues/76 | closed | [] | 2023-12-14T10:55:20Z | 2023-12-28T07:14:29Z | 2 | Tejaswi-kashyap-006 |
huggingface/accelerate | 2,251 | When a tensor is generated from some_func(A.shape) (where A is a tensor), the generated tensor lands on the CPU, not A's device | How to solve it? I have tried tensor.to(A.device) and tensor.to(accelerator.device), but neither seems to work. | https://github.com/huggingface/accelerate/issues/2251 | closed | [] | 2023-12-14T09:18:15Z | 2023-12-14T14:38:17Z | null | weizhenhuan |
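For reference, the usual fix for this class of problem: factory functions called with a bare shape default to CPU, so pass the device (and dtype) explicitly at construction time.
```python
import torch

A = torch.randn(4, 4, device="cuda" if torch.cuda.is_available() else "cpu")

# torch.zeros(A.shape) would land on the CPU; create on A's device directly:
B = torch.zeros(A.shape, device=A.device, dtype=A.dtype)
assert B.device == A.device
```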
huggingface/peft | 1,265 | When generating outputs, how to get the probability of the outputs? Is there any param to let the model output probabilities? | ### Feature request
xx
### Motivation
xx
### Your contribution
xx | https://github.com/huggingface/peft/issues/1265 | closed | [] | 2023-12-14T08:05:34Z | 2023-12-14T10:37:19Z | null | ShawnALiu |
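The usual answer to this (a sketch with a small GPT-2 stand-in rather than a PEFT model): `generate` can return per-step scores, and `compute_transition_scores` converts them into log-probabilities of the generated tokens.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs, max_new_tokens=5,
    return_dict_in_generate=True, output_scores=True,
)
# Log-probability of each generated token under the model.
logprobs = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)
print(torch.exp(logprobs))  # per-token probabilities
```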
huggingface/transformers | 28,025 | How to combine two pretrained models in huggingface transformers? | ### Feature request
I want to combine two pretrained models (LLaMA and BERT) in a new Python class. More specifically, the way I've tried is to define a new class C that inherits from LLaMA and loads BERT in C's \_\_init\_\_ function.

 | https://github.com/huggingface/transformers/issues/28025 |  |  |  |  |  |  |
huggingface/chat-ui | 631 |  | If you go to https://... | https://github.com/huggingface/chat-ui/issues/631 | open | [
"enhancement"
] | 2023-12-13T10:50:19Z | 2023-12-14T14:26:31Z | 4 | patchie |
huggingface/optimum | 1,592 | Can optimum.bettertransformer supports LLAVA model? | ### System Info
```shell
Local NVIDIA env:
(llava) xuyang@nobisuke:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
Python=3.10.4
Torch... | https://github.com/huggingface/optimum/issues/1592 | closed | [
"bug"
] | 2023-12-13T09:08:35Z | 2023-12-13T12:37:13Z | 1 | xiaovhua |
huggingface/blog | 1,702 | How to introduce new alphabets in Whisper fine-tuning | Dear @sanchit-gandhi,
I was following your tutorial, [Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper), to fine-tune Whisper with a dataset in the Amharic language. Amharic is used in Whisper training as speech-translation only, [Amharic audio -> corresponding... | https://github.com/huggingface/blog/issues/1702 | open | [] | 2023-12-13T02:47:31Z | 2024-10-02T02:16:12Z | null | mequanent |
huggingface/chat-ui | 629 | Unable to use Azure AD for OpenID signin | Azure AD does not return the `picture` claim for the `profile` scope which results in a Zod validation error and authentication failing with `HTTP 500`:
```
chat-ui-chat-ui-1 | 21:07:21 28|index | ZodError: [
chat-ui-chat-ui-1 | 21:07:21 28|index | {
chat-ui-chat-ui-1 | 21:07:21 28|index | "code": "inval... | https://github.com/huggingface/chat-ui/issues/629 | closed | [
"support"
] | 2023-12-12T21:22:19Z | 2024-02-19T09:39:51Z | 8 | zacps |
huggingface/chat-ui | 628 | isModelsModalOpen is not defined in ChatIntroduction.svelte, probably after a recent update? | Hi, I'm getting this error after updating to the latest version:
I am running:
{
'chat-ui': '0.6.0',
npm: '10.2.4',
node: '21.3.0',
acorn: '8.11.2',
ada: '2.7.4',
ares: '1.20.1',
base64: '0.5.1',
brotli: '1.0.9',
cjs_module_lexer: '1.2.2',
cldr: '44.0',
icu: '74.1',
llhttp: '9.1.3',
... | https://github.com/huggingface/chat-ui/issues/628 | closed | [
"support"
] | 2023-12-12T18:49:31Z | 2023-12-24T07:40:42Z | 7 | DrShivang |
huggingface/autotrain-advanced | 389 | How to disable default used --multi_gpu ? | File "/app/env/lib/python3.10/site-packages/accelerate/commands/launch.py", line 822, in _validate_launch_command
raise ValueError("You need to use at least 2 processes to use `--multi_gpu`.")
ValueError: You need to use at least 2 processes to use `--multi_gpu`.
How to disable this from the default provided... | https://github.com/huggingface/autotrain-advanced/issues/389 | closed | [] | 2023-12-12T13:32:03Z | 2023-12-15T09:21:52Z | null | FiveTechSoft |
huggingface/chat-ui | 627 | RLHF data collection feature | Is it possible to add a way to generate multiple drafts for a given input, and then, based on what the user picks, save that data so that it can be used for RLHF? | https://github.com/huggingface/chat-ui/issues/627 | open | [
"enhancement",
"front",
"back"
] | 2023-12-12T13:29:06Z | 2023-12-14T08:53:14Z | 0 | nivibilla |
huggingface/transformers | 27,974 | how to replace the existing token in a tokenizer | ### Feature request
I have a tokenizer which has lots of reserved tokens like below:
```
'<reserved_7>': 100,
'<reserved_8>': 101,
'<reserved_9>': 102,
'<reserved_10>': 103,
'<reserved_11>': 104,
'<reserved_12>': 105,
'<reserved_13>': 106,
'<reserved_14>': 107,
```
I want to replace the '<reser... | https://github.com/huggingface/transformers/issues/27974 | closed | [] | 2023-12-12T12:59:53Z | 2025-05-05T19:18:29Z | null | muziyongshixin |
huggingface/chat-ui | 623 | ChatUI with Docker - Permissions Issue | I'm trying to use the ChatUI space with Docker. I have a private, custom model which I've trained.
I want to access it in a private space using Docker ChatUI
I seem to be running into permissions errors.
Things I've tried:
Following the instructions set out here: https://huggingface.co/blog/Llama2-for-non-engin... | https://github.com/huggingface/chat-ui/issues/623 | open | [
"support"
] | 2023-12-12T08:10:31Z | 2023-12-28T13:58:22Z | 1 | aidansys17 |
huggingface/text-generation-inference | 1,332 | How can I set log output to local file | ### Feature request
I want to set the TGI log to file instead of stdout.
### Motivation
I want to set the TGI log to file instead of stdout.
### Your contribution
How can I use command params or env variables to set the log output to a file? | https://github.com/huggingface/text-generation-inference/issues/1332 | closed | [
"Stale"
] | 2023-12-12T07:54:26Z | 2024-01-18T01:46:56Z | null | soulseen |
huggingface/alignment-handbook | 74 | A question about the SFTTrainer (also a theoretical question about SFT in general) | I have a general question about Supervised Fine Tuning (SFT) for Dialogue applications.
Should the SFT process use the same LM objective (next-token prediction) that is used in pre-training a language model?
The "Dialogue" task is predicting "assistant" tokens, right? Shouldn't the objective be predicting only th... | https://github.com/huggingface/alignment-handbook/issues/74 | open | [] | 2023-12-12T06:54:02Z | 2024-01-22T14:34:15Z | 3 | PradeepKadubandi |
huggingface/transformers.js | 453 | Summarization Parameters not working | ### Question
I've tried several of the supported summarization models with the code used in the browser extension example.
The only one I get any results from in a reasonable time is t5-small.
My problem with it is that, despite any parameters I try to pass in, the result is always the same length.
I've traced thro... | https://github.com/huggingface/transformers.js/issues/453 | open | [
"question"
] | 2023-12-12T06:21:52Z | 2023-12-19T21:52:32Z | null | kwlayman |
huggingface/safetensors | 400 | torch.nn.Module named_parameters() seem to be failing for safetensors | ### System Info
safetensors==0.4.1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Reproduction
Noticed this issue with the new Mixtral model
https://github.com/vllm-project/vllm/issues/2020
Is there any way to fix this with safetensors?
### Expected behavior
Load the m... | https://github.com/huggingface/safetensors/issues/400 | closed | [
"Stale"
] | 2023-12-11T18:54:06Z | 2024-01-17T01:48:50Z | 1 | 0-hero |
huggingface/optimum | 1,583 | Add support for Chatglm2 & qwen onnx models | ### Feature request
Need to export ChatGLM2 & Qwen models to onnx using hf optimum.
ChatGLM2: model card -> https://huggingface.co/THUDM/chatglm2-6b
Qwen: model card -> https://huggingface.co/Qwen/Qwen-7B-Chat
... | https://github.com/huggingface/optimum/issues/1583 | closed | [] | 2023-12-11T15:22:59Z | 2024-04-24T10:21:48Z | 4 | manishghop |
huggingface/peft | 1,247 | How to save parameters in prompt_encoder layers in p-tuning? | I want to resume training from a checkpoint in p-tuning, but the model only saves the parameters in prompt_embeddings.
<img width="370" alt="image" src="https://github.com/huggingface/peft/assets/58416622/a085224f-32f2-409c-9a51-77c7438bc6a2">
| https://github.com/huggingface/peft/issues/1247 | closed | [] | 2023-12-11T02:44:59Z | 2024-01-19T15:03:32Z | null | lyt719 |
huggingface/optimum-benchmark | 102 | How to evaluate a model that already exists locally and hasn't been uploaded yet, "model=?" | 
I really want to know how to load a Qwen model; thank you very much | https://github.com/huggingface/optimum-benchmark/issues/102 | closed | [] | 2023-12-10T08:35:59Z | 2024-01-11T08:18:17Z | null | WCSY-YG |
huggingface/transformers | 27,928 | [Question] What is the main difference between "AutoModelForCausalLM" and "PeftModelForCausalLM"? | I also wrote this down in the peft repo. However, this issue is also related to transformers, so I write my question here again.
The issue in peft is here (https://github.com/huggingface/peft/issues/1245)
Hello, sorry for the naive question.
I noticed that the ``model.generate()`` function performed differently when inference rig... | https://github.com/huggingface/transformers/issues/27928 | closed | [] | 2023-12-10T03:10:36Z | 2024-02-01T00:49:07Z | null | daehuikim |
huggingface/peft | 1,245 | [Question] What is the main difference between "AutoModelForCausalLM" and "PeftModelForCausalLM"? | Because this is related to "transformers", I wrote this question in the transformers repo as well.
The issue in transformers is here (https://github.com/huggingface/transformers/issues/27928)
Hello, sorry for the naive question.
I noticed that the ``model.generate()`` function performed differently when inference r... | https://github.com/huggingface/peft/issues/1245 | closed | [] | 2023-12-10T03:08:54Z | 2023-12-11T11:15:25Z | null | daehuikim |
huggingface/diffusers | 6,113 | How to use the models from sd_control_collection hf repo in diffusers | How to load/convert the models at https://huggingface.co/lllyasviel/sd_control_collection/tree/main with diffusers?
```
>>> pipe = diffusers.StableDiffusionPipeline.from_single_file("diffusers_xl_canny_full.safetensors")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubunt... | https://github.com/huggingface/diffusers/issues/6113 | closed | [] | 2023-12-09T14:11:26Z | 2024-06-11T18:22:03Z | null | anilsathyan7 |
huggingface/tokenizers | 1,410 | How to create Tokenizer.json? | I have this tokenizer and I want to convert it to **tokenizer.json** format.
- added_tokens.json
- normalizer.json
- special_tokens_map.json
- config.json
- preprocessor_config.json
- vocab.json
- merges.txt
- pytorch_model.bin
Is it possible to replace my tokenizer data wit... | https://github.com/huggingface/tokenizers/issues/1410 | closed | [
"Stale"
] | 2023-12-08T09:41:18Z | 2024-01-14T01:52:39Z | null | kenaii |
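One hedged route, assuming the files belong to a tokenizer class that transformers can convert to a fast tokenizer: load it as a fast tokenizer (which builds the Rust backend from vocab.json + merges.txt) and save it — a tokenizer.json comes out.
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./my_tokenizer_dir", use_fast=True)
tok.save_pretrained("./converted")           # writes tokenizer.json among other files
# Or dump just the single file from the Rust backend:
tok.backend_tokenizer.save("tokenizer.json")
```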
huggingface/optimum | 1,577 | Support the ORT of the Stable Diffusion XL inpaint model | ### Feature request
Hi all.
We would like to convert the stable-diffusion-xl-inpaint model below to ONNX and run it using ORT. The conversion to ONNX went well using Optimum's cli, but there doesn't seem to be a Python class for ORT inference.
https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting... | https://github.com/huggingface/optimum/issues/1577 | closed | [
"feature-request",
"Stale"
] | 2023-12-08T09:21:06Z | 2025-02-19T02:02:54Z | 2 | 0-chan-kor |
huggingface/chat-ui | 617 | Does Chat-UI support multithreading? | Maybe it depends on node.js, but I want to know the CPU utilization. | https://github.com/huggingface/chat-ui/issues/617 | closed | [
"question"
] | 2023-12-08T05:36:18Z | 2023-12-14T07:30:01Z | null | calycekr |
huggingface/chat-ui | 615 | npm run error (latest git pull) | I created a .env.local as:
```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MONGODB_DIRECT_CONNECTION=false
COOKIE_NAME=hf-chat
HF_TOKEN=
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENAI_API_KEY=
```
Then I tried:
```
npm install #everything went fine
npm run dev -- --hos... | https://github.com/huggingface/chat-ui/issues/615 | closed | [
"support"
] | 2023-12-07T10:59:53Z | 2024-04-24T12:29:46Z | 4 | shuther |
huggingface/chat-ui | 614 | Docker build - multiple errors - documentation | I can't find documentation to build it myself; so I tried:
`docker-compose build up`
But I got multiple errors, among them:
> chat-ui/.env: line 23: unexpected character "\"" in variable name "\"PROVIDER_URL\": \"\","
Even `source .env` returned multiple errors; I tried to change the ` into a ' with no luck.
My go... | https://github.com/huggingface/chat-ui/issues/614 | open | [
"support"
] | 2023-12-07T10:55:04Z | 2024-06-01T12:44:18Z | 4 | shuther |
huggingface/text-generation-inference | 1,318 | how to run tgi installed locally without any UI | ### System Info
how to run tgi installed locally without any UI?
pip install text-generation gives the error: ERROR: No matching distribution found for text-generation
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
... | https://github.com/huggingface/text-generation-inference/issues/1318 | closed | [
"Stale"
] | 2023-12-07T08:47:13Z | 2024-01-13T01:46:40Z | null | poojitharamachandra |
huggingface/autotrain-advanced | 376 | How to Autotrain a Seq2Seq? | Hi everyone, I'm trying to fine-tune Helsinki-NLP/opus-mt-tc-big-ar-en on the local Arabic of Morocco, which is called Darija Arabic. The problem is that I'm unable to use AutoTrain; I keep getting a 500 error code. | https://github.com/huggingface/autotrain-advanced/issues/376 |  |  |  |  |  |  |
huggingface/tokenizers | 1,407 |  | ## What I am trying to do
I'm trying to create a tokenizer... | https://github.com/huggingface/tokenizers/issues/1407 | open | [
"bytefallback",
"Feature Request"
] | 2023-12-06T09:03:35Z | 2024-08-27T01:57:04Z | null | dinhanhx |
huggingface/transformers.js | 432 | Cannot download the model from huggingface | Because of the network reason, when using transfomer.js we cannot download the model successful
How to set the network proxy for the model download
| https://github.com/huggingface/transformers.js/issues/432 | open | [
"question"
] | 2023-12-06T08:18:58Z | 2023-12-10T13:42:50Z | null | wujohns |
huggingface/blog | 1,677 | how to achieve image-text matching of BLIP2 | Hi, Thanks to the authors for the works.
I am trying to achieve image-text matching of BLIP2, but I didn't find any examples of that. Can you give me some help or tips? | https://github.com/huggingface/blog/issues/1677 | open | [] | 2023-12-06T07:03:21Z | 2023-12-06T07:08:48Z | null | wkqun555 |
huggingface/diffusers | 6,070 | How to overload an existing class in diffusers | This is just for personal development. I want to write a new class inherited from an existing class (e.g. `ControlNetModel`), and I added some new parameters to the `__init__` function, but found that the `__init__` function still uses the parent's implementation, whether or not I add the `register_to_config` decorator.
Hope ... | https://github.com/huggingface/diffusers/issues/6070 | closed | [] | 2023-12-06T06:41:44Z | 2024-09-25T14:44:04Z | null | OrangeSodahub |
huggingface/diffusers | 6,067 | How to run the fine_tuned model? | Hi all,
I used the instructions given [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) to fine_tune the model on dog pictures (as explained in the link).
The fine_tuning has finished, and a folder called path-to-save-model has been created (that has the weights of the model). Now how d... | https://github.com/huggingface/diffusers/issues/6067 | closed | [] | 2023-12-06T01:01:56Z | 2025-04-28T10:32:33Z | null | alireza18878 |
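For reference, the loading step that normally follows that training script (a sketch matching the DreamBooth example's README; the output folder name and the `sks dog` instance prompt are the tutorial's):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "path-to-save-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("dog-bucket.png")
```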
huggingface/text-generation-inference | 1,314 | What is the default tokenizer behaviour? | ### System Info
N/A
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
I'm trying to understand whether special tokens (i.e. BOS and EOS) are added and suppressed on tokenization and decoding.
Encoding:
- I searched ... | https://github.com/huggingface/text-generation-inference/issues/1314 | closed | [] | 2023-12-05T17:35:05Z | 2024-01-19T13:14:13Z | null | RonanKMcGovern |
huggingface/chat-ui | 609 | [Feature Request] Uploading PDFs/Text Files/Images? | I love the search function and it makes the chat feel so much more accurate! I use it mainly as a direct ChatGPT replacement, using code models when needed or normal models for chat.
Can we have the option to upload images/pdfs/other files to the chat? the images could be integrated by clip/blip, and the PDF or text ... | https://github.com/huggingface/chat-ui/issues/609 | open | [] | 2023-12-05T12:20:39Z | 2024-10-04T01:13:18Z | 3 | iChristGit |
huggingface/trl | 1,059 | How can I have the evaluation pass in only the response to a prompted/instructed generation into the metric. | I have created the following metric:
```py
class MyCustomMetric(Metric):
def _info(self):
# Returns the MetricInfo that defines the name, description, etc.
return datasets.MetricInfo(
# This should be a short description of your metric.
description="_DESCRIPTION",
... | https://github.com/huggingface/trl/issues/1059 | closed | [] | 2023-12-04T19:01:34Z | 2024-01-12T15:05:10Z | null | CakeCrusher |
huggingface/distil-whisper | 49 | How to make training data? | I have a folder like this:
audio_1
transcript_1.txt
audio_2
transcript_2.txt
How can I make this folder into a Hugging Face dataset? | https://github.com/huggingface/distil-whisper/issues/49 | open | [] | 2023-12-04T18:44:40Z | 2023-12-12T16:51:48Z | null | satani99 |
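One hedged way to do this with the `audiofolder` loader: put the audio files next to a `metadata.csv` that maps each `file_name` to its transcript, then point `load_dataset` at the folder. The layout and column names below are illustrative.
```python
# Expected layout (illustrative):
# my_audio/
#   metadata.csv      <- columns: file_name,transcription
#   audio_1.wav
#   audio_2.wav
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="./my_audio", split="train")
print(ds[0]["audio"], ds[0]["transcription"])
```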
huggingface/computer-vision-course | 77 | Issue with rendering the course | If we try to render the course to preview how our added content looks like, it throws the following error
```bash
sarthak@kde:~/Desktop/computer-vision-course$ doc-builder preview computer-vision-course chapters/ --not_python_module
Initial build docs for computer-vision-course chapters/ /tmp/tmp0uqdjoxf/computer-vi... | https://github.com/huggingface/computer-vision-course/issues/77 | open | [
"question"
] | 2023-12-04T01:02:22Z | 2023-12-08T18:17:19Z | null | sarthak247 |
huggingface/sentence-transformers | 2,363 | How to retrieve the epoch of the saved model from model.save ? | Hi,
Thank you for the repo.
Can anyone help me with retrieving the epoch of the saved model, in both cases where save_best_model=True and save_best_model=False?
Thank you
```
model.fit(train_objectives=[(train_dataloader, train_loss)],
evaluator=evaluator,
epochs=num_epochs,
... | https://github.com/huggingface/sentence-transformers/issues/2363 | closed | [] | 2023-12-02T15:25:52Z | 2024-01-09T22:16:20Z | null | gowrijsuria |
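One hedged approach, reusing the names from the snippet above: `fit` accepts a `callback` that is invoked after each evaluation with `(score, epoch, steps)`, so you can record yourself which epoch produced the best (i.e. saved) model.
```python
best = {"score": float("-inf"), "epoch": None}

def track_best(score, epoch, steps):
    # Mirrors the save_best_model logic: remember the epoch of the best score.
    if score > best["score"]:
        best.update(score=score, epoch=epoch)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          evaluator=evaluator,
          epochs=num_epochs,
          callback=track_best)
print("best model saved at epoch", best["epoch"])
```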
huggingface/transformers.js | 426 | [Question] feature-extraction discrepancies across different platforms | I'm observing discrepancies in feature-extraction results across different platforms. Here's the code:
```js
import { pipeline, env } from '@xenova/transformers'
const extractor = await pipeline('feature-extraction', 'Xenova/gte-small', {
quantized: false,
cache_dir: './.cache',
local_files_only: false,... | https://github.com/huggingface/transformers.js/issues/426 | closed | [
"question"
] | 2023-12-01T17:12:04Z | 2023-12-05T18:51:03Z | null | devfacet |
huggingface/chat-ui | 604 | "Invalid State: Controller is already closed" error when trying to use chat-ui locally with llama.cpp | HELP NEEDED
**What is the issue?**
Not able to use chat-ui locally to get the response back when using the llama.cpp as a server.
I can load the chat-ui after installing it via npm install and npm run dev. The env.local file is also configured and UI allows to send the request. However, the response never comes ba... | https://github.com/huggingface/chat-ui/issues/604 | closed | [] | 2023-11-30T16:42:06Z | 2023-11-30T17:41:19Z | 1 | ManasInd |
huggingface/optimum | 1,556 | RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually. | ### System Info
windows 10 - ryzen 3600x - 16 gb ddr4-3000 - python 3.10 - latest optimum inside a venv
### Who can help?
_No response_
### Information
When I try to convert a model to openvino using
optimum-cli export openvino -m "d:\sdxl\LCMphoton" "d:\sdxl\LCMphotonov"
I have this error :
Ru... | https://github.com/huggingface/optimum/issues/1556 | closed | [
"bug"
] | 2023-11-30T16:09:24Z | 2023-12-09T22:37:44Z | 2 | patientx |
huggingface/safetensors | 396 | [Feature request] How about support async save to disk? | ### Feature request
How about support async save to disk?
### Motivation
The weights or optimizer state are very large for LLMs, so it wastes a lot of time moving tensors from CPU to disk.
If we can support async save to disk, it will be very helpful.
### Your contribution
. | https://github.com/huggingface/safetensors/issues/396 | closed | [
"Stale"
] | 2023-11-30T02:55:25Z | 2024-02-13T01:46:40Z | null | ZHUI |
huggingface/transformers.js | 424 | [Question] Batch inference for vit | It seems like all the tests in the repository related to processors and image models use one image per input.
1. Do the models support feeding a batch of images as input during inference? Is there a speed benefit from this?
2. Are there any other optimization/parallelization tools in transformers.js that I can use ... | https://github.com/huggingface/transformers.js/issues/424 | closed | [
"question"
] | 2023-11-29T09:52:16Z | 2023-12-05T14:49:36Z | null | arseniymerkulov |
huggingface/transformers | 27,755 | How to run inference on the model with a 200k-length context | ### Model description
I want to test Yi-34B-200k. Although I can run the model, as the context length increases, OOM appears, and I wonder how I could test up to a 200k context length with sufficient GPU resources.
### Open source status
- [X] The model implementation is available
- [X] The model weights are avai... | https://github.com/huggingface/transformers/issues/27755 | closed | [] | 2023-11-29T07:37:06Z | 2024-05-24T07:24:56Z | null | taishan1994 |
huggingface/transformers.js | 423 | Not able to load local classification onnx model | Was trying to follow the instruction of this page to load local custom model, but failed to find local path https://huggingface.co/docs/transformers.js/custom_usage
the code snippet
`
import { env, AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';
env.useFS = true;
env.localModel... | https://github.com/huggingface/transformers.js/issues/423 | closed | [
"question"
] | 2023-11-29T06:40:09Z | 2023-11-30T07:27:27Z | null | purezhanghan |