Dataset columns:
repo: string (147 classes)
number: int64 (1 to 172k)
title: string (2 to 476 chars)
body: string (0 to 5k chars)
url: string (39 to 70 chars)
state: string (2 classes)
labels: list (0 to 9 items)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58)
user: string (2 to 28 chars)
huggingface/transformers.js
482
How to get the same output as the Python library for the ResNet model?
### Question Hi, I am trying to translate a Python script to use it in my Node server. Currently, I spawn a process to execute the Python code, but I would like to improve response time by using the Transformers.js version. My problem is that the two scripts don't produce the same output: the Python output is a vector of dimension 2048, while the JS output is a vector of dimension 1000. The outputs already seem to diverge at the ImageProcessor step, because the `inputs` are not equal. Python code: ```python import torch from transformers import logging logging.set_verbosity_error() from PIL import Image class ImgToVec: def __init__(self, pretrained_model="microsoft/resnet-50"): from transformers import AutoImageProcessor, ResNetModel self.pretrained_model = pretrained_model self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") self.image_processor = AutoImageProcessor.from_pretrained(pretrained_model) self.model = ResNetModel.from_pretrained(pretrained_model).to(self.device) def get_embedding(self, file): im = Image.open(file) inputs = self.image_processor(im, return_tensors="pt").to(self.device) print(f"inputs : {inputs} dimensions : {inputs['pixel_values'].size()}") with torch.no_grad(): outputs = self.model(**inputs) return outputs.pooler_output[0, :, 0, 0].tolist() # 
https://cdn-lfs.huggingface.co/repos/cf/db/cfdbeec4acf4145f96e47e07a9e161cade4dbce7cfad3ba24765bf1713d53ef3/d65b6f72943d5e2d4f7e5e4dedfb93aea0fbbda140ae7c3ee772124b579e07c4?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27football-match.jpg%3B+filename%3D%22football-match.jpg%22%3B&response-content-type=image%2Fjpeg&Expires=1704020059&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwNDAyMDA1OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy9jZi9kYi9jZmRiZWVjNGFjZjQxNDVmOTZlNDdlMDdhOWUxNjFjYWRlNGRiY2U3Y2ZhZDNiYTI0NzY1YmYxNzEzZDUzZWYzL2Q2NWI2ZjcyOTQzZDVlMmQ0ZjdlNWU0ZGVkZmI5M2FlYTBmYmJkYTE0MGFlN2MzZWU3NzIxMjRiNTc5ZTA3YzQ%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=kWwcSkWcf8K62Tgr57HYD5VObZuozl3Jf%7EHV5alcyRA-gvbREfzgjMKU9rVOc84r0uwo9d3f-si-PoJ3GdyB8WObJFJWF0nE9SX5C-f3Nookj4SWevcJkLNgF27KqUPMhWWZ8B3KjEDvcxPirjHfc4fv87-uM%7EQIuazixgu0i8lXpzeSyKdZGNIc3zUG-hDzU3EKCGBWbwnGG9Yq%7Evz%7Eit-vvYc7i1AoYTAteZUP1ngDdywjwNf6VvvGqmyBdMcwVDiA0ShwAhW9Z3mqt%7EVz6HaYipWejY0mWmyVhyCWFtJOe9yrk%7ETJKr5cOV3yq6sM0jSheh3GuSd%7E2qYzjBsDVQ__&Key-Pair-Id=KVTP0A1DKRTAX result = ImgToVec("microsoft/resnet-50").get_embedding("./football-match.jpg") ``` My JS code : ```ts class ImgToVec { public async getEmbedding( file: string, pretrainedModel = 'Xenova/resnet-50', ): Promise<number[]> { const { ResNetForImageClassification, AutoProcessor, RawImage } = await import('@xenova/transformers'); const model = await ResNetForImageClassification.from_pretrained( pretrainedModel, ); const imageProcessor = await AutoProcessor.from_pretrained(pretrainedModel); const image = await RawImage.read(file); const inputs = await imageProcessor(image); const outputs = await model(inputs, { config: { embeddingSize: 2048 } }); console.log('inputs', inputs); const embedding: number[] = outputs.data; return embedding; } } const imgToVec = new ImgToVec(); // 
https://cdn-lfs.huggingface.co/repos/cf/db/cfdbeec4acf4145f96e47e07a9e161cade4dbce7cfad3ba24765bf1713d53ef3/d65b6f72943d5e2d4f7e5e4dedfb93aea0fbbda140ae7c3ee772124b579e07c4?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27football-match.jpg%3B+filename%3D%22football-match.jpg%22%3B&response-content-type=image%2Fjpeg&Expires=1704020059&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwNDAyMDA1OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy9jZi9kYi9jZmRiZWVjNGFjZjQxNDVmOTZlNDdlMDdhOWUxNjFjYWRlNGRiY2U3Y2ZhZDNiYTI0NzY1YmYxNzEzZDUzZWYzL2Q2NWI2ZjcyOTQzZDVlMmQ0ZjdlNWU0ZGVkZmI5M2FlYTBmYmJkYTE0MGFlN2MzZWU3NzIxMjRiNTc5ZTA3YzQ%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=kWwcSkWcf8K62Tgr57HYD5VObZuozl3Jf%7EHV5alcyRA-gvbREfzgjMKU9rVOc84r0uwo9d3f-si-PoJ3GdyB8WObJFJWF0nE9SX5C-f3Nookj4SWevcJkLNgF27KqUPMhWWZ8B3KjEDvcxPirjHfc4fv87-uM%7EQIuazixgu0i8lXpzeSyKdZGNIc3zUG-hDzU3EKCGBWbwnGG9Yq%7Evz%7Eit-vvYc7i1AoYTAteZUP1ngDdywjwNf6VvvGqmyBdMcwVDiA0ShwAhW9Z3mqt%7EVz6HaYipWejY0mWmyVhyCWFtJOe9yrk%7ETJKr5cOV3yq6sM0jSheh3GuSd%7E2qYzjBsDVQ__&Key-Pair-Id=KVTP0A1DKRTAX imgToVec.getEmbedding('./football-match.jpg').then((embedding) => { console.log(embedding); }); ``` Any ideas how to solve my problem please ?
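The dimension mismatch above comes from the two scripts loading different heads: the Python script uses `ResNetModel`, whose `pooler_output` has shape `[batch, 2048, 1, 1]`, while the JS script uses `ResNetForImageClassification`, which adds a 1000-class classification head and returns logits of shape `[batch, 1000]`. A NumPy sketch of the shapes involved (stand-in arrays, not real model outputs):

```python
import numpy as np

# Shape of ResNetModel.pooler_output (the Python script's path)
pooler_output = np.zeros((1, 2048, 1, 1))
embedding = pooler_output[0, :, 0, 0].tolist()  # the 2048-dim embedding vector

# Shape of ResNetForImageClassification logits (the JS script's path)
logits = np.zeros((1, 1000))

print(len(embedding))   # 2048
print(logits.shape[1])  # 1000
```

In Transformers.js, loading the checkpoint with `AutoModel` instead of `ResNetForImageClassification` should expose the base-model outputs; the `{ config: { embeddingSize: 2048 } }` argument passed to `model(...)` is, as far as I can tell, not a recognized option and has no effect (an assumption inferred from the observed 1000-dim output).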
https://github.com/huggingface/transformers.js/issues/482
closed
[ "question" ]
2023-12-28T11:38:20Z
2024-01-10T15:04:22Z
null
Spoutnik97
huggingface/diffusers
6,370
How to use diffusers lora in the AUTOMATIC1111
Thanks for your great work. I used train_text_to_image_lora_sdxl.py to train on my custom dataset, got this output, and the results are good. But I want to use the LoRA weights in AUTOMATIC1111: I moved pytorch_lora_weights to the AUTOMATIC1111 lora folder but get this error: `AssertionError: conversion failed: lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k_lora_A_weight. the model may not be trained by `sd-scripts`` ![image](https://github.com/huggingface/diffusers/assets/18145013/561a8450-af71-460f-a091-78eb96dcea20) How can I convert the LoRA weights to a format that AUTOMATIC1111 can accept?
https://github.com/huggingface/diffusers/issues/6370
closed
[]
2023-12-28T06:17:19Z
2024-01-02T13:38:26Z
null
chongxian
huggingface/computer-vision-course
163
How to include "What you'll learn" section for this course?
Hello everyone, Our PR for Fundamentals of Computer Vision was merged a few days back. After that, one thing we still need to address based on your [feedback](https://github.com/johko/computer-vision-course/issues/38#issuecomment-1764502604) on our chapter outline is building a demo using Gradio to give learners a taste of what they'll learn. One of our teammates, @aman06012003 , created a simple [Cat vs Dog classifier and deployed it on Hugging Face Spaces](https://ak0601-cat-dog-classifier.hf.space/), which we would like you to take a look at and give feedback on. Once the demo is finalized, there are two ways to include it, referring to the [Hugging Face Audio Course](https://huggingface.co/learn/audio-course/chapter0/introduction). One is to create a new .mdx file in our fundamentals folder. The other is to create a new chapter - Welcome to the course - where we add what you'll learn, community notes, etc. We are still determining the optimal path, so please guide us. Team members - @seshu-pavan , @bellabf , @aman06012003 bcc - @MKhalusova @johko @merveenoyan @lunarflu Best, Fundamentals team
https://github.com/huggingface/computer-vision-course/issues/163
closed
[]
2023-12-27T12:41:26Z
2024-04-26T13:36:59Z
null
seshupavan
huggingface/transformers
28,260
How to set pad_token of Llava for batched generation and training?
Hello, @younesbelkada I'm trying to use Llava for batched generation, using the default pad_token. here is the script: ```python import json from PIL import Image from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer from torch.utils.data import Dataset,DataLoader import torch import os from tqdm import tqdm DATA_ROOT = "/mnt/gozhang/code/LLaVA/playground/data/eval/mm-vet" processor = AutoProcessor.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf") tokenizer = AutoTokenizer.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf") class MMVetDataset(Dataset): def __init__(self,data_root) -> None: super().__init__() self.data_root = data_root with open(os.path.join(data_root, "mm-vet.json"), "r") as f: data = json.load(f) self.data = [(k,v) for k,v in data.items()] def __len__(self): return len(self.data) def __getitem__(self, index): return {'id':self.data[index][0], 'image':os.path.join(self.data_root,'images',self.data[index][1]['imagename']), 'question':"USER: <image>\n"+self.data[index][1]['question']+" ASSISTANT:"} def collator(batch): ids = [b['id'] for b in batch] questions = [b['question'] for b in batch] images = [Image.open(b['image']) for b in batch] inputs = processor(text=questions,images=images,return_tensors="pt",padding=True) return ids,inputs model = LlavaForConditionalGeneration.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf",torch_dtype=torch.float16) model.to('cuda') #model.to(torch.float16) dataset = MMVetDataset(DATA_ROOT) dataloader = DataLoader(dataset,batch_size=16,collate_fn=collator) results = {} bar = tqdm(total=len(dataset)) model.eval() with torch.inference_mode(): for ids, inputs in dataloader: inputs.to('cuda') inputs['pixel_values'] = inputs['pixel_values'].half() outputs = model.generate(**inputs,temperature=0.2,do_sample=True,max_new_tokens=1024,use_cache=True) input_token_len = inputs['input_ids'].shape[1] responses=tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True, 
clean_up_tokenization_spaces=False) for id,res in zip(ids,responses): results[id]=res bar.update(len(responses)) with open('mmvet_result.json','w') as f: json.dump(results,f,indent=4) ``` But when generating the fifth batch, it reports `RuntimeError: probability tensor contains either inf, nan or element < 0`. I then tried different pad_token settings: `processor.tokenizer.pad_token = processor.tokenizer.unk_token` (following the original LLaVA codebase), `processor.tokenizer.pad_token = processor.tokenizer.eos_token` (the common setting), and `processor.tokenizer.pad_token = processor.tokenizer.bos_token` (following this [issue](https://discuss.huggingface.co/t/llama2-pad-token-for-batched-inference/48020)). I find that only setting pad_token to eos_token avoids the error. What is the effect of the pad_token choice during batched generation, what is the root cause of this error, and how should the pad_token be set for training the model?
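One way such an error can arise (an illustration of the failure mode, not necessarily the root cause in this particular model): if padding and attention masking interact so that every candidate token at a sampling step ends up masked to `-inf`, the softmax over that row is NaN and sampling from it fails with exactly this message. A pure-Python sketch:

```python
import math

def softmax(logits):
    """Numerically stable softmax; a fully masked row has no valid distribution."""
    m = max(logits)
    if m == float("-inf"):  # every position masked out: softmax is undefined
        return [float("nan")] * len(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

ok = softmax([1.0, 2.0, float("-inf")])  # one masked position: fine
bad = softmax([float("-inf")] * 3)       # fully masked row: all NaN

print(ok[2])               # 0.0 - masked position gets zero probability
print(math.isnan(bad[0]))  # True - this is what multinomial sampling rejects
```

Why eos works as pad while unk/bos fail is model-specific (the pad token's embedding still flows through the vision-text fusion); this sketch only shows how the NaN itself appears.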
https://github.com/huggingface/transformers/issues/28260
closed
[]
2023-12-27T12:17:02Z
2024-02-05T02:43:32Z
null
TideDra
huggingface/transformers
28,259
How to add new merge rules in AutoTokenizer
### Model description I'm training a new tokenizer from llama2; however, it seems that the BPE trainer clears the original "vocab" and "merges" dicts, and the trained result is highly biased toward my own dataset (about 6M C functions), with some ugly tokens. Is it possible to train a tokenizer from llama2 with the original "vocab" and "merges" dicts unchanged, only adding some new vocab entries and merge rules from my dataset to support my requirement? ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
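What the question asks for amounts to appending new merge rules after the existing ones, so the original rules keep higher priority. A toy illustration of rank-ordered BPE merges (this is not the `tokenizers` API; for whole new tokens, `tokenizer.add_tokens(...)` is the usual route, while extending the merge list itself is what the issue reports as unsupported):

```python
def apply_merges(word, merges):
    """Greedy BPE: repeatedly apply the lowest-rank (earliest-listed) merge rule."""
    tokens = list(word)
    while True:
        # find all adjacent pairs that have a merge rule, with their ranks
        candidates = [(merges.index(pair), i)
                      for i, pair in enumerate(zip(tokens, tokens[1:]))
                      if pair in merges]
        if not candidates:
            return tokens
        _, i = min(candidates)  # best-ranked pair wins
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]

base_merges = [("l", "o"), ("lo", "w")]               # stand-in for the original merges
extended = base_merges + [("e", "r"), ("low", "er")]  # new rules appended at lower priority

print(apply_merges("lower", base_merges))  # ['low', 'e', 'r']
print(apply_merges("lower", extended))     # ['lower']
```

Because the new rules are appended rather than replacing the list, tokenization of text covered by the original merges is unchanged; only previously unmergeable sequences gain new tokens.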
https://github.com/huggingface/transformers/issues/28259
open
[ "New model" ]
2023-12-27T12:15:26Z
2023-12-27T12:15:26Z
null
Sandspeare
huggingface/accelerate
2,289
[QUESTION] Why is stage3_gather_16bit_weights_on_model_save forced to false regardless of its value in the DeepSpeed config?
[`accelerator._prepare_deepspeed()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L1464C13-L1464C82) appears to force `stage3_gather_16bit_weights_on_model_save` to `false`, which should raise an exception in [`accelerator.get_state_dict()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L2985C17-L2985C68). Additionally, [`trainer.save_model()`](https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/trainer.py#L2827C17-L2827C77) invokes the above function, catches this exception, and raises another one. Yet the log seems totally fine. I'm confused... why does this happen?
https://github.com/huggingface/accelerate/issues/2289
closed
[]
2023-12-27T10:04:28Z
2024-01-05T06:59:16Z
null
LaniakeaS
huggingface/diffusers
6,352
How to choose the save precision for the LoRA file in training
I'm unsure about my LoRA precision (fp16, bf16, float) and whether I can choose the precision of my LoRA weights. I searched the parameters of the **StableDiffusionXLPipeline.save_lora_weights** function used to save the LoRA in the SDXL text2img training script and didn't find a parameter like 'save_precision' or similar. Can anyone help? Thanks!
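Since `save_lora_weights` exposes no precision parameter (per the question), a common workaround (an assumption, not a documented diffusers API) is to cast the state-dict tensors yourself before passing them to the save call, e.g. `v.half()` in PyTorch. A NumPy stand-in for the cast (the key name is hypothetical):

```python
import numpy as np

# Stand-in for a LoRA state dict of float32 weight tensors
state_dict = {"unet.lora.up.weight": np.ones((4, 2), dtype=np.float32)}

# Casting before saving controls the on-disk precision
fp16_state = {k: v.astype(np.float16) for k, v in state_dict.items()}

print(fp16_state["unet.lora.up.weight"].dtype)  # float16
```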
https://github.com/huggingface/diffusers/issues/6352
closed
[]
2023-12-27T09:02:47Z
2023-12-28T08:21:29Z
null
DoctorTar
huggingface/transformers.js
481
Why do certain models not load?
### Question I was keen to try: https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 I tried: ```ts import { AutoModelForCausalLM, AutoTokenizer, } from '@xenova/transformers'; const autoTokenizer = await AutoTokenizer.from_pretrained( 'Upstage/SOLAR-10.7B-Instruct-v1.0', ); const model = await AutoModelForCausalLM.from_pretrained( 'Upstage/SOLAR-10.7B-Instruct-v1.0', ); ``` But it fails with an error: ```ts Error: Could not locate file: "https://huggingface.co/Upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/onnx/decoder_model_merged_quantized.onnx". ``` Is this an error on my side, is the model incompatible, ... ?
https://github.com/huggingface/transformers.js/issues/481
open
[ "question" ]
2023-12-27T01:44:52Z
2024-05-10T18:21:57Z
null
adaboese
huggingface/peft
1,298
[Question] What is the main difference between "modules_to_save" and "target_modules"?
Hi, in my work I need to add some special tokens to LLaMA, so I need to train the parameters of ["embed_tokens", "lm_head"] for both layers. What confuses me is whether I should add these to LoraConfig's "modules_to_save" or "target_modules". Looking forward to your reply!
https://github.com/huggingface/peft/issues/1298
closed
[]
2023-12-26T07:37:05Z
2024-02-03T15:03:27Z
null
SatireY
huggingface/datasets
6,534
How to configure multiple folders in the same zip package
How should I write the "configs" section in the README when all the data (train and test splits) is in a single zip file, i.e. a train folder and a test folder inside data.zip?
https://github.com/huggingface/datasets/issues/6534
open
[]
2023-12-26T03:56:20Z
2023-12-26T06:31:16Z
null
d710055071
huggingface/trl
1,140
How to fine-tune further with new data from a previous adapter?
Hi all, I have a question about fine-tuning. Currently I use SFTTrainer to fine-tune the Llama2-7b-chat model and save the result in adapter format. My question is: if I want to fine-tune further with new data, starting from the previous adapter, how should I do it? Normally I merge the adapter with the base model before fine-tuning again, but I'm not sure whether this method is correct, or whether there is an easier way. Thanks
https://github.com/huggingface/trl/issues/1140
closed
[]
2023-12-25T04:19:34Z
2024-02-01T15:05:24Z
null
SiraHaruethaipree
huggingface/optimum
1,613
Convert opus translation to onnx and run inference from it
To convert, I use this snippet ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers.models.marian import MarianOnnxConfig import onnxruntime as ort model_ckpt = "Helsinki-NLP/opus-mt-en-zh" tokenizer = AutoTokenizer.from_pretrained(model_ckpt) ref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt) feature = "seq2seq-lm" onnx_path = f"onnx/{model_ckpt}-{feature}/" !python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path} ``` For inference (which fails to run) I use this snippet ``` import torch from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForSeq2SeqLM model = ORTModelForSeq2SeqLM.from_pretrained("./onnx/Helsinki-NLP/opus-mt-en-zh-seq2seq-lm") ``` The error is ``` FileNotFoundError: Could not find any ONNX model file for the regex ['(.*)?decoder(.*)?with_past(.*)?\\.onnx'] ``` Maybe it tries to find model.onnx, but the folder contains two ONNX files: decoder_model.onnx and encoder_model.onnx. I think the snippet is from 2022; have there been any changes? Thanks
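The regex in the error message points at the likely cause: `ORTModelForSeq2SeqLM` with `use_cache` enabled (its default) looks for a decoder-with-past ONNX file, which the legacy `transformers.onnx` export above does not produce. Re-exporting with the current `optimum-cli export onnx` flow, or loading with caching disabled if your version supports it, would be the usual fix (assumptions; check the optimum docs for your version). A quick check of the error's regex against the exported file names shows why loading fails:

```python
import re

# Regex taken verbatim from the FileNotFoundError message
pattern = r"(.*)?decoder(.*)?with_past(.*)?\.onnx"

exported = ["encoder_model.onnx", "decoder_model.onnx"]  # what the legacy export produced
print(any(re.search(pattern, f) for f in exported))              # False: no file matches
print(bool(re.search(pattern, "decoder_with_past_model.onnx")))  # True: the file it wanted
```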
https://github.com/huggingface/optimum/issues/1613
closed
[]
2023-12-25T04:04:47Z
2025-04-29T01:45:20Z
5
x4080
huggingface/chat-ui
658
chat-ui does not support a TGI http URL when deployed publicly
hi @nsarrazin, chat-ui works well locally ~~~ # .env.local endpoints: [{"type":"tgi","url":"http://127.0.0.1:8080/generate_stream"}] ~~~ but if I deploy it publicly and chat from an external browser, I get a 403 error: ~~~ 403 You don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead. ~~~ This may be related to https://github.com/huggingface/chat-ui/issues/364 It seems that chat-ui only supports https URLs, while TGI only serves http URLs, which conflicts. How can this be fixed?
https://github.com/huggingface/chat-ui/issues/658
closed
[]
2023-12-25T03:08:10Z
2024-04-25T16:27:52Z
1
walkacross
huggingface/transformers.js
475
How to use your own models
### Question Hey, I really appreciate your work here! I'm very interested in setting up a perfect RAG pipeline / flow, and therefore I need good document extraction with table transformers and layout detection. Example: https://github.com/deepdoctection/deepdoctection I'd use https://huggingface.co/microsoft/layoutlmv3-base https://huggingface.co/microsoft/table-transformer-detection I could ask you to add one of these, but I want to try it myself. As I understand it, I can use your conversion script and deploy the converted model on my huggingface.co account so I can consume it. Is this right?
https://github.com/huggingface/transformers.js/issues/475
closed
[ "question" ]
2023-12-24T21:38:02Z
2024-05-15T09:32:26Z
null
DomEscobar
huggingface/datasets
6,530
Impossible to save a mapped dataset to disk
### Describe the bug I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py). After I do the mapping like this: ``` train_dataset = train_dataset.map(compute_embeddings_fn, batched=True) train_dataset = train_dataset.map( compute_vae_encodings_fn, batched=True, batch_size=16, ) ``` and try to save it like this: `train_dataset.save_to_disk("test")` i get this error ([full traceback](https://pastebin.com/kq3vt739)): ``` TypeError: Object of type function is not JSON serializable The format kwargs must be JSON serializable, but key 'transform' isn't. ``` But what is interesting is that pushing to hub works like that: `train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)` Here is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset ### Steps to reproduce the bug Here is the self-contained notebook: https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing ### Expected behavior It should be easily saved to disk ### Environment info NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2. [pip freeze](https://pastebin.com/QTNb6iru)
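The error comes from `save_to_disk` trying to JSON-serialize the dataset's format state, which still holds the on-the-fly `transform` function (attached via `with_transform`/`set_transform` somewhere in the pipeline); `push_to_hub` does not serialize that state, which is why it succeeds. Resetting the format before saving, e.g. `train_dataset = train_dataset.with_format(None)` (an assumption about the applicable `datasets` call), should avoid it. A minimal reproduction of the underlying serialization failure:

```python
import json

def transform(batch):  # stand-in for the transform attached to the dataset
    return batch

format_kwargs = {"transform": transform}  # what save_to_disk tries to serialize

try:
    json.dumps(format_kwargs)
    raised = False
except TypeError as e:
    raised = True
    print(e)  # Object of type function is not JSON serializable

format_kwargs["transform"] = None  # after resetting the format, serialization works
print(json.dumps(format_kwargs))   # {"transform": null}
```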
https://github.com/huggingface/datasets/issues/6530
open
[]
2023-12-23T15:18:27Z
2023-12-24T09:40:30Z
1
kopyl
huggingface/sentence-transformers
2,392
util.paraphrase_mining returning scores only above 0.98
Hey, I'm using util.paraphrase_mining (sentence-transformers v2.2.2) to get similarity scores (cosine) in a corpus of ~20k texts, with the encoder model being all-MiniLM-L6-v2 and with the parameters query_chunk_size=500, corpus_chunk_size=1000, top_k=500000, max_pairs=5000000. The returned list of triplets contains scores only above 0.98. I was wondering why the lower scores don't appear. Thanks in advance for your answer!
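A likely explanation (an inference from the parameters given, not a confirmed diagnosis): with ~20k texts there are roughly 200M candidate pairs, and `paraphrase_mining` returns at most `max_pairs` triplets sorted by descending score, so `max_pairs=5000000` keeps only the top ~2.5% of scores, which for a homogeneous corpus can all sit above 0.98. The lower scores are cut off by the truncation, not absent from the computation. The effect in miniature:

```python
import random

random.seed(0)
scores = [random.random() for _ in range(1000)]  # stand-in cosine scores for all pairs
max_pairs = 50
kept = sorted(scores, reverse=True)[:max_pairs]  # keep only the highest-scoring pairs

# The smallest returned score is the truncation threshold, not the corpus minimum
print(min(kept) > min(scores))  # True
```

Raising `max_pairs` (at the cost of memory) or running `util.semantic_search` per query would surface lower-scoring pairs.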
https://github.com/huggingface/sentence-transformers/issues/2392
closed
[ "question" ]
2023-12-23T13:00:27Z
2024-01-29T14:20:33Z
null
sinangokce
huggingface/chat-ui
656
Web Search failed with "Invalid URL"
![image](https://github.com/huggingface/chat-ui/assets/4380009/229430b6-6d10-495f-be66-c5bc54f6061d) Why is this happening? It seems to happen regardless of whether I have USE_LOCAL_WEBSEARCH set to true or false. ``` SERPAPI_KEY=<my key> USE_LOCAL_WEBSEARCH=true MODELS=`[ { "name": "mistralai/Mixtral-8x7b-Instruct-v0.1", "displayName": "mistralai/Mixtral-8x7b-Instruct-v0.1", "description": "Mixtral-8x7b-Instruct-v0.1 is a state of the art language model, based on a mixture of experts, that outperforms ChatGPT.", "websiteUrl": "https://www.aaprintsupplyco.com", "preprompt": "", "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.4, "top_p": 0.95, "top_k": 50, "truncate": 31768, "max_new_tokens": 2048, "stop": ["[INST]","</s>"] }, "endpoints" : [{ "type": "openai", "baseURL": "https://api.together.xyz/v1" }], "promptExamples": [ { "title": "Write a blog post", "prompt": "Your goal is to help me create a compelling blog post about a topic.\nYou will follow the following process:\n\n1. Ask me for the topic of the blog post.\n2. After I provide my answer you will need to collect some additional information by going through the next steps:\na) Questions (ask any relevant questions pertaining to what additional information is needed from me to write a good blog post).\n\nOnce you have enough information, or once I say I am done, you will write the blog post." }, { "title": "Improve my English", "prompt": "I want you to act as an English grammar and spelling corrector and improver. I will speak to you and you will answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with improved, higher level English words and sentences. Keep the meaning same, but make them sound better. 
I want you to only reply the correction, the improvements and nothing else, do not write explanations. If there is nothing to improve, just reply with the original text." }, { "title": "Assist in a task", "prompt": "I want you to be my Prompt engineer. Your goal is to help me craft the best possible instruction prompt for my needs. The prompt will be used by you, an AI model. You will follow the following process:\n\n1. Your first response will be to simply ask me what the task I want to accomplish. \n2. After I provide my answer and you will generate a first iteration of the prompt, but we will need to improve it through continual iterations by going through the next steps. You will generate two sections:\na) Revised prompt (provide your rewritten prompt, it should be clear, concise, and easily understood by you),\nb) Questions (ask any relevant questions pertaining to what additional information is needed from me to improve the prompt).\n3. We will continue this iterative process with me providing additional information to you and you updating the prompt in the Revised prompt section until I say we are done.\n\nOnly after I say I am done, will you provide a response to the revised prompt." } ] }, { "name": "openchat/openchat-3.5-1210", "displayName": "openchat/openchat-3.5-1210", "description": "OpenChat 3.5 is the #1 model on MT-Bench, with only 7B parameters. 
Small and fast.", "websiteUrl": "https://www.aaprintsupplyco.com", "preprompt": "", "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.4, "top_p": 0.95, "top_k": 50, "truncate": 8192, "max_new_tokens": 1024, "stop": ["<|end_of_turn|>","</s>"] }, "endpoints" : [{ "type": "openai", "baseURL": "https://api.together.xyz/v1" }], "promptExamples": [ { "title": "Write a blog post", "prompt": "Your goal is to help me create a compelling blog post about a topic.\nYou will follow the following process:\n\n1. Ask me for the topic of the blog post.\n2. After I provide my answer you will need to collect some additional information by going through the next steps:\na) Questions (ask any relevant questions pertaining to what additional information is needed from me to write a good blog post).\n\nOnce you have enough information, or once I say I am done, you will write the blog post." }, { "titl
https://github.com/huggingface/chat-ui/issues/656
closed
[]
2023-12-22T19:19:34Z
2024-01-09T05:45:13Z
5
gururise
huggingface/chat-ui
655
Generation failed (Module.summarize) when using TogetherAI openai compatible endpoint
TogetherAI offers an [OpenAI compatible endpoint](https://docs.together.ai/docs/openai-api-compatibility). When using this endpoint with the model setup as follows: ``` MODELS=`[ { "name": "mistralai/Mixtral-8x7b-Instruct-v0.1", "displayName": "Mixtral-8x7b", "endpoints" : [{ "type": "openai", "baseURL": "https://api.together.xyz/v1" }], "promptExamples": [ { "title": "Write an email from bullet list", "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)" }, { "title": "Code a snake game", "prompt": "Code a basic snake game in python, give explanations for each step." }, { "title": "Assist in a task", "prompt": "How do I make a delicious lemon cheesecake?" } ] } ]` TASK_MODEL=`{ "name": "openchat/openchat-3.5-1210", "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.1, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 3072, "max_new_tokens": 1024, "stop": ["<|end_of_turn|>","</s>"] }, "endpoints" : [{ "type": "openai", "baseURL": "https://api.together.xyz/v1" }] }` ``` Inference and streaming work just fine with the output displayed in the chat window; however, in the console, the **following error always appears** after every interaction, and the conversation titles are never summarized. 
``` Error: Generation failed at Module.generateFromDefaultEndpoint (/home/gene/Downloads/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:22:9) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async Module.summarize (/home/gene/Downloads/chat-ui/src/lib/server/summarize.ts:28:10) at async eval (/home/gene/Downloads/chat-ui/src/routes/conversation/[id]/+server.ts:167:26) ``` Even if I try setting TASK_MODEL='mistralai/Mixtral-8x7b-Instruct-v0.1', I still get this error.
https://github.com/huggingface/chat-ui/issues/655
open
[]
2023-12-22T17:34:59Z
2024-01-23T05:14:26Z
1
gururise
huggingface/datasets
6,529
Impossible to only download a test split
I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function. Then, after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558), I realized that `download_and_prepare` is executed before the split is passed to the dataset builder in `as_dataset`. If I'm not missing something, this seems like bad design for the following use case: > Imagine there is a huge dataset that has an evaluation test set, and you just want to download that split to compare your method. Is there a current workaround that can help me achieve the same result? Thank you,
https://github.com/huggingface/datasets/issues/6529
open
[]
2023-12-22T16:56:32Z
2024-02-02T00:05:04Z
2
ysig
huggingface/transformers.js
470
How to convert a model with a .pt extension
### Question I'm new to this area. I'm wondering how to convert a model with a .pt extension? Thanks a lot
https://github.com/huggingface/transformers.js/issues/470
open
[ "question" ]
2023-12-22T10:20:16Z
2023-12-23T20:46:37Z
null
Bzayyz
huggingface/transformers.js
469
How to convert a model with a .pt extension
### Question I'm new to this area. I'm wondering how to convert a model with a .p2 extension? Thanks a lot
https://github.com/huggingface/transformers.js/issues/469
closed
[ "question" ]
2023-12-22T10:20:05Z
2023-12-22T10:20:54Z
null
Bzayyz
huggingface/chat-ui
650
chat-ui Docker image fails to connect to the mongo Docker container
step 1: build the chat-ui image ~~~ docker build -t chat-ui -f ./Dockerfile.local . ~~~ step 2: ~~~ # bind the 27016 docker run -d -p 27016:27017 --name mongo-chatui mongo:latest ~~~ step 3: run a contrainer ~~~ # add a .env.local config MONGODB_URL=mongodb://localhost:27016 HF_TOKEN=<your access token> ~~~ ~~~ docker run --rm --mount type=bind,source="$(pwd)/.env.local",target=/app/.env.local -p 3000:3000 chat-ui ~~~ ## results: when load localhost:3000 ~~~ MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27016 at Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:278:38) at listOnTimeout (node:internal/timers:573:17) at process.processTimers (node:internal/timers:514:7) { reason: TopologyDescription { type: 'Unknown', servers: Map(1) { 'localhost:27016' => [ServerDescription] }, stale: false, compatible: true, heartbeatFrequencyMS: 10000, localThresholdMS: 15, setName: null, maxElectionId: null, maxSetVersion: null, commonWireVersion: 0, logicalSessionTimeoutMinutes: null }, code: undefined, [Symbol(errorLabels)]: Set(0) {} } MongoTopologyClosedError: Topology is closed at /app/node_modules/mongodb/lib/sdam/topology.js:218:46 { [Symbol(errorLabels)]: Set(0) {} } MongoTopologyClosedError: Topology is closed at processWaitQueue (/app/node_modules/mongodb/lib/sdam/topology.js:514:46) at Topology.selectServer (/app/node_modules/mongodb/lib/sdam/topology.js:283:9) at Topology.<anonymous> (/app/node_modules/mongodb/lib/sdam/topology.js:42:94) at node:internal/util:442:7 at new Promise (<anonymous>) at Topology.selectServerAsync (node:internal/util:428:12) at executeOperationAsync (/app/node_modules/mongodb/lib/operations/execute_operation.js:74:35) at /app/node_modules/mongodb/lib/operations/execute_operation.js:12:45 at maybeCallback (/app/node_modules/mongodb/lib/utils.js:293:21) at executeOperation (/app/node_modules/mongodb/lib/operations/execute_operation.js:12:38) { [Symbol(errorLabels)]: Set(0) {} } ~~~ @nsarrazin
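Inside the chat-ui container, `localhost` refers to the container itself, not the host, so `mongodb://localhost:27016` can never reach the mongo container published on the host. Two standard Docker-networking fixes (assumptions, nothing chat-ui specific): point the URL at the host gateway, or put both containers on the same Docker network. For the first option:

```
# .env.local: reach mongo through the host gateway
# (works out of the box on Docker Desktop; on Linux, add
#  --add-host=host.docker.internal:host-gateway to the chat-ui docker run command)
MONGODB_URL=mongodb://host.docker.internal:27016
```

For the second option, create a network (`docker network create chat`), start both containers with `--network chat`, and use `MONGODB_URL=mongodb://mongo-chatui:27017` so the container name resolves directly.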
https://github.com/huggingface/chat-ui/issues/650
open
[ "support", "docker" ]
2023-12-22T08:34:52Z
2025-05-25T20:37:17Z
6
walkacross
huggingface/chat-ui
649
Formatting is incorrect when using LiteLLM (Together.ai)
I'm using Mixtral-7b-Instruct-v0.1 via [LiteLLM](https://github.com/BerriAI/litellm) to provide a OpenAI compatible API to together.ai where the model is hosted. Everything works fine, including streaming; however, the formatting is messed up as shown. Any ideas why? ![image](https://github.com/huggingface/chat-ui/assets/4380009/6855fad2-288f-403e-9ab8-1f2f409fe5c9)
https://github.com/huggingface/chat-ui/issues/649
closed
[ "bug", "question", "front", "models" ]
2023-12-22T05:46:37Z
2023-12-22T17:11:09Z
null
gururise
huggingface/distil-whisper
67
I can only use its encoder to extract audio features, right? How should I use it? Could you provide an example
I can only use its encoder to extract audio features, right? How should I use it? Could you provide an example?
https://github.com/huggingface/distil-whisper/issues/67
open
[]
2023-12-22T03:50:32Z
2024-01-15T18:07:34Z
null
wvinzh
huggingface/transformers.js
468
Node.js
### Question Will this library work with Node.js?
https://github.com/huggingface/transformers.js/issues/468
closed
[ "question" ]
2023-12-21T23:03:36Z
2023-12-21T23:06:53Z
null
Julianbullmagic
huggingface/gsplat.js
47
I don't need the loading and onProgress callbacks. When the data is loaded, how can I render it on the interface immediately?
I don't need the loading callback. When the data is loaded, how can I render it on the interface immediately? I see the Loader class, but nothing's been done there.
https://github.com/huggingface/gsplat.js/issues/47
closed
[]
2023-12-21T20:13:52Z
2024-01-29T20:15:01Z
null
did66
huggingface/candle
1,463
How to introduce OpenAI Triton in candle?
Handwritten CUDA operators are very complicated. How can we use OpenAI Triton in candle to simplify this process? :)
https://github.com/huggingface/candle/issues/1463
open
[]
2023-12-21T18:42:38Z
2024-01-01T11:56:29Z
null
tyfeng1997
huggingface/transformers
28,179
How to fine tune facebook/esm2_t33_650M_UR50D
### System Info
How to fine-tune facebook/esm2_t33_650M_UR50D? It's too big, and `model.half()` couldn't work. Besides, I always met the error: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`. Is it possible that the model on the Hugging Face Hub is wrong? The following is the script:

```python
from os.path import join
import os
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as data
import transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer
from datasets import Dataset, load_metric
from sklearn.model_selection import train_test_split

#os.environ['CUDA_VISIBLE_DEVICES'] = '1'
CURRENT_DIR = os.getcwd()
check_point = join(CURRENT_DIR, "esm1b_t33_650M_UR50S")

# Data processing
def process_tsv(file):
    sequences = list()
    labels = list()
    df = pd.read_csv(file, sep="\t")
    for ind in df.index:
        sequences.append(df["sequence"][ind])
        labels.append(df["label"][ind])
    return sequences, labels

def tokenize_add_label(sequences, labels, tokenizer):
    """This function takes sequences and labels, creates a Dataset containing
    tokenized sequences and adds labels to it
    args:
        sequences (str): a list of sequences
        labels (int): a list of labels
        tokenizer: a pre-trained tokenizer
    return:
        Dataset: tokenized sequences and associated labels"""
    sequences_tokenized = tokenizer(sequences, padding=True, truncation=True)
    sequences_tokenized = torch.float16(sequences_tokenized)
    labels = torch.tensor(labels)
    labels = labels.long()
    sequences_dataset = Dataset.from_dict(sequences_tokenized)
    sequences_dataset = sequences_dataset.add_column("labels", labels)
    return sequences_dataset

sequences, labels = process_tsv(join(CURRENT_DIR, "example.tsv"))
tokenizer = AutoTokenizer.from_pretrained(check_point)
sequences_dataset = tokenize_add_label(sequences, labels, tokenizer)
num_labels = max(labels) + 1
model = AutoModelForSequenceClassification.from_pretrained(check_point, num_labels=num_labels)
#device = "cuda" if torch.cuda.is_available() else "cpu"
#model.to(device)
model.cuda()
#model = model.half()
#model.enable_input_require_grads()
model_name = check_point.split("/")[-1]
trainer_dir = f"{model_name}-finetuned-model_esm-1b_on_7beta"
if not os.path.exists(trainer_dir):
    os.mkdir(trainer_dir)
batch_size = 1
training_args = transformers.TrainingArguments(
    output_dir=trainer_dir,  # output directory
    overwrite_output_dir=True,
    num_train_epochs=3,  # total number of training epochs
    per_device_train_batch_size=batch_size,  # batch size per device during training
    per_device_eval_batch_size=batch_size,  # batch size for evaluation
    learning_rate=2e-5,
    warmup_steps=500,  # number of warmup steps for learning rate scheduler
    weight_decay=0.01,  # strength of weight decay
    logging_dir=trainer_dir,  # directory for storing logs
    logging_steps=10,
    load_best_model_at_end=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=1,
    metric_for_best_model="accuracy",
    greater_is_better=True,
    disable_tqdm=True,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True
)
metric = load_metric(join(CURRENT_DIR, "metrics", "accuracy/accuracy.py"))

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    print("logits", logits)
    print("labels", labels)
    predictions = np.argmax(logits, axis=-1)
    print("predictions", predictions)
    return metric.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=sequences_dataset,
    eval_dataset=sequences_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
model.config.problem_type
trainer.train()
trainer.state.log_history
```

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. Some weights of EsmForSequenceClassification were not initialized from the model checkpoint at /home/wangmuqiang/fine_tune_esm2/esm1b_t33_650M_UR50S and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it fo
https://github.com/huggingface/transformers/issues/28179
closed
[]
2023-12-21T09:50:27Z
2024-01-30T08:03:39Z
null
Admire7494
huggingface/alignment-handbook
81
Why we use a lower batch size when comparing SFT lora with SFT full fine-tuning ?
https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_lora.yaml
https://github.com/huggingface/alignment-handbook/issues/81
closed
[]
2023-12-20T21:09:33Z
2024-01-07T21:03:14Z
2
shamanez
huggingface/trl
1,115
How to prepare multi-turn dialogue dataset for dpo?
the single-turn dialogue dataset is like:

```python
dpo_dataset_dict = {
    "prompt": [
        "hello",
        "how are you",
        "What is your name?",
        "What is your name?",
        "Which is the best programming language?",
        "Which is the best programming language?",
        "Which is the best programming language?",
    ],
    "chosen": [
        "hi nice to meet you",
        "I am fine",
        "My name is Mary",
        "My name is Mary",
        "Python",
        "Python",
        "Java",
    ],
    "rejected": [
        "leave me alone",
        "I am not fine",
        "Whats it to you?",
        "I dont have a name",
        "Javascript",
        "C++",
        "C++",
    ],
}
```

So, how to prepare a multi-turn dialogue dataset? Can you provide an example? Thank you!
https://github.com/huggingface/trl/issues/1115
closed
[ "🏋 DPO" ]
2023-12-20T09:14:45Z
2024-10-03T14:12:48Z
null
chloefresh
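For the multi-turn case, one common convention (a sketch, not an official TRL recipe; the exact chat template depends on your model) is to fold all earlier turns into the `prompt` string, so that `chosen`/`rejected` contain only the final assistant reply being compared:

```python
# Sketch: flatten a multi-turn conversation into DPO's prompt/chosen/rejected
# format. The "User:"/"Assistant:" tags are an assumption for illustration;
# in practice use your model's own chat template.
def build_dpo_row(turns, chosen_reply, rejected_reply):
    """turns: list of (role, text) pairs for every turn before the final reply."""
    prompt = "".join(f"{role}: {text}\n" for role, text in turns)
    prompt += "Assistant:"  # the model is asked to complete the last turn
    return {
        "prompt": prompt,
        "chosen": f" {chosen_reply}",
        "rejected": f" {rejected_reply}",
    }

row = build_dpo_row(
    turns=[
        ("User", "hello"),
        ("Assistant", "hi nice to meet you"),
        ("User", "What is your name?"),
    ],
    chosen_reply="My name is Mary",
    rejected_reply="I dont have a name",
)

dpo_dataset_dict = {
    "prompt": [row["prompt"]],
    "chosen": [row["chosen"]],
    "rejected": [row["rejected"]],
}
```

Each conversation then yields one row per preference-labeled assistant turn, with the full history serialized into `prompt`.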
huggingface/transformers
28,155
What is the minimum video memory required to run the Mixtral-8x7B model?
I mean the model that just came out: mistralai/Mixtral-8x7B-Instruct-v0.1. It looks like a lot of parameter files. What is the minimum NVIDIA graphics card video memory required?
https://github.com/huggingface/transformers/issues/28155
closed
[]
2023-12-20T01:54:45Z
2024-01-28T08:04:44Z
null
zysNLP
huggingface/dataset-viewer
2,218
JobManagerCrashedError jobs are never retried
Currently, we have 7768 jobs with error_code `JobManagerCrashedError`. Some of them are set as crashed by the zombie killer.

```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.aggregate([{$match:{error_code:"JobManagerCrashedError","details.copied_from_artifact":{$exists:false}}},{$group:{_id:{kind:"$kind"},count:{$sum:1}}},{$sort:{count:-1}}])
[
  { _id: { kind: 'split-duckdb-index' }, count: 3658 },
  { _id: { kind: 'split-descriptive-statistics' }, count: 1872 },
  { _id: { kind: 'config-parquet-and-info' }, count: 1765 },
  { _id: { kind: 'split-first-rows-from-streaming' }, count: 322 },
  { _id: { kind: 'split-first-rows-from-parquet' }, count: 72 },
  { _id: { kind: 'split-opt-in-out-urls-scan' }, count: 60 },
  { _id: { kind: 'dataset-config-names' }, count: 21 }
]
```

But most of them are set as crashed when deploying and are never retried, even if they are fast and straightforward to process. Should we retry those jobs in backfill? I think we should differentiate the ones that are easy to process from those that are difficult (primarily because of OOMs), maybe retry once or twice, and set a different error so that we can identify which of them are caused by limited resources.
https://github.com/huggingface/dataset-viewer/issues/2218
closed
[ "question" ]
2023-12-19T15:22:30Z
2024-01-09T20:32:58Z
null
AndreaFrancis
huggingface/optimum
1,608
XENOVA conversion issues
### System Info
```shell
using the requirements.txt in Xenova for environment.
https://github.com/xenova/transformers.js/blob/main/scripts/requirements.txt
```

### Who can help?
@xenova

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction (minimal, reproducible, runnable)
"Error while initializing BPE: Token `_</w>` out of vocabulary"

### Expected behavior
Been trying to run blenderbot 90M, 400M, and 1B distilled. Have had lots of issues, but I'll start with this one.

Version 1 attempt, loading from local after a git-lfs clone of the HF repo:
```python
tokenizer = AutoTokenizer.from_pretrained(model)
model = ORTModelForSeq2SeqLM.from_pretrained(model)
inputs = tokenizer("what is a black hole", return_tensors="pt")
gen_tokens = model.generate(**inputs)
response = tokenizer.batch_decode(gen_tokens)
```

Version 2 attempt, directly from the repo using pipeline:
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Xenova/blenderbot_small-90M")
model = ORTModelForSeq2SeqLM.from_pretrained("Xenova/blenderbot_small-90M")
onnx_pipe = pipeline("conversational", model=model, tokenizer=tokenizer)
text = "what is a black hole"
response = onnx_pipe(text)
```

In both cases I am getting this error: "Error while initializing BPE: Token `_</w>` out of vocabulary"
https://github.com/huggingface/optimum/issues/1608
closed
[ "bug" ]
2023-12-19T02:11:58Z
2023-12-19T04:54:00Z
3
gidzr
huggingface/safetensors
409
Doesn't work with versions of torch where "meta" dtype is not supported.
### System Info
This is on my mac where I was just testing the interface. It seems like this could easily be fixed.

```
...
>>> from safetensors.torch import save_file
>>> x
{'a': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])}
>>> x['a'].device
device(type='cpu')
>>> save_file(x, filename='foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/safetensors/torch.py", line 281, in save_file
    serialize_file(_flatten(tensors), filename, metadata=metadata)
  File "/usr/local/lib/python3.9/site-packages/safetensors/torch.py", line 460, in _flatten
    shared_pointers = _find_shared_tensors(tensors)
  File "/usr/local/lib/python3.9/site-packages/safetensors/torch.py", line 72, in _find_shared_tensors
    if v.device != torch.device("meta") and storage_ptr(v) != 0 and storage_size(v) != 0:
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta
>>> safetensors.__version__
'0.4.1'
>>> torch.__version__
'1.8.1'
```

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Reproduction
Install torch 1.8.1 and safetensors 0.4.1 (this is the current safetensors version in the pip default channel) and run the code above (sorry I have not reduced this to a script, but it's the most minimal example of using safetensors).

### Expected behavior
save_file should work with older versions of torch, like 1.8.1.
https://github.com/huggingface/safetensors/issues/409
closed
[ "Stale" ]
2023-12-18T15:51:28Z
2024-01-23T01:49:25Z
null
danpovey
huggingface/candle
1,457
How to do to quantize manually a phi-2 version, starting from safetensors file
Hi, I have fine-tuned a phi-2 model using LoRA and merged the adapter with the base model to get a trained one. I now have a bunch of safetensors files. How is it possible to convert these files into a gguf file (the llama.cpp converter does not support phi)? In other words, how can I produce the same kind of artifact as model-v2-q4k.gguf in lmz/candle-quantized-phi?
https://github.com/huggingface/candle/issues/1457
closed
[]
2023-12-18T15:14:37Z
2023-12-18T15:58:12Z
null
ghost
huggingface/optimum
1,605
Static Quantization - Token classification
Hi, I am following the code [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for doing static quantization on my token classification model. The inference time for the statically quantized model is almost the same as the non-quantized one. I have tried dynamic quantization too, and it shows some latency improvement, but I need more. Do I have to do anything beyond what is described [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) to lower the inference time with static quantization? Can anyone please help me?
https://github.com/huggingface/optimum/issues/1605
open
[ "quantization" ]
2023-12-18T13:31:33Z
2024-10-09T09:21:22Z
0
akshay-babbar
huggingface/diffusers
6,211
[Examples] When will you support training scripts for text-to-video in diffusers?
I want to train SVD (Stable Video Diffusion) in diffusers. Can you support this feature in the examples? Thanks for your contributions.
https://github.com/huggingface/diffusers/issues/6211
closed
[ "stale" ]
2023-12-18T08:26:57Z
2024-01-26T15:05:32Z
null
jiaxiangc
huggingface/optimum
1,604
Table Transformer to ONNX
### Feature request
Hi all, I am trying to convert the Table Transformer model from transformers (pretrained) to ONNX. The error reads something like "'table-transformer' is not a supported format". Is there any way to convert Table Transformer (TATR) to an ONNX model? Any help would be cherished. Thanks.

### Motivation
The motivation for this is that I am working on developing a lightweight table structure recognition model; an ONNX model would help me in that regard.

### Your contribution
None
https://github.com/huggingface/optimum/issues/1604
closed
[ "feature-request", "onnx" ]
2023-12-18T07:18:21Z
2024-02-28T08:52:49Z
3
balajiChundi
huggingface/safetensors
407
Does safetensors save the model's hierarchical structure? Is it similar to ONNX?
If safetensors saves the model's hierarchical structure, how can one access this structure? Is it possible to read it directly, like with ONNX? Can I directly load a model from safetensors? If the hierarchical structure of the model is not preserved, does that mean the original model must be reconstructed from config.json?
https://github.com/huggingface/safetensors/issues/407
closed
[ "Stale" ]
2023-12-17T15:04:55Z
2024-02-24T01:45:09Z
3
ZDragonX
huggingface/datasets
6,507
Where is glue_metric.py?
> @Frankie123421 what was the resolution to this? use glue_metric.py instead of glue.py in load_metric _Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
https://github.com/huggingface/datasets/issues/6507
closed
[]
2023-12-17T09:58:25Z
2023-12-18T11:42:49Z
null
Mcccccc1024
huggingface/peft
1,278
How to add trainable parameters? (bugs in 'modules_to_save')
### System Info
Hi, how can I train other weights in the model, rather than keeping them fixed, during LoRA training?

### Who can help?
@BenjaminBossan Hi, I find you are active recently, so I @ you here.

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)

### Reproduction
```
self.model, self.peft_optimizer, _, self.peft_lr_scheduler = deepspeed.initialize(
    config=training_args.deepspeed,
    model=model,
    model_parameters=optimizers['model_parameters'] if self.training_args.do_train else None,
    optimizer=hf_optimizer,
    lr_scheduler=hf_lr_scheduler
)
```
I add the parameters I want to train in `hf_optimizer`, but those parameters still do not change.

### Expected behavior
The gradient of the parameters added to `hf_optimizer` should not be None.
https://github.com/huggingface/peft/issues/1278
closed
[]
2023-12-17T05:34:09Z
2024-01-29T15:03:39Z
null
shawnricecake
huggingface/accelerate
2,262
When training with two processes, the parameter gradients could not be shared and I ended up with two different models. How can I solve this?
When training with two processes, the parameter gradients could not be shared and I ended up with two different models. Did anyone meet this problem before? How can I solve it?
https://github.com/huggingface/accelerate/issues/2262
closed
[]
2023-12-15T13:48:34Z
2024-06-11T12:26:07Z
null
zypsjtu
huggingface/datasets
6,501
OverflowError: value too large to convert to int32_t
### Describe the bug
![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630)

### Steps to reproduce the bug
Just loading datasets.

### Expected behavior
How can I fix it?

### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done
https://github.com/huggingface/datasets/issues/6501
open
[]
2023-12-15T10:10:21Z
2025-06-27T04:27:14Z
1
zhangfan-algo
huggingface/diffusers
6,178
How to train Stable Diffusion with DDPM?
I want to train Stable Diffusion with DDPM, but I can't find the code in this project. I found a lot of training code elsewhere on the internet, but most of it is distillation code on pre-trained models, not the original DDPM training code. I also tried to implement the original training code myself, but I couldn't get good results. Could you provide me with the code for this part if it's convenient for you?
https://github.com/huggingface/diffusers/issues/6178
closed
[]
2023-12-15T02:43:07Z
2023-12-15T02:54:06Z
null
MenSanYan
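For reference, the DDPM training objective the question refers to is small: sample a timestep, noise the clean sample with the closed-form forward process, and regress the added noise. The toy sketch below uses plain Python scalars to show the arithmetic; a real script would use diffusers' `DDPMScheduler.add_noise` and a UNet, so this is only an illustration of the objective, not Stable Diffusion training code:

```python
import math
import random

# Linear beta schedule and cumulative products (alpha_bar), as in DDPM.
T = 10
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alpha_bars = []
prod = 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)

def q_sample(x0, t, eps):
    """Forward process: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

# One "training step" on a scalar datum: the model predicts eps from (x_t, t)
# and is trained with an MSE loss against the true eps.
x0 = 0.7
t = random.randrange(T)
eps = random.gauss(0.0, 1.0)
xt = q_sample(x0, t, eps)
eps_pred = 0.0          # stand-in for a UNet prediction
loss = (eps_pred - eps) ** 2
```

In latent-diffusion training the same loop runs on VAE latents with the text encoder's embeddings as conditioning, which is why plain DDPM code transfers to Stable Diffusion mostly unchanged.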
huggingface/dataset-viewer
2,208
Add a collection with datasets infos
While working on enabling private datasets (#39) under conditions (isPro, isEnterprise), I thought we missed a place where we control access to the dataset. I think the first step in the DAG, instead of dataset-config-names, should be more about the dataset characteristics: whether it's private or public, maybe whether it's gated (not sure if that's useful info), whether the user is pro or the org is enterprise, whether the viewer is disabled through the README (see https://github.com/huggingface/datasets-server/issues/2207), and whether the dataset is in the block list. All that information could go into a new step called `dataset-status` or something similar. The content could be:

```json
{
  "dataset": "namespace/dataset",
  "private": true,
  "proUser": false,
  "enterpriseOrg": true,
  "disabledFromReadme": false,
  "gated": false,
  "blocked": false
}
```

And a second step, called `dataset-enabled`, that would depend on `dataset-status` and would return:

- 200 `{enabled: true}` if all the conditions are met
- 404 if we don't want to disclose the existence of the dataset, or if it does not exist
- 501 if it's not implemented
- 403? 404? if the dataset viewer is not enabled (private dataset, no pro user/enterprise org)

Then the following steps would propagate the error, or, on 200, act as currently. I think it's clearer to have two different steps: one to collect the data, another to take a decision on that basis. We could also have everything in one cache entry, but I think the maintenance logic would be harder (we would have to add info like: is the dataset private, is the user pro, etc. in the error details, or in the content, etc. to be able to check them regularly).
https://github.com/huggingface/dataset-viewer/issues/2208
closed
[ "question", "refactoring / architecture", "P2" ]
2023-12-14T13:59:42Z
2024-01-11T14:30:03Z
null
severo
huggingface/dataset-viewer
2,207
Backfill job processes datasets with disabled viewer?
If I read the code correctly, the backfill cronjob does not check if the dataset viewer is disabled (`viewer: false` in the README). If we want to implement the dataset viewer for private datasets, under conditions (isPro, isEnterprise), we will have to check these conditions before adding jobs.
https://github.com/huggingface/dataset-viewer/issues/2207
closed
[ "bug", "question", "P2" ]
2023-12-14T13:01:53Z
2024-02-06T16:03:10Z
null
severo
huggingface/huggingface_hub
1,907
How to fix "VBox(children=(HTML(value='<center> <img..." error? When trying login()
### Describe the bug Hello. I am doing like below but it doesn't show enter token panel as supposed to be What could be the reason? ![image](https://github.com/huggingface/huggingface_hub/assets/19240467/d9346706-78f1-47e9-8303-fc108b5aa8e9) Pip freeze is as below ``` alembic @ file:///home/conda/feedstock_root/build_artifacts/alembic_1701459233889/work anyio @ file:///home/conda/feedstock_root/build_artifacts/anyio_1700835416766/work archspec @ file:///home/conda/feedstock_root/build_artifacts/archspec_1699370045702/work argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1692818318753/work argon2-cffi-bindings @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi-bindings_1695386553988/work arrow @ file:///home/conda/feedstock_root/build_artifacts/arrow_1696128962909/work asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1698341106958/work async-generator==1.10 async-lru @ file:///home/conda/feedstock_root/build_artifacts/async-lru_1690563019058/work attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1683424013410/work Babel @ file:///home/conda/feedstock_root/build_artifacts/babel_1698174530262/work beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1680888073205/work bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1696630167146/work blinker @ file:///home/conda/feedstock_root/build_artifacts/blinker_1698890160476/work boltons @ file:///home/conda/feedstock_root/build_artifacts/boltons_1677499911949/work Brotli @ file:///home/conda/feedstock_root/build_artifacts/brotli-split_1695989787169/work cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1700303426725/work/certifi certipy==0.1.3 cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1696001724357/work charset-normalizer @ 
file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1698833585322/work colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1691044910542/work conda @ file:///home/conda/feedstock_root/build_artifacts/conda_1699392346065/work conda-libmamba-solver @ file:///home/conda/feedstock_root/build_artifacts/conda-libmamba-solver_1700148543755/work/src conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1691048088238/work conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1691009212940/work cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography-split_1701563208210/work debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1695534290440/work decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1700579780973/work executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1698579936712/work fastjsonschema @ file:///home/conda/feedstock_root/build_artifacts/python-fastjsonschema_1700055509243/work/dist filelock==3.13.1 fqdn @ file:///home/conda/feedstock_root/build_artifacts/fqdn_1638810296540/work/dist fsspec==2023.12.2 greenlet @ file:///home/conda/feedstock_root/build_artifacts/greenlet_1698243379066/work huggingface-hub==0.19.4 idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1701026962277/work importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1701632192416/work importlib-resources @ 
file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1699364556997/work ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1698244021190/work ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1701703101339/work ipython-genutils==0.2.0 ipywidgets==8.1.1 isoduration @ file:///home/conda/feedstock_root/build_artifacts/isoduration_1638811571363/work/dist jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1696326070614/work Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work json5 @ file:///home/conda/feedstock_root/build_artifacts/json5_1688248289187/work jsonpatch @ file:///home/conda/feedstock_root/build_artifacts/jsonpatch_1695536281965/work jsonpointer @ file:///home/conda/feedstock_root/build_artifacts/jsonpointer_1695397236330/work jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-meta_1700159890288/work jsonschema-specifications @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-specifications_1701365715051/w
https://github.com/huggingface/huggingface_hub/issues/1907
closed
[ "bug" ]
2023-12-14T11:45:44Z
2025-03-15T08:03:44Z
null
FurkanGozukara
huggingface/unity-api
17
Android support
Great repo! My question is - does it work on Android? I did some research but couldn't find much - except for some comments on [YouTube](https://www.youtube.com/watch?v=Ngmb7l7tO0I) that speech recognition doesn't really work on Android ("_when i export to an a Android Device the text always is "you", no matter what did i say. I don't know if needs another configuration because in the unity editor works fine_"). Could you please clarify? Thank you!
https://github.com/huggingface/unity-api/issues/17
open
[ "question" ]
2023-12-14T11:15:56Z
2024-01-18T10:56:45Z
null
dogadogan
huggingface/alignment-handbook
76
Can we run inference with the LoRA adapter after SFT?
I trained the model using SFT on a custom dataset with a LoRA config, which produced a LoRA adapter. Can we run inference with it as a base model plus this adapter on top, or should we merge it?
https://github.com/huggingface/alignment-handbook/issues/76
closed
[]
2023-12-14T10:55:20Z
2023-12-28T07:14:29Z
2
Tejaswi-kashyap-006
huggingface/accelerate
2,251
When a tensor is generated from some_func(A.shape) (where A is a tensor), the generated tensor is placed on the CPU, not on A's device
How can I solve this? I have tried tensor.to(A.device) and tensor.to(accelerator.device), but it doesn't seem to work.
https://github.com/huggingface/accelerate/issues/2251
closed
[]
2023-12-14T09:18:15Z
2023-12-14T14:38:17Z
null
weizhenhuan
huggingface/peft
1,265
When generating outputs, how can I get the probability of the outputs? Is there any parameter to make the model output probabilities?
### Feature request xx ### Motivation xx ### Your contribution xx
https://github.com/huggingface/peft/issues/1265
closed
[]
2023-12-14T08:05:34Z
2023-12-14T10:37:19Z
null
ShawnALiu
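In transformers (which PEFT models wrap), `generate` can return per-step scores via `output_scores=True, return_dict_in_generate=True`; token probabilities are then a softmax over each step's logits. The softmax itself is just the following arithmetic (the logits here are toy values, not real model output):

```python
import math

def softmax(logits):
    """Convert a vector of logits into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy per-step logits for a 4-token vocabulary; with a real model these would
# come from outputs.scores[step] after
#   model.generate(..., output_scores=True, return_dict_in_generate=True)
step_logits = [3.2, 1.1, 0.4, -0.5]
probs = softmax(step_logits)
top_token = max(range(len(probs)), key=probs.__getitem__)
```

Indexing `probs` at each generated token id then gives the per-token probability of the sampled sequence.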
huggingface/transformers
28,025
How to combine two pretrained model in huggingface transformers?
### Feature request
I want to combine two pretrained models (LLaMA and BERT) in a new Python class. More specifically, what I've tried is to define a new class C that inherits from LLaMA and loads BERT in C's `__init__` function. ![image](https://github.com/huggingface/transformers/assets/88258534/c5428b78-68ec-4cc2-8667-587b62853152) That way I can use `C.from_pretrained('llama_ckpt_dir')` to load the two models together: `model=C.from_pretrained('llama_ckpt_dir',low_cpu_mem_usage=True)`. After I use `c.save_pretrained()`, even though the checkpoint keeps the full structure of LLaMA and BERT, BERT's params are all randomly initialized (weights Gaussian-initialized, biases all zero). (I checked this by torch.load-ing the saved checkpoint and printing it out.) Sincerely requesting some help: what should be done?

### Motivation
Since Trainer can be passed only one model at a time, this seems a useful feature for anyone who wants to train two models together. Another difficulty is how to deal with two totally different tokenizers from BERT and LLaMA (this is not strictly required for Trainer, since the tokenizer is usually only used in data preprocessing, but I hope to fix it so that I can completely turn C into a full HF model).

### Your contribution
I'm not sure how I can help, but I can fully support anything that contributes to this issue.
https://github.com/huggingface/transformers/issues/28025
closed
[]
2023-12-14T04:45:51Z
2024-01-03T10:26:31Z
null
rangehow
huggingface/chat-ui
631
Can we add a full version number/build number on the landing page?
Can we add a full version number/build number, or similar, on the landing page, to distinguish between different installations? If you go to https://huggingface.co/chat/, it looks like this: ![image](https://github.com/huggingface/chat-ui/assets/1792727/971a2423-6e1f-4e34-944f-2b4450f0263a) If you go to https://huggingfaceh4-zephyr-chat.hf.space/, it looks like this: ![image](https://github.com/huggingface/chat-ui/assets/1792727/cb8f956e-2b12-4f8e-b948-5ac94688bb0a) So the version seems to be the same, but the buttons on the right side seem to indicate that there are differences between the versions, I would guess? (If not, is HuggingChat a custom build?)
https://github.com/huggingface/chat-ui/issues/631
open
[ "enhancement" ]
2023-12-13T10:50:19Z
2023-12-14T14:26:31Z
4
patchie
huggingface/optimum
1,592
Can optimum.bettertransformer supports LLAVA model?
### System Info
```shell
Local NVIDIA env:
(llava) xuyang@nobisuke:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
Python=3.10.4
Torch==2.0.1+cu117
```

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction (minimal, reproducible, runnable)
```
from optimum.bettertransformer import BetterTransformer
model = BetterTransformer.transform(model)
```

### Expected behavior
Recently, we sought to apply optimum.bettertransformer to LLaVA for fine-tuning. The code ran successfully and we found that memory usage decreased significantly. However, in https://huggingface.co/docs/optimum/v1.15.0/bettertransformer/overview, we found that LLaVA is not in the support list. Therefore, we want to confirm: can BetterTransformer be used for pre-training or fine-tuning LLaVA now?
https://github.com/huggingface/optimum/issues/1592
closed
[ "bug" ]
2023-12-13T09:08:35Z
2023-12-13T12:37:13Z
1
xiaovhua
huggingface/blog
1,702
How to introduce new alphabets in Whisper fine-tuning
Dear @sanchit-gandhi, I was following your tutorial, [Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper), to fine-tune Whisper with a dataset in the Amharic language. Amharic is used in Whisper training as speech-translation only, [Amharic audio -> corresponding English translation text]. Hence the Amharic alphabets are unseen in Whisper training. The dataset I am trying to fine-tune with is [Amharic audio -> corresponding text in Amharic characters]. It consists of 92.28 hours (32901 instances) for training and 9.12 hours (3139 instances) for the testing set. My data sources are: 1. https://github.com/getalp/ALFFA_PUBLIC/tree/master/ASR/AMHARIC and 2. https://www.findke.ovgu.de/findke/en/Research/Data+Sets/Amharic+Speech+Corpus.html I tried the tiny, base, and small model sizes. In my first run with whisper-small, I observed a bad performance but when tried to play around with some parameters, including the model size, I was unable to run the code even. I am not quite sure how to introduce the Amharic language characters other than giving the corresponding text as I have seen in the Hindi example. I would appreciate your comment regarding the language whose characters were not seen in the Whisper training because it was treated as a speech translation only. Thank you!
https://github.com/huggingface/blog/issues/1702
open
[]
2023-12-13T02:47:31Z
2024-10-02T02:16:12Z
null
mequanent
huggingface/chat-ui
629
Unable to use Azure AD for OpenID signin
Azure AD does not return the `picture` claim for the `profile` scope which results in a Zod validation error and authentication failing with `HTTP 500`: ``` chat-ui-chat-ui-1 | 21:07:21 28|index | ZodError: [ chat-ui-chat-ui-1 | 21:07:21 28|index | { chat-ui-chat-ui-1 | 21:07:21 28|index | "code": "invalid_type", chat-ui-chat-ui-1 | 21:07:21 28|index | "expected": "string", chat-ui-chat-ui-1 | 21:07:21 28|index | "received": "undefined", chat-ui-chat-ui-1 | 21:07:21 28|index | "path": [ chat-ui-chat-ui-1 | 21:07:21 28|index | "picture" chat-ui-chat-ui-1 | 21:07:21 28|index | ], chat-ui-chat-ui-1 | 21:07:21 28|index | "message": "Required" chat-ui-chat-ui-1 | 21:07:21 28|index | } chat-ui-chat-ui-1 | 21:07:21 28|index | ] chat-ui-chat-ui-1 | 21:07:21 28|index | at get error [as error] (file:///app/node_modules/zod/lib/index.mjs:538:31) chat-ui-chat-ui-1 | 21:07:21 28|index | at ZodEffects.parse (file:///app/node_modules/zod/lib/index.mjs:638:22) chat-ui-chat-ui-1 | 21:07:21 28|index | at updateUser (file:///app/build/server/chunks/7-74fde01e.js:34:6) chat-ui-chat-ui-1 | 21:07:21 28|index | at load (file:///app/build/server/chunks/7-74fde01e.js:126:9) chat-ui-chat-ui-1 | 21:07:21 28|index | at process.processTicksAndRejections (node:internal/process/task_queues:95:5) chat-ui-chat-ui-1 | 21:07:21 28|index | at async load_server_data (file:///app/build/server/index.js:1932:18) chat-ui-chat-ui-1 | 21:07:21 28|index | at async file:///app/build/server/index.js:3303:18 { chat-ui-chat-ui-1 | 21:07:21 28|index | issues: [ chat-ui-chat-ui-1 | 21:07:21 28|index | { chat-ui-chat-ui-1 | 21:07:21 28|index | code: 'invalid_type', chat-ui-chat-ui-1 | 21:07:21 28|index | expected: 'string', chat-ui-chat-ui-1 | 21:07:21 28|index | received: 'undefined', chat-ui-chat-ui-1 | 21:07:21 28|index | path: [Array], chat-ui-chat-ui-1 | 21:07:21 28|index | message: 'Required' chat-ui-chat-ui-1 | 21:07:21 28|index | } chat-ui-chat-ui-1 | 21:07:21 28|index | ], chat-ui-chat-ui-1 | 21:07:21 
28|index | addIssue: [Function (anonymous)], chat-ui-chat-ui-1 | 21:07:21 28|index | addIssues: [Function (anonymous)], chat-ui-chat-ui-1 | 21:07:21 28|index | errors: [ chat-ui-chat-ui-1 | 21:07:21 28|index | { chat-ui-chat-ui-1 | 21:07:21 28|index | code: 'invalid_type', chat-ui-chat-ui-1 | 21:07:21 28|index | expected: 'string', chat-ui-chat-ui-1 | 21:07:21 28|index | received: 'undefined', chat-ui-chat-ui-1 | 21:07:21 28|index | path: [Array], chat-ui-chat-ui-1 | 21:07:21 28|index | message: 'Required' chat-ui-chat-ui-1 | 21:07:21 28|index | } chat-ui-chat-ui-1 | 21:07:21 28|index | ] chat-ui-chat-ui-1 | 21:07:21 28|index | } ```
https://github.com/huggingface/chat-ui/issues/629
closed
[ "support" ]
2023-12-12T21:22:19Z
2024-02-19T09:39:51Z
8
zacps
huggingface/chat-ui
628
isModelsModalOpen is not defined in ChatIntroduction.svelte probably after recent update ?
Hi getting this error after updating to the latest version : Am Running : { 'chat-ui': '0.6.0', npm: '10.2.4', node: '21.3.0', acorn: '8.11.2', ada: '2.7.4', ares: '1.20.1', base64: '0.5.1', brotli: '1.0.9', cjs_module_lexer: '1.2.2', cldr: '44.0', icu: '74.1', llhttp: '9.1.3', modules: '120', napi: '9', nghttp2: '1.58.0', nghttp3: '0.7.0', ngtcp2: '0.8.1', openssl: '3.0.12+quic', simdutf: '4.0.4', tz: '2023c', undici: '5.27.2', unicode: '15.1', uv: '1.46.0', uvwasi: '0.0.19', v8: '11.8.172.17-node.17', zlib: '1.2.13.1-motley-5daffc7' } ``` > chat-ui@0.6.0 dev > vite dev VITE v4.3.9 ready in 1206 ms ➜ Local: http://localhost:5173/ ➜ Network: use --host to expose (node:1526125) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead. (Use `node --trace-deprecation ...` to show where the warning was created) 12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:53:7 'isModelsModalOpen' is not defined 12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:54:53 'isModelsModalOpen' is not defined 12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:64:22 'isModelsModalOpen' is not defined ReferenceError: isModelsModalOpen is not defined at /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:61:8 at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16) at eval (/home/user/public_html/chatui3/src/lib/components/chat/ChatMessages.svelte:75:99) at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16) at eval (/home/user/public_html/chatui3/src/lib/components/chat/ChatWindow.svelte:116:102) at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16) at 
/home/user/public_html/chatui3/src/routes/+page.svelte:57:25 at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16) at Object.default (/home/user/public_html/chatui3/.svelte-kit/generated/root.svelte:50:42) at eval (/home/user/public_html/chatui3/src/routes/+layout.svelte:203:39) ```
https://github.com/huggingface/chat-ui/issues/628
closed
[ "support" ]
2023-12-12T18:49:31Z
2023-12-24T07:40:42Z
7
DrShivang
huggingface/autotrain-advanced
389
How to disable the default `--multi_gpu`?
File "/app/env/lib/python3.10/site-packages/accelerate/commands/launch.py", line 822, in _validate_launch_command raise ValueError("You need to use at least 2 processes to use `--multi_gpu`.") ValueError: You need to use at least 2 processes to use `--multi_gpu`. How can I disable this in the default provided params? Can AutoTrain be used with the free CPU version? Thank you
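A sketch of what a single-process accelerate config might look like; the field names are assumed to follow accelerate's config format (they are not taken from AutoTrain itself). With `distributed_type: "NO"` and one process, `--multi_gpu` is never triggered, and `use_cpu: true` would run on a CPU-only machine:

```yaml
# Hypothetical single-process config (assumed accelerate field names).
compute_environment: LOCAL_MACHINE
distributed_type: "NO"   # do not launch with --multi_gpu
num_processes: 1
use_cpu: true            # run on CPU only
```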
https://github.com/huggingface/autotrain-advanced/issues/389
closed
[]
2023-12-12T13:32:03Z
2023-12-15T09:21:52Z
null
FiveTechSoft
huggingface/chat-ui
627
RLHF data collection feature
Is it possible to add a way to generate multiple drafts for a given input, and then, based on what the user picks, save that data so that it can be used for RLHF?
https://github.com/huggingface/chat-ui/issues/627
open
[ "enhancement", "front", "back" ]
2023-12-12T13:29:06Z
2023-12-14T08:53:14Z
0
nivibilla
huggingface/transformers
27,974
how to replace the existing token in a tokenizer
### Feature request I have a tokenizer which has lots of reserved tokens like below: ``` '<reserved_7>': 100, '<reserved_8>': 101, '<reserved_9>': 102, '<reserved_10>': 103, '<reserved_11>': 104, '<reserved_12>': 105, '<reserved_13>': 106, '<reserved_14>': 107, ``` I want to replace '<reserved_7>' with '<|im_start|>' and '<reserved_8>' with '<|im_end|>'. What I want to get is a tokenizer which can act as below: tokenizer.encode('<|im_start|>') => 100 ### Motivation I want to replace '<reserved_7>' with '<|im_start|>' and '<reserved_8>' with '<|im_end|>' ### Your contribution no
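At the vocabulary level, the renaming amounts to moving an id from one token string to another. A minimal sketch with a plain dict standing in for the tokenizer's token→id mapping (real tokenizers also store this mapping inside `tokenizer.json`, so the same swap would need to be applied there and the file reloaded):

```python
# Hypothetical vocab excerpt mirroring the reserved tokens above.
vocab = {"<reserved_7>": 100, "<reserved_8>": 101}

def replace_token(vocab: dict, old: str, new: str) -> dict:
    """Return a copy of `vocab` where `new` inherits the id of `old`."""
    vocab = dict(vocab)
    vocab[new] = vocab.pop(old)
    return vocab

vocab = replace_token(vocab, "<reserved_7>", "<|im_start|>")
vocab = replace_token(vocab, "<reserved_8>", "<|im_end|>")
print(vocab["<|im_start|>"])  # 100
```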
https://github.com/huggingface/transformers/issues/27974
closed
[]
2023-12-12T12:59:53Z
2025-05-05T19:18:29Z
null
muziyongshixin
huggingface/chat-ui
623
ChatUI with Docker - Permissions Issue
I'm trying to use the ChatUI space with Docker. I have a private, custom model which I've trained. I want to access it in a private space using Docker ChatUI. I seem to be running into permissions errors. Things I've tried: Following the instructions set out here: https://huggingface.co/blog/Llama2-for-non-engineers (I used Llama2 with a custom dataset) Creating it with / without the MongoDB URI Adding an existing secret as the HF_TOKEN Creating a new "HUGGING_FACE_HUB_TOKEN" in my settings and in the new space and using that Adding the new token as a secret in the space where the model was generated Hardcoding the access token in .env.local.template to see if it gives a temp fix (it didn't) Does it matter if I don't have a centralised secret that is explicitly named "HF_TOKEN"? Error: huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6576f9fe-00986ef531649f933739e793;0d286b3c-5e65-45c1-a1f9-7efea56654dd) Error: DownloadError Repository Not Found for url: https://huggingface.co/api/models/<USERNAME>/<MODELNAME>. Please make sure you specified the correct repo_id and repo_type. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password. 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused Warning: Transient problem: connection refused Will retry in 10 seconds. 59 Warning: retries left.
https://github.com/huggingface/chat-ui/issues/623
open
[ "support" ]
2023-12-12T08:10:31Z
2023-12-28T13:58:22Z
1
aidansys17
huggingface/text-generation-inference
1,332
How can I set log output to a local file
### Feature request I want to send the TGI log to a file instead of stdout. ### Motivation I want to send the TGI log to a file instead of stdout. ### Your contribution How can I use command params or env variables to set log output to a file?
https://github.com/huggingface/text-generation-inference/issues/1332
closed
[ "Stale" ]
2023-12-12T07:54:26Z
2024-01-18T01:46:56Z
null
soulseen
huggingface/alignment-handbook
74
A question about the SFTTrainer (also a theoretical question about SFT in general)
I have a general question about Supervised Fine Tuning (SFT) for Dialogue applications. Should the SFT process use the same LM objective (next-token prediction) that is used in pre-training a language model? The "Dialogue" task is predicting "assistant" tokens, right? Shouldn't the objective be predicting only those tokens? Is one way to do this is to set labels for only assistant tokens and ignore the labels on others? The SFTTrainer [implementation](https://github.com/huggingface/trl/blob/main/trl/trainer/sft_trainer.py#L381) does not set labels - as far as I understand, this leads to "labels" being cloned to "input_ids" and shifted right (within transformers code) leading to using "next-token" prediction objective. More on a philosophical note - if using the same objective as pre-training for SFT, why shouldn't that be called "Fine Tuning" the model (On a dialogue dataset of course) rather than "Supervised Fine Tuning". What am I missing? Is there a reference paper that explains this well? The right approach to do SFT for Dialogue applications? It is not obvious hence the question. For example, the [InstructGPT](https://arxiv.org/abs/2203.02155) paper mentions SFT but mainly redirects to the (seemingly) first attempt at SFT in [this](https://arxiv.org/pdf/2109.10862.pdf) paper which talks about a "Summarization" task but not a "Dialogue" task. In that paper, when human labelers are asked to summarize and then when the paper mentions "Behavioral Cloning" is used to finetune the LLM to adapt to this task, I'd imagine that only "Summary" section is considered label but not the entire prompt/document. Following that principle, for "Dialogue" tasks, intuitively, I'd imagine that only "assistant" turns should be part of labels. 
(By the way, I already asked [this](https://github.com/huggingface/trl/issues/1083) in the trl repository as well, but I'm not sure which is the best repository to ask the question; this repository is for alignment tasks in which SFT is a step, hence posted here too.)
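The masking idea raised above — train only on assistant tokens — can be sketched with plain lists; `-100` is assumed as the ignore index, as in PyTorch's cross-entropy loss:

```python
IGNORE_INDEX = -100  # positions with this label contribute no loss

def mask_non_assistant(input_ids, assistant_mask):
    """Build labels that supervise only assistant-turn tokens."""
    return [tok if is_asst else IGNORE_INDEX
            for tok, is_asst in zip(input_ids, assistant_mask)]

# Toy sequence: user turn (first 3 tokens), assistant turn (last 3).
input_ids = [5, 6, 7, 8, 9, 10]
assistant_mask = [False, False, False, True, True, True]
labels = mask_non_assistant(input_ids, assistant_mask)
print(labels)  # [-100, -100, -100, 8, 9, 10]
```

trl later added a completion-only data collator along these lines, but the underlying label construction is the same.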
https://github.com/huggingface/alignment-handbook/issues/74
open
[]
2023-12-12T06:54:02Z
2024-01-22T14:34:15Z
3
PradeepKadubandi
huggingface/transformers.js
453
Summarization Parameters not working
### Question I've tried several of the supported summarization models with the code used in the browser extension example. The only one I get any results from in a reasonable time is t5-small. My problem with it is that despite any parameters I try to pass in, the result is always the same length. I've traced through the code and it appears that the config params get passed in. I've tried max_new_tokens, min_new_tokens, max_length, no joy. I initially started by specifying 2.5.3 and last tried just letting the CDN handle it, looks like 2.10.x, no joy, same thing. Could someone please provide me with an example of getting, in my case, the t5-small model running a summarization task whose parameters actually control the output?
https://github.com/huggingface/transformers.js/issues/453
open
[ "question" ]
2023-12-12T06:21:52Z
2023-12-19T21:52:32Z
null
kwlayman
huggingface/safetensors
400
torch.nn.Module named_parameters() seems to be failing for safetensors
### System Info safetensors==0.4.1 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Reproduction Noticed this issue with the new Mixtral model https://github.com/vllm-project/vllm/issues/2020 Is there any way to fix this with safetensors? ### Expected behavior Load the Mixtral model in safetensors format
https://github.com/huggingface/safetensors/issues/400
closed
[ "Stale" ]
2023-12-11T18:54:06Z
2024-01-17T01:48:50Z
1
0-hero
huggingface/optimum
1,583
Add support for Chatglm2 & qwen onnx models
### Feature request Need to export ChatGLM2 & Qwen models to ONNX using HF Optimum. ChatGLM2: model card -> https://huggingface.co/THUDM/chatglm2-6b Qwen: model card -> https://huggingface.co/Qwen/Qwen-7B-Chat ### Motivation I would like to make the process of exporting LLM models to ONNX simpler. There should be generic boilerplate code which can export the models to ONNX by simply passing a Hugging Face model_id. ### Your contribution I have this piece of code for the export; I'm using it to export ChatGLM2: https://gist.github.com/manishghop/9be5aee6ed3d7551c751cc5d9f7eb8c3 I use it for both ChatGLM2 & Qwen by simply updating the model_id. Is there a way to run inference with these ONNX models?
https://github.com/huggingface/optimum/issues/1583
closed
[]
2023-12-11T15:22:59Z
2024-04-24T10:21:48Z
4
manishghop
huggingface/peft
1,247
How to save parameters in prompt_encoder layers in p-tuning?
I want to resume training from a checkpoint in p-tuning, but the model only saves parameters in prompt_embeddings. <img width="370" alt="image" src="https://github.com/huggingface/peft/assets/58416622/a085224f-32f2-409c-9a51-77c7438bc6a2">
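If the checkpoint is missing the prompt-encoder weights, one workaround is to save that submodule's weights separately by filtering the state dict by key prefix. A sketch with a plain dict standing in for a real `state_dict()` (the key names here are hypothetical):

```python
# Hypothetical flat state_dict; keys mirror a prompt-encoder submodule.
state_dict = {
    "prompt_encoder.mlp.weight": [0.1],
    "prompt_encoder.mlp.bias": [0.2],
    "prompt_embeddings.weight": [0.3],
}

def extract_submodule(state_dict: dict, prefix: str) -> dict:
    """Keep only entries under `prefix`, stripping the prefix."""
    return {k[len(prefix):]: v for k, v in state_dict.items()
            if k.startswith(prefix)}

encoder_sd = extract_submodule(state_dict, "prompt_encoder.")
print(sorted(encoder_sd))  # ['mlp.bias', 'mlp.weight']
```

The extracted dict can then be saved alongside the adapter and loaded back into the submodule with `load_state_dict` when resuming.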
https://github.com/huggingface/peft/issues/1247
closed
[]
2023-12-11T02:44:59Z
2024-01-19T15:03:32Z
null
lyt719
huggingface/optimum-benchmark
102
How to evaluate a model that already exists locally and hasn't been uploaded yet, "model=?"
![微信截图_20231211144439](https://github.com/huggingface/optimum-benchmark/assets/89191003/51008a5a-ddf0-420e-a355-d9170ffb7dd6) I really want to know how to load the Qwen model; thank you very much
https://github.com/huggingface/optimum-benchmark/issues/102
closed
[]
2023-12-10T08:35:59Z
2024-01-11T08:18:17Z
null
WCSY-YG
huggingface/transformers
27,928
[Question] What is the main difference between "AutoModelForCausalLM" and "PeftModelForCausalLM"?
I also wrote it down in peft repo. However this issue is also related to transformers. So i write my question here again. issue is here in peft(https://github.com/huggingface/peft/issues/1245) Hello, Sorry for naive question. I noticed that the``model.generate()`` function performed differently when inferrence right after train with ```trainer.model``` and after merge and unload. (Every params are the same.) So I checked two different object with simple print function. Difference was the object that contains model. 1. ```model = trainer.model``` ``` PeftModelForCausalLM( (base_model): LoraModel( (model): LlamaForCausalLM( (model): LlamaModel( (embed_tokens): ModulesToSaveWrapper( (original_module): Embedding(32008, 5120) (modules_to_save): ModuleDict( (default): Embedding(32008, 5120) ) ) (layers): ModuleList( (0-39): 40 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (k_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (v_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) 
(lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (o_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=13824, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False) ) (up_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=13824, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(i
https://github.com/huggingface/transformers/issues/27928
closed
[]
2023-12-10T03:10:36Z
2024-02-01T00:49:07Z
null
daehuikim
huggingface/peft
1,245
[Question] What is the main difference between "AutoModelForCausalLM" and "PeftModelForCausalLM"?
Because This is is related to "transformers". Therefore I wrote this question in transformers repo either. issue is here in transformers(https://github.com/huggingface/transformers/issues/27928) Hello, Sorry for naive question. I noticed that the``model.generate()`` function performed differently when inferrence right after train with ```trainer.model``` and after merge and unload. (Every params are the same.) So I checked two different object with simple print function. Difference was the object that contains model. 1. ```model = trainer.model``` ``` PeftModelForCausalLM( (base_model): LoraModel( (model): LlamaForCausalLM( (model): LlamaModel( (embed_tokens): ModulesToSaveWrapper( (original_module): Embedding(32008, 5120) (modules_to_save): ModuleDict( (default): Embedding(32008, 5120) ) ) (layers): ModuleList( (0-39): 40 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (k_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (v_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) 
(lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (o_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=5120, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False) ) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=13824, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False) ) (up_proj): Linear4bit( (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=5120, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=13824, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() (base_layer): Linear4bit
https://github.com/huggingface/peft/issues/1245
closed
[]
2023-12-10T03:08:54Z
2023-12-11T11:15:25Z
null
daehuikim
huggingface/diffusers
6,113
How to use the models from sd_control_collection hf repo in diffusers
How to load/convert the models at https://huggingface.co/lllyasviel/sd_control_collection/tree/main with diffusers? ``` >>> pipe = diffusers.StableDiffusionPipeline.from_single_file("diffusers_xl_canny_full.safetensors") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 261, in from_single_file pipe = download_from_original_stable_diffusion_ckpt( File "/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1436, in download_from_original_stable_diffusion_ckpt converted_unet_checkpoint = convert_ldm_unet_checkpoint( File "/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 426, in convert_ldm_unet_checkpoint new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"] KeyError: 'time_embed.0.weight' ``` Also not able to convert it via hf script: https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_controlnet_to_diffusers.py We are able to run it through https://github.com/AUTOMATIC1111 webui. How can it be used with diffusers?
https://github.com/huggingface/diffusers/issues/6113
closed
[]
2023-12-09T14:11:26Z
2024-06-11T18:22:03Z
null
anilsathyan7
huggingface/tokenizers
1,410
How to create Tokenizer.json?
I have this tokenizer and I want to convert it to **tokenizer.json** format. - added_tokens.json - normalizer.json - special_tokens_map.json - config.json - preprocessor_config.json - vocab.json - merges.txt - pytorch_model.bin Is it possible to replace my tokenizer data with the original **tokenizer.json**? ``` import json j = open('hf/tokenizer.json') data = json.load(j) with open('medium-tokenizer/merges.txt') as f: merges = [line.rstrip('\n') for line in f][1:] j = open('medium-tokenizer/vocab.json') vocab = json.load(j) j = open('medium-tokenizer/added_tokens.json') added_tokens = json.load(j) j = open('medium-tokenizer/normalizer.json') normalizer = json.load(j) data['added_tokens'] = added_tokens data['normalizer'] = normalizer data['model']['vocab'] = vocab data['model']['merges'] = merges with open("tokenizer.json", "w") as outfile: json.dump(data, outfile) ```
https://github.com/huggingface/tokenizers/issues/1410
closed
[ "Stale" ]
2023-12-08T09:41:18Z
2024-01-14T01:52:39Z
null
kenaii
huggingface/optimum
1,577
Support the ORT of the Stable Diffusion XL inpaint model
### Feature request Hi all. We would like to convert the stable-diffusion-xl-inpaint model below to ONNX and run it using ORT. The conversion to ONNX went well using Optimum's cli, but there doesn't seem to be a Python class for ORT inference. https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1 Is there a way to perform inference on this model with the optimum package? If not, do you have any plans to provide support? Thank you ### Motivation To run sd-xl inpaint model with ORT ### Your contribution I can submit a PR for you if I have something to help
https://github.com/huggingface/optimum/issues/1577
closed
[ "feature-request", "Stale" ]
2023-12-08T09:21:06Z
2025-02-19T02:02:54Z
2
0-chan-kor
huggingface/chat-ui
617
Does Chat-UI support multithreading?
Maybe it depends on node.js, but I want to know the CPU utilization.
https://github.com/huggingface/chat-ui/issues/617
closed
[ "question" ]
2023-12-08T05:36:18Z
2023-12-14T07:30:01Z
null
calycekr
huggingface/chat-ui
615
npm run error (latest git pull)
I created a .env.local as: ``` MONGODB_URL=mongodb://localhost:27017 MONGODB_DB_NAME=chat-ui MONGODB_DIRECT_CONNECTION=false COOKIE_NAME=hf-chat HF_TOKEN= HF_API_ROOT=https://api-inference.huggingface.co/models OPENAI_API_KEY= ``` Then I tried: ``` npm install #everything went fine npm run dev -- --host 0.0.0.0 ``` but I got the error below: ``` (node:770942) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead. (Use `node --trace-deprecation ...` to show where the warning was created) 11:47:42 AM [vite] Error when evaluating SSR module /src/lib/server/auth.ts: |- SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) 11:47:42 AM [vite] Error when evaluating SSR module /src/hooks.server.ts: failed to import "/src/lib/server/auth.ts" |- SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) SyntaxError: "undefined" is not valid JSON at JSON.parse (<anonymous>) at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14 at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9) ``` On the browser side, I have error 500 (nice picture)
https://github.com/huggingface/chat-ui/issues/615
closed
[ "support" ]
2023-12-07T10:59:53Z
2024-04-24T12:29:46Z
4
shuther
huggingface/chat-ui
614
Docker build - multiple errors - documentation
I can't find documentation to build it myself, so I tried: `docker-compose build up` But I got multiple errors, among them: > chat-ui/.env: line 23: unexpected character "\"" in variable name "\"PROVIDER_URL\": \"\"," Even `source .env` returned multiple errors; I tried to change the `"` into a `'` with no luck. My goal was to build it and include it in a docker compose setup.
https://github.com/huggingface/chat-ui/issues/614
open
[ "support" ]
2023-12-07T10:55:04Z
2024-06-01T12:44:18Z
4
shuther
huggingface/text-generation-inference
1,318
how to run tgi installed locally without any UI
### System Info how to run tgi installed locally without any UI? pip install text-generation , giving error: ERROR: No matching distribution found for text-generation ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [X] An officially supported command - [ ] My own modifications ### Reproduction pip install text-generation ### Expected behavior need some help running tgi+my model on cmdline
https://github.com/huggingface/text-generation-inference/issues/1318
closed
[ "Stale" ]
2023-12-07T08:47:13Z
2024-01-13T01:46:40Z
null
poojitharamachandra
huggingface/autotrain-advanced
376
How to AutoTrain a Seq2Seq model?
Hi everyone, I'm trying to fine-tune Helsinki-NLP/opus-mt-tc-big-ar-en on the local Arabic of Morocco, which is called Daraija Arabic. The problem is that I'm unable to use AutoTrain; I keep getting a 500 error code. ![Screenshot 2023-12-07 011848](https://github.com/huggingface/autotrain-advanced/assets/112639221/ece3ee15-9f89-44ff-bf51-c5231f1858e7) ![Screenshot 2023-12-07 011912](https://github.com/huggingface/autotrain-advanced/assets/112639221/2dea03ae-afcd-4e86-a7b3-d175ff6bc555) [output.csv](https://github.com/huggingface/autotrain-advanced/files/13593069/output.csv) FYI: I didn't modify the Training Parameters in the "find params to copy-paste [here]" area, so I don't know if that's necessary
https://github.com/huggingface/autotrain-advanced/issues/376
closed
[]
2023-12-07T00:22:46Z
2023-12-08T17:27:57Z
null
Lachkar-Ahmed-Salim
huggingface/autotrain-advanced
375
How to do a Seq2Seq Autotrain ?
https://github.com/huggingface/autotrain-advanced/issues/375
closed
[]
2023-12-07T00:10:33Z
2023-12-11T09:41:24Z
null
Lachkar-Ahmed-Salim
huggingface/alignment-handbook
68
DPO alignment doesn't work on Lora models as suggested
You claim that "[In practice, we find comparable performance for both full and LoRA fine-tuning, with the latter having the advantage of producing small adapter weights that are fast to upload and download from the Hugging Face Hub.](https://github.com/huggingface/alignment-handbook/tree/main/scripts#:~:text=In%20practice%2C%20we%20find%20comparable%20performance%20for%20both%20full%20and%20LoRA%20fine%2Dtuning%2C%20with%20the%20latter%20having%20the%20advantage%20of%20producing%20small%20adapter%20weights%20that%20are%20fast%20to%20upload%20and%20download%20from%20the%20Hugging%20Face%20Hub.)" However, when I try the Lora model DPO-aligned LLM that you have trained, [alignment-handbook/zephyr-7b-dpo-lora](https://huggingface.co/alignment-handbook/zephyr-7b-dpo-lora), I experience a total performance degradation. Here is an example of model output that seems confused: ![image](https://github.com/huggingface/alignment-handbook/assets/3280518/1c5eae99-9641-469a-bb73-b66a26a594d4) Even the training loss indicates that the model has not learned much <img width="773" alt="image" src="https://github.com/huggingface/alignment-handbook/assets/3280518/550451f4-4afb-470c-ace7-71b332bb5087"> Here is the training loss for the full model DPO alignment. ![image](https://github.com/huggingface/alignment-handbook/assets/3280518/902aaf32-0446-4ab1-8e38-28afcd456fed) Would you please do a clarification? Is my observation different from what you have experienced? Thanks
https://github.com/huggingface/alignment-handbook/issues/68
open
[]
2023-12-06T19:12:30Z
2023-12-07T09:43:32Z
1
Abe13
huggingface/alignment-handbook
66
How to specify another GPU to run rather than cuda:0?
I tried to modify the --gpu_ids parameter in recipes/accelerate_configs/multi_gpu.yaml; however, it didn't work, and the device was still 'cuda:0'.
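A common workaround (a sketch, not specific to accelerate's config file): restrict which devices are visible via `CUDA_VISIBLE_DEVICES` before any CUDA library initializes; the chosen physical GPU then shows up as `cuda:0` inside the process.

```python
import os

# Hide all GPUs except physical GPU 1; inside this process it will be
# reported as cuda:0. This must run before torch/accelerate touch CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch
# torch.cuda.current_device()  # would now refer to the remapped device

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

The same effect can be had from the shell by prefixing the launch command with `CUDA_VISIBLE_DEVICES=1`.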
https://github.com/huggingface/alignment-handbook/issues/66
closed
[]
2023-12-06T10:48:25Z
2023-12-06T11:13:02Z
null
njupopsicle
huggingface/datasets
6,478
How to load data from lakefs
My dataset is stored on the company's lakeFS server. How can I write code to load the dataset? It would be great if you could provide code examples or some references.
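One possible approach, since lakeFS exposes an S3-compatible gateway: read it through `datasets`' fsspec/s3fs support via `storage_options`. The endpoint URL, repository name, branch, and keys below are all placeholders for your own deployment.

```python
# Placeholders throughout: endpoint, repo, branch, and credentials must come
# from your lakeFS deployment.
storage_options = {
    "key": "<LAKEFS_ACCESS_KEY>",
    "secret": "<LAKEFS_SECRET_KEY>",
    "client_kwargs": {"endpoint_url": "https://lakefs.example.com"},
}

# With `datasets` and `s3fs` installed, lakeFS paths follow
# s3://<repo>/<branch>/<path>:
# from datasets import load_dataset
# ds = load_dataset(
#     "csv",
#     data_files="s3://my-repo/main/data/train.csv",
#     storage_options=storage_options,
# )

print(storage_options["client_kwargs"]["endpoint_url"])
```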
https://github.com/huggingface/datasets/issues/6478
closed
[]
2023-12-06T09:04:11Z
2024-07-03T19:13:57Z
null
d710055071
huggingface/tokenizers
1,407
How to add byte_fallback tokens?
# Alternative title
How to make a tokenizer behave similarly to Llama's

## Background
The Llama tokenizer considers byte_fallback tokens **not special**. When it decodes, it doesn't remove these tokens, only the special tokens (unk, pad, bos, eos).

## What I am trying to do
I'm trying to create a tokenizer behaving like Llama's. However, I **am only able** to add byte_fallback tokens as **special tokens**.

```python
from tokenizers import Tokenizer
from tokenizers import decoders, pre_tokenizers
from tokenizers.models import BPE
from tokenizers.processors import TemplateProcessing
from tokenizers.trainers import BpeTrainer
from tokenizers import AddedToken
from datasets import load_dataset

dataset = load_dataset("tapaco")

def topaco_generator():
    for i in dataset['train']:
        yield i['paraphrase']

bpe_trainer = BpeTrainer(
    special_tokens=["<unk>", "<s>", "</s>", "<pad>"]
    + [f"<0x{i:02X}>" for i in range(256)]  # byte_fallback tokens
)

tokenizer = Tokenizer(BPE(byte_fallback=True))
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [pre_tokenizers.Metaspace(), pre_tokenizers.Digits(individual_digits=True)]
)
tokenizer.enable_padding(pad_id=3, pad_token="<pad>")
tokenizer.post_processor = TemplateProcessing(
    single="<s> $A </s>",
    pair="<s> $A </s> $B </s>",
    special_tokens=[
        ("<s>", 1),
        ("</s>", 2),
    ],
)
tokenizer.decoder = decoders.Sequence(
    [
        decoders.Metaspace(),
        decoders.ByteFallback(),
    ]
)

# my attempt to add byte_fallback as non-special tokens
# tokenizer.add_tokens([AddedToken(content=f"<0x{i:02X}>", special=True, normalized=False) for i in range(256)])

tokenizer.train_from_iterator(topaco_generator(), trainer=bpe_trainer)
tokenizer.save("topaco_tokenizer.json")

tokenizer = Tokenizer.from_file("topaco_tokenizer.json")
text = "I love you more than I can say 🤗"
encoded_text = tokenizer.encode(text)
print(encoded_text.tokens)

# My workaround to preserve byte_fallback tokens
# and remove other special tokens
decoded_text = tokenizer.decode(encoded_text.ids, skip_special_tokens=False)
print(decoded_text.removeprefix('<s> ').removesuffix('</s>'))
```

## Problem
No matter where I place the line `tokenizer.add_tokens([AddedToken(content=f"<0x{i:02X}>", special=True, normalized=False) for i in range(256)])` in my code (before training, after training), and whichever parameters of `AddedToken` I use, I still cannot achieve Llama's behavior.
https://github.com/huggingface/tokenizers/issues/1407
open
[ "bytefallback", "Feature Request" ]
2023-12-06T09:03:35Z
2024-08-27T01:57:04Z
null
dinhanhx
huggingface/transformers.js
432
Cannot download the model from huggingface
Because of network issues, we cannot download the model successfully when using transformers.js. How can we set a network proxy for the model download?
https://github.com/huggingface/transformers.js/issues/432
open
[ "question" ]
2023-12-06T08:18:58Z
2023-12-10T13:42:50Z
null
wujohns
huggingface/blog
1,677
how to achieve image-text matching of BLIP2
Hi, thanks to the authors for their work. I am trying to do image-text matching with BLIP-2, but I didn't find any examples of that. Can you give me some help or tips?
https://github.com/huggingface/blog/issues/1677
open
[]
2023-12-06T07:03:21Z
2023-12-06T07:08:48Z
null
wkqun555
huggingface/diffusers
6,070
How to overload existing class in diffusers
That's just for personal development. I want to write a new class inheriting from an existing class (e.g. `ControlNetModel`), and I added some new parameters to its `__init__` function, but I found that the `__init__` used is still the parent's implementation, whether or not I add the `register_to_config` decorator. I'd appreciate some advice.
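The behavior described usually comes from how the config machinery records `__init__` arguments. A self-contained sketch of the mechanism follows; the `register_to_config` here is a simplified stand-in for diffusers' decorator (not its actual implementation), and `Parent`, `Child`, and `extra_channels` are hypothetical names. The point it illustrates: the decorator inspects the signature of the *decorated* `__init__`, so a subclass adding parameters must decorate its own `__init__` too.

```python
import functools
import inspect

def register_to_config(init):
    # Simplified stand-in for diffusers.configuration_utils.register_to_config:
    # it captures the decorated __init__'s (kw)args into self.config.
    @functools.wraps(init)
    def wrapper(self, *args, **kwargs):
        bound = inspect.signature(init).bind(self, *args, **kwargs)
        bound.apply_defaults()
        params = {k: v for k, v in bound.arguments.items() if k != "self"}
        init(self, *args, **kwargs)
        self.config = params  # the outermost (subclass) decorator wins
    return wrapper

class Parent:  # plays the role of ControlNetModel
    @register_to_config
    def __init__(self, channels=3):
        self.channels = channels

class Child(Parent):  # hypothetical subclass with a new parameter
    @register_to_config  # without this, config would only see Parent's args
    def __init__(self, channels=3, extra_channels=4):
        super().__init__(channels=channels)
        self.extra_channels = extra_channels

c = Child(extra_channels=8)
print(c.config)
```

Removing the decorator from `Child.__init__` makes `config` fall back to only the parent's registered arguments, which matches the symptom described.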
https://github.com/huggingface/diffusers/issues/6070
closed
[]
2023-12-06T06:41:44Z
2024-09-25T14:44:04Z
null
OrangeSodahub
huggingface/diffusers
6,067
How to run the fine_tuned model?
Hi all, I used the instructions given [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) to fine-tune the model on dog pictures (as explained in the link). The fine-tuning has finished, and a folder called path-to-save-model has been created (containing the model weights). Now how do I use this output? Do I run test_dreambooth.py? (I tried running it, but it errors at `from test_examples_utils import ExamplesTestsAccelerate, run_command  # noqa: E402`.) I would appreciate it if someone could let me know how to use the output of the trained model. Thank you
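Note that test_dreambooth.py belongs to the repo's test suite, not to inference. A hedged sketch of the usual inference path: the saved folder loads like any diffusers pipeline. The prompt and output file name below are illustrative (the `sks` token matches the DreamBooth example's instance prompt), and the imports are deferred so the sketch parses without torch/diffusers installed.

```python
from pathlib import Path

MODEL_DIR = "path-to-save-model"  # the output folder written by training

def generate(prompt: str, out_file: str = "dog.png") -> None:
    # Deferred imports: this sketch only needs them when a GPU box
    # with the trained folder is available.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_DIR, torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt).images[0]
    image.save(out_file)

# Only runs when the trained folder is actually present:
if Path(MODEL_DIR).exists():
    generate("a photo of sks dog in a bucket")
```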
https://github.com/huggingface/diffusers/issues/6067
closed
[]
2023-12-06T01:01:56Z
2025-04-28T10:32:33Z
null
alireza18878
huggingface/text-generation-inference
1,314
What is the default tokenizer behaviour?
### System Info
N/A

### Information
- [ ] Docker
- [X] The CLI directly

### Tasks
- [X] An officially supported command
- [ ] My own modifications

### Reproduction
I'm trying to understand whether special tokens (i.e. BOS and EOS) are added and suppressed on tokenization and decoding.

Encoding:
- I searched for `add_special_tokens` in the repo and I don't see anywhere this is being set to true when tokenizing. So, it seems that there are no EOS tokens automatically added.

Decoding:
- I searched for `skip_special_tokens` and it seems [here](https://github.com/huggingface/text-generation-inference/blob/3238c49121b02432bf2938c6ebfd44f06c5adc2f/server/text_generation_server/models/causal_lm.py#L525) on line 541 that indeed BOS and EOS are being suppressed.

Is this understanding correct?

### Expected behavior
If possible, could the default tokenization strategy be described in the README so users know what to expect?
https://github.com/huggingface/text-generation-inference/issues/1314
closed
[]
2023-12-05T17:35:05Z
2024-01-19T13:14:13Z
null
RonanKMcGovern
huggingface/chat-ui
609
[Feature Request] Uploading PDFS/Text Files/Images?
I love the search function; it makes the chat feel so much more accurate! I use it mainly as a direct ChatGPT replacement, using code models when needed or normal models for chat. Can we have the option to upload images/PDFs/other files to the chat? Images could be integrated via CLIP/BLIP, and PDF or text files could just be added to the context, or summarized and then added. It would be awesome to have! Thank you for all the work put into this project.
https://github.com/huggingface/chat-ui/issues/609
open
[]
2023-12-05T12:20:39Z
2024-10-04T01:13:18Z
3
iChristGit
huggingface/trl
1,059
How can I have the evaluation pass in only the response to a prompted/instructed generation into the metric.
I have created the following metric:

```py
class MyCustomMetric(Metric):
    def _info(self):
        # Returns the MetricInfo that defines the name, description, etc.
        return datasets.MetricInfo(
            # This should be a short description of your metric.
            description="_DESCRIPTION",
            # You can cite papers, GitHub repositories, etc.
            citation="_CITATION",
            # The inputs and outputs your metric expects.
            # These are used to validate the inputs and outputs of _compute
            inputs_description="_KWARGS_DESCRIPTION",
            features=datasets.Features({
                'predictions': datasets.Value('string'),
                'references': datasets.Value('string')
            })
        )

    def _compute(self, predictions, references):
        # Here is where you should put your main metric computation logic
        # Adapt your existing code to fit in here
        fc_results = []
        for idx, example in enumerate(predictions):
            print(f"Example {idx}: ", end="")
            post_message = ""
            # Custom Function Calling metric
            prompts = None
            try:
                generated_arguments, expected_arguments, prompts = json_arguments_from_prompt(
                    references[idx],
                    predictions[idx],
                    INSTRUCTION
                    # {"idx": idx, "epoch": epoch}
                )
                fc_result = fc_metric.run(generated_arguments, expected_arguments)
                fc_results.append(fc_result)
                # if save_prompts_path:
                #     # add prompts to dpo_data.json
                #     dpo_data.append({
                #         "fc_result": fc_result,
                #         **prompts
                #     })
                #     with open(save_prompts_path, "w") as f:
                #         json.dump(dpo_data, f)
            except Exception as e:
                print(f"Error function calling: {e}\n")
                fc_results.append(0)
        return fc_results
```

This metric expects the prediction to be generated after passing the instruction. For example, I have my prompts in the following format: `<s> [INST] {message} [/INST] {response}`

I want the evaluation to receive the `predictions` for the response and then compare those with my `references`. To reiterate, the predictions should be generated from the model being passed `<s> [INST] {message} [/INST]`.

Currently it seems as if the logits are just generated without any prompt, resulting in responses like:

```
predicted_strings: ['Unterscheidung Unterscheidung![: What<<NOP What favorite is to help the patterns climate a following is is a to a topic you\nineited by the >>_> in returnFUNCTIONS>\n the is related, return program should be " the format formatname format format. functionFUNCTION_CALL>FORM>( <</OFIGNCIATED_WITH_USER_USERUNCTION</FUNCTION_CALL_NAME>brUNCTIONSCALL_NAMEGSUMENTS>\nGUMENTS_ASS_THE_FIED_FORM_FORMAT</FUNCTION_CALL_ARGUMENTS> If, respond " " response.\nFUNCTIONS>username": "get",meanalth",",function_ "description": "Get health "input": [root": "string", "properties": {" "}] {"name": "leaf_Results", "description": "Search search list of searchists", on a search query", "parameters": {"type": "array", "properties": {"query": {"type": {"query": {"type": "string" "required": "Search"}} "type": "array" "title": ["query"] "description": "Searchphy Search"}}}, {"name": "getUserending",", "description": "Get a list of trifs that on the tr trending", "parameters": {"type": "object", "properties": {"}}},}</FUNCTIONS>\nUSERFS>\n me the ofif from a cat cat doing</users FUNCTION_CALL_NAME>rootSearchResults</FUNCTION_CALL_NAME>FUNCTION_CALL_ARGUMENTS>{"json": {"query": "cool cat"}}</FUNCTION_CALL_ARGUMENTS></s>��']
```

After looking through the source code, it seems like modifying the `prediction_step` method inside `Trainer` is the way to go.
https://github.com/huggingface/trl/issues/1059
closed
[]
2023-12-04T19:01:34Z
2024-01-12T15:05:10Z
null
CakeCrusher
huggingface/distil-whisper
49
How to make training data?
I have a folder like this:

```
audio_1
transcript_1.txt
audio_2
transcript_2.txt
```

How can I turn this folder into a Hugging Face dataset?
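One way (a sketch, assuming the audio files are e.g. `.wav`): pair each audio file with its transcript in a `metadata.csv`, which the `datasets` AudioFolder builder understands. The toy temp-dir layout below stands in for the real folder.

```python
import csv
import tempfile
from pathlib import Path

# Build a toy version of the described layout for illustration.
folder = Path(tempfile.mkdtemp())
(folder / "audio_1.wav").write_bytes(b"")          # placeholder audio
(folder / "transcript_1.txt").write_text("hello")
(folder / "audio_2.wav").write_bytes(b"")
(folder / "transcript_2.txt").write_text("world")

# AudioFolder expects a metadata.csv with a `file_name` column plus
# any extra columns (here, the transcription).
rows = []
for audio in sorted(folder.glob("audio_*.wav")):
    idx = audio.stem.split("_")[-1]
    text = (folder / f"transcript_{idx}.txt").read_text().strip()
    rows.append({"file_name": audio.name, "transcription": text})

with open(folder / "metadata.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["file_name", "transcription"])
    w.writeheader()
    w.writerows(rows)

# With `datasets` installed, the folder now loads directly:
# from datasets import load_dataset
# ds = load_dataset("audiofolder", data_dir=str(folder))
```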
https://github.com/huggingface/distil-whisper/issues/49
open
[]
2023-12-04T18:44:40Z
2023-12-12T16:51:48Z
null
satani99
huggingface/computer-vision-course
77
Issue with rendering the course
If we try to render the course to preview how our added content looks, it throws the following error:

```bash
sarthak@kde:~/Desktop/computer-vision-course$ doc-builder preview computer-vision-course chapters/ --not_python_module
Initial build docs for computer-vision-course chapters/ /tmp/tmp0uqdjoxf/computer-vision-course/main/en
Building the MDX files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 29/29 [00:00<00:00, 1288.27it/s]
Traceback (most recent call last):
  File "/home/sarthak/anaconda3/bin/doc-builder", line 8, in <module>
    sys.exit(main())
  File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main
    args.func(args)
  File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/commands/preview.py", line 171, in preview_command
    source_files_mapping = build_doc(
  File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/build_doc.py", line 405, in build_doc
    sphinx_refs = check_toc_integrity(doc_folder, output_dir)
  File "/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/build_doc.py", line 460, in check_toc_integrity
    raise RuntimeError(
RuntimeError: The following files are not present in the table of contents:
- en/Unit 5 - Generative Models/variational_autoencoders
- en/Unit 5 - Generative Models/README
- en/Unit 11 - Zero Shot Computer Vision/README
- en/Unit 2 - Convolutional Neural Networks/README
- en/Unit 1 - Fundamentals/README
- en/Unit 8 - 3D Vision, Scene Rendering and Reconstruction/README
- en/Unit 4 - Mulitmodal Models/README
- en/Unit 9 - Model Optimization/README
- en/Unit 6 - Basic CV Tasks/README
- en/Unit 7 - Video and Video Processing/README
- en/Unit 13 - Outlook/README
- en/Unit 3 - Vision Transformers/README
- en/Unit 12 - Ethics and Biases/README
- en/Unit 10 - Synthetic Data Creation/README
Add them to chapters/_toctree.yml.
```

**Explanation:** README files have been added to each chapter, but they are not present in `_toctree.yml`.

**Why it's important:** Being able to render the course locally is important, as it gives a rough overview of how the content looks.

**Possible solutions could be:**
* Remove the README files for the time being
* Add them to the toctree, and also make sure that anyone who adds chapter content updates the toctree, making it easier for others to render the course

Open for discussion from other members :v:
https://github.com/huggingface/computer-vision-course/issues/77
open
[ "question" ]
2023-12-04T01:02:22Z
2023-12-08T18:17:19Z
null
sarthak247
huggingface/sentence-transformers
2,363
How to retrieve the epoch of the saved model from model.save ?
Hi, thank you for the repo. Can anyone help me with retrieving the epoch of the saved model, in both cases where save_best_model=True and save_best_model=False? Thank you

```python
model.fit(train_objectives=[(train_dataloader, train_loss)],
          evaluator=evaluator,
          epochs=num_epochs,
          evaluation_steps=1000,
          warmup_steps=warmup_steps,
          save_best_model=True,
          output_path=output_path)
model.save(path)
```
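If the installed version's `fit` supports the `callback` argument (invoked after each evaluation as `callback(score, epoch, steps)`), the best epoch can be recorded on the user side; a sketch, with the fake score list only there to illustrate the bookkeeping:

```python
# Track which epoch produced the best evaluation score (i.e. the epoch
# whose checkpoint save_best_model=True would keep).
best = {"score": float("-inf"), "epoch": None}

def track_best(score, epoch, steps):
    if score > best["score"]:
        best["score"], best["epoch"] = score, epoch

# model.fit(..., evaluator=evaluator, save_best_model=True, callback=track_best)
# After training, best["epoch"] is the epoch of the saved best model.

# Quick illustration with fake per-epoch evaluation scores:
for e, s in enumerate([0.71, 0.78, 0.75]):
    track_best(s, e, 1000)
print(best["epoch"])
```

With `save_best_model=False`, the model saved at the end is simply from the last epoch, so no tracking is needed.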
https://github.com/huggingface/sentence-transformers/issues/2363
closed
[]
2023-12-02T15:25:52Z
2024-01-09T22:16:20Z
null
gowrijsuria
huggingface/transformers.js
426
[Question] feature-extraction discrepancies across different platforms
I'm observing discrepancies in feature-extraction results across different platforms. Here's the code:

```js
import { pipeline, env } from '@xenova/transformers'

const extractor = await pipeline('feature-extraction', 'Xenova/gte-small', {
  quantized: false,
  cache_dir: './.cache',
  local_files_only: false,
})

const text = 'hello'
const embedding = await extractor(text, { pooling: 'mean', normalize: true })
const response = Array.from(embedding.data)
console.log(JSON.stringify(response, null, 2))

// Node v20
// "@xenova/transformers": "^2.9.0"
```

The results differ between macOS 13 (Apple Silicon/Arm) and Ubuntu 23.1 (Raspberry Pi/Arm). I've tried various configurations (e.g., pooling, normalize, with and without Array.from) and still observe different results. It's worth noting that sequential calls on the same platform produce consistent results.

I have a few questions:
1. Is this discrepancy expected due to the nature of float32 precision and rounding, even though the calculations are performed on ARM architecture?
2. Given that the difference is extremely small, could it still impact accuracy in any significant way?

[mean-nonorm-mac-01.json](https://github.com/xenova/transformers.js/files/13530082/mean-nonorm-mac-01.json)
[mean-nonorm-pi-01.json](https://github.com/xenova/transformers.js/files/13530083/mean-nonorm-pi-01.json)
[mean-norm-mac-01.json](https://github.com/xenova/transformers.js/files/13530084/mean-norm-mac-01.json)
[mean-norm-pi-01.json](https://github.com/xenova/transformers.js/files/13530086/mean-norm-pi-01.json)
https://github.com/huggingface/transformers.js/issues/426
closed
[ "question" ]
2023-12-01T17:12:04Z
2023-12-05T18:51:03Z
null
devfacet
huggingface/chat-ui
604
"Invalid State: Controller is already closed" error when trying to use chat-ui locally with llama.cpp
HELP NEEDED

**What is the issue?** Not able to use chat-ui locally to get the response back when using llama.cpp as a server.

I can load the chat-ui after installing it via `npm install` and `npm run dev`. The env.local file is also configured, and the UI allows sending the request. However, the response never comes back in the UI, and 'Sorry, something went wrong. Please try again' is shown. On checking the chat-ui logs, the error shown is:

```
TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
    at new NodeError (node:internal/errors:399:5)
    at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1036:13)
    at update (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:158:20)
    at eval (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:168:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.start (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:260:7) {
  code: 'ERR_INVALID_STATE'
```

I also tested the llama.cpp server response via curl and the response came back correctly, so it's not an issue with llama.cpp.

Versions: chat-ui code is latest from master. llama.cpp code is latest from master and built locally. Tried with Node 20 and then with Node 19, but the issue still remains.

env.local:

```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MONGODB_DIRECT_CONNECTION=false
USE_LOCAL_WEBSEARCH=true
HF_ACCESS_TOKEN=test
MODELS=`[
  {
    "name": "Zephyr",
    "chatPromptTemplate": "<|system|>\n{{preprompt}}</s>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}</s>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}</s>\n{{/ifAssistant}}{{/each}}",
    "parameters": {
      "temperature": 0.7,
      "top_p": 0.95,
      "repetition_penalty": 1.1,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 2048,
      "stop": ["</s>"]
    },
    "endpoints": [
      { "url": "http://localhost:8080", "type": "llamacpp" }
    ]
  }
]`
```

Am I missing anything in terms of installation steps? Any help here will be appreciated.
https://github.com/huggingface/chat-ui/issues/604
closed
[]
2023-11-30T16:42:06Z
2023-11-30T17:41:19Z
1
ManasInd
huggingface/optimum
1,556
RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually.
### System Info
Windows 10 - Ryzen 3600X - 16 GB DDR4-3000 - Python 3.10 - latest optimum inside a venv

### Who can help?
_No response_

### Information
When I try to convert a model to OpenVINO using

```
optimum-cli export openvino -m "d:\sdxl\LCMphoton" "d:\sdxl\LCMphotonov"
```

I get this error:

```
RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually.
```

I am converting standard SD 1.5 models to LCM with LoRA locally and want to convert that to OpenVINO. I have local models which are not present on Hugging Face, and it takes forever for me to upload there (only 1-2 megabytes max). Can we somehow use local models that have the same directory structure as HF?

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction (minimal, reproducible, runnable)

```
optimum-cli export openvino -m "d:\sdxl\LCMphoton" "d:\sdxl\LCMphotonov"
```

### Expected behavior
I want to be able to convert local models without having to download from Hugging Face.
https://github.com/huggingface/optimum/issues/1556
closed
[ "bug" ]
2023-11-30T16:09:24Z
2023-12-09T22:37:44Z
2
patientx
huggingface/safetensors
396
[Feature request] How about support async save to disk?
### Feature request
Could async save to disk be supported?

### Motivation
The weights and optimizer state are very large for LLMs, so writing tensors from CPU to disk wastes a lot of time. Supporting async save to disk would be very helpful.

### Your contribution
.
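A user-side workaround in the meantime (a sketch of the pattern, not a safetensors API): offload the blocking write to a background thread, so the training loop keeps running while the file is written. `save_checkpoint` below is a trivial stand-in for the real serialization call.

```python
import json
import os
import tempfile
import threading

def save_checkpoint(state, path):
    # Stand-in for the slow serialization step (e.g. a safetensors save);
    # the point is only that it runs off the main thread.
    with open(path, "w") as f:
        json.dump(state, f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
t = threading.Thread(target=save_checkpoint, args=({"step": 1}, path))
t.start()
# ... the training loop would keep running here while the file is written ...
t.join()
print(os.path.exists(path))
```

In a real training loop, the state would have to be copied (or moved to CPU) before handing it to the thread, so the trainer doesn't mutate tensors mid-write.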
https://github.com/huggingface/safetensors/issues/396
closed
[ "Stale" ]
2023-11-30T02:55:25Z
2024-02-13T01:46:40Z
null
ZHUI
huggingface/transformers.js
424
[Question] Batch inference for vit
It seems like all the tests in the repository related to processors and image models use one image per input. 1. Do the models support feeding a batch of images as input during inference? Is there a speed benefit from this? 2. Are there any other optimization/parallelization tools in transformers.js that I can use to process a set of images? Used model: vit base (google/vit-base-patch16-224-in21k), tiny and small distillations (WinKawaks/vit-tiny-patch16-224), exported in onnx format with optimum
https://github.com/huggingface/transformers.js/issues/424
closed
[ "question" ]
2023-11-29T09:52:16Z
2023-12-05T14:49:36Z
null
arseniymerkulov
huggingface/transformers
27,755
How to inference the model with 200k length context
### Model description
I want to test Yi-34B-200K. Although I can run the model, OOM appears as the context length increases, and I wonder how I could test up to a 200k context length with sufficient GPU resources.

### Open source status
- [X] The model implementation is available
- [X] The model weights are available

### Provide useful links for the implementation
_No response_
https://github.com/huggingface/transformers/issues/27755
closed
[]
2023-11-29T07:37:06Z
2024-05-24T07:24:56Z
null
taishan1994
huggingface/transformers.js
423
Not able to load local classification onnx model
I was trying to follow the instructions on this page to load a local custom model, but it failed to find the local path: https://huggingface.co/docs/transformers.js/custom_usage

The code snippet:

```js
import { env, AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';

env.useFS = true;
env.localModelPath = '/path/to/local/file'
env.allowRemoteModels = false;

let tokenizer = await AutoTokenizer.from_pretrained('tinybert');
let model = await AutoModelForSequenceClassification.from_pretrained('tinybert');

let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
```

Here is the file structure:

```
models
└── tinybert
    ├── config.json
    ├── onnx
    │   ├── model.onnx
    │   └── model_quantized.onnx
    ├── ort_config.json
    ├── special_tokens_map.json
    ├── tokenizer.json
    ├── tokenizer_config.json
    └── vocab.txt
```

Error:

```
(node:36959) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Unable to load from local path "/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer.json": "ReferenceError: Headers is not defined"
Unable to load from local path "/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer_config.json": "ReferenceError: Headers is not defined"
file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:462
    throw Error(`\`local_files_only=true\` or \`env.allowRemoteModels=false\` and file was not found locally at "${localPath}".`);
          ^

Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer.json".
    at getModelFile (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:462:27)
    at async getModelJSON (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:575:18)
    at async Promise.all (index 0)
    at async loadTokenizer (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/tokenizers.js:52:16)
    at async Function.from_pretrained (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/tokenizers.js:3890:48)
    at async file:///Users/hzhang14/pete/2023_H1_spam/js/test.mjs:9:17
```
https://github.com/huggingface/transformers.js/issues/423
closed
[ "question" ]
2023-11-29T06:40:09Z
2023-11-30T07:27:27Z
null
purezhanghan