Dataset columns (ranges as shown by the viewer):
- repo: string, 147 distinct values
- number: int64, 1 to 172k
- title: string, length 2 to 476
- body: string, length 0 to 5k
- url: string, length 39 to 70
- state: string, 2 distinct values
- labels: list, length 0 to 9
- created_at: timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18
- updated_at: timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39
- comments: int64, 0 to 58
- user: string, length 2 to 28
huggingface/datasets
6,267
Multi label class encoding
### Feature request

I have a multi-label dataset and I'd like to be able to class-encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multi-labels. Here's an example of what I'd like to encode:

```
data = {
    'text': ['one', 'two', 'three', 'four'],
    'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]
}
dataset = Dataset.from_dict(data)
dataset = dataset.class_encode_column('labels')
```

I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base). From what I noticed, the `ClassLabel` feature is stored with an underlying raw data type of int, so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from Arrow. I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented-out tests; going for speed of POC here and didn't want to fight the IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset is correct and the dataset feature is as expected.

After digging more I did notice a few issues:

- After loading from disk, the type of the `labels` column is `Sequence`, not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel`, but I couldn't find the encode / decode code paths that handle this.
- I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this misses the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this, as I haven't fully understood the encode / decode flow for datasets.

I suspect my simple implementation will need some improvement, as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.

### Motivation

See above: would like to support multi-label class encodings.

### Your contribution

This would be a big help for us and we're open to contributing, but I'll likely need some guidance on how to implement it to fit the encode / decode flow. Some suggestions on tests would be great too; I'm guessing that in addition to the class-encode tests (which I'll need to expand) we'll need encode / decode tests.
https://github.com/huggingface/datasets/issues/6267
open
[ "enhancement" ]
2023-09-27T22:48:08Z
2023-10-26T18:46:08Z
7
jmif
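The request above boils down to applying `ClassLabel`'s string-to-int table element-wise over a sequence. A library-free sketch of that mapping (the function names are illustrative, not the `datasets` API):

```python
# Library-free sketch of what a multi-label class encoding needs to store:
# one global str -> int mapping, applied element-wise to each row's label list.
# (A real `MultiLabel` feature would keep `names` in the schema and store each
# row as a sequence of ints, as the issue proposes.)

def build_label_mapping(label_lists):
    """Collect every distinct label across rows and assign stable integer ids."""
    names = sorted({label for row in label_lists for label in row})
    return names, {name: idx for idx, name in enumerate(names)}

def encode_rows(label_lists, str2int):
    """Replace each label string with its integer id, row by row."""
    return [[str2int[label] for label in row] for row in label_lists]

labels = [["a", "b"], ["b"], ["b", "c"], ["a", "d"]]
names, str2int = build_label_mapping(labels)
encoded = encode_rows(labels, str2int)
```

With the issue's example data this yields `names` of length 4 (the `num_classes` the POC asserts on) and integer rows that round-trip losslessly through Arrow as sequences of ints.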
huggingface/huggingface_hub
1,698
How to change cache dir?
### Describe the bug

By default, all downloaded models are stored in

> cache_path = '/root/.cache/huggingface/hub'

Is there a way to change this dir to something else? I tried to set `HUGGINGFACE_HUB_CACHE`:

```
import os
os.environ['HUGGINGFACE_HUB_CACHE'] = '/my_workspace/models_cache'
```

but it doesn't work.

### Reproduction
_No response_

### Logs
_No response_

### System info

```shell
- huggingface_hub version: 0.17.2
- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: adhikjoshi
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.2.0.dev20230922+cu118
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.0.1
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: 2.3.0
- aiohttp: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
https://github.com/huggingface/huggingface_hub/issues/1698
closed
[ "bug" ]
2023-09-27T07:45:30Z
2023-09-27T09:08:34Z
null
adhikjoshi
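For the cache question above: `huggingface_hub` resolves its cache path from the environment when the library is first imported, so the variable has to be set before that first import (or exported in the shell that launches Python); setting it afterwards, or after another package such as transformers/diffusers has already imported the hub, is silently ignored. A minimal sketch; the path is hypothetical and the actual download call is left as a comment so the snippet stands alone without the library installed:

```python
import os

# Set the variable *before* the first `huggingface_hub` import anywhere in
# the process. Once the library is imported, its cache constant is fixed.
os.environ["HUGGINGFACE_HUB_CACHE"] = "/my_workspace/models_cache"  # hypothetical path

# Only now import and use the library, e.g.:
#   from huggingface_hub import snapshot_download
#   snapshot_download("gpt2")   # files should land under the new cache dir

print(os.environ["HUGGINGFACE_HUB_CACHE"])
```

Equivalently, export the variable in the shell (`export HUGGINGFACE_HUB_CACHE=/my_workspace/models_cache`) before starting Python, which sidesteps the import-order pitfall entirely.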
huggingface/accelerate
2,010
How to set different seed for DDP data sampler for every epoch
Hello there! I am using the following code to build my data loader.

```python
data_loader_train = DataLoader(
    dataset_train,
    collate_fn=collate_fn,
    batch_size=cfg.data.train_batch_size,
    num_workers=cfg.data.num_workers,
    pin_memory=cfg.data.pin_memory,
)
data_loader_train = accelerator.prepare(data_loader_train)
```

I am using DDP for training and I want to set a different data-sampling seed for every epoch, so that different epochs will have different batch orders. How can I do that?
https://github.com/huggingface/accelerate/issues/2010
closed
[]
2023-09-27T02:46:10Z
2023-09-27T11:32:22Z
null
Mountchicken
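On the per-epoch reshuffling asked above: `torch.utils.data.DistributedSampler` derives each epoch's shuffle from `seed + epoch` once `set_epoch(epoch)` is called at the start of every epoch (where `set_epoch` lives on an accelerate-prepared loader depends on your version, so check its docs). A library-free sketch of the underlying idea:

```python
import random

def epoch_order(num_samples, base_seed, epoch):
    """Deterministic per-epoch shuffle: the shuffle seed is base_seed + epoch,
    so the same (seed, epoch) pair always reproduces the same order, while
    different epochs see different orders."""
    indices = list(range(num_samples))
    random.Random(base_seed + epoch).shuffle(indices)
    return indices

# Three epochs over an 8-sample dataset: same permutation set, different orders.
orders = [epoch_order(8, base_seed=42, epoch=e) for e in range(3)]
```

In DDP each rank would then take its own slice of the shuffled indices; the seed-plus-epoch trick is what keeps every rank's view consistent without communication.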
huggingface/transformers
26,412
How to run Trainer + DeepSpeed + Zero3 + PEFT
### System Info

- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)

### Who can help?
@ArthurZucker and @younesbelkada and @pacman100 and @muellerzr

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

[This script](https://gist.github.com/BramVanroy/f2abb3940111b73ae8923822ef6096dd) is a modification of the official run_clm script. The only additions are the BNB config and PEFT. Yet, I cannot get it to work with a [deepspeed zero3 config](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_falcon_180b_z3.json).

Requirements to install:

```
accelerate >= 0.12.0
torch >= 1.3
datasets >= 1.8.0
sentencepiece != 0.1.92
protobuf
evaluate
scikit-learn
trl
peft
bitsandbytes
```

In the past I have had issues with low_cpu_mem_usage, but neither a true nor a false value gets this to work:

Command 1:

```sh
deepspeed --include="localhost:0,1" run_clm.py \
    --model_name_or_path facebook/opt-125m \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm \
    --deepspeed deepspeed_configs/ds_config_zero3.json \
    --low_cpu_mem_usage true
```

==> `ValueError: DeepSpeed Zero-3 is not compatible with low_cpu_mem_usage=True or with passing a device_map.`

Command 2:

```sh
deepspeed --include="localhost:0,1" run_clm.py \
    --model_name_or_path facebook/opt-125m \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm \
    --deepspeed deepspeed_configs/ds_config_zero3.json \
    --low_cpu_mem_usage false
```

==> `ValueError: weight is on the meta device, we need a value to put in on 0.`

### Expected behavior

Any option to make this combination of Trainer + DeepSpeed + Zero3 + PEFT work.
https://github.com/huggingface/transformers/issues/26412
open
[ "WIP" ]
2023-09-26T10:31:46Z
2024-01-11T15:40:02Z
null
BramVanroy
huggingface/setfit
423
[Q] How to examine correct/wrong predictions in trainer.evaluate()
Hello, after doing `metrics = trainer.evaluate()` as shown in the example code, is there a way to examine which rows in the evaluation data set were predicted correctly? Thanks!
https://github.com/huggingface/setfit/issues/423
closed
[ "question" ]
2023-09-25T23:41:53Z
2023-11-24T13:04:45Z
null
youngjin-lee
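`trainer.evaluate()` only returns aggregate metrics, so row-level inspection means running the trained model's predictions over the eval split yourself and keeping the per-row outcome (in setfit the predictions would come from calling the trained model on the eval texts; the exact predict API depends on your version). A library-free sketch of the bookkeeping:

```python
def split_by_correctness(texts, references, predictions):
    """Pair each eval row with its gold label and prediction, and bucket the
    rows into correct vs. wrong for inspection."""
    correct, wrong = [], []
    for row in zip(texts, references, predictions):
        (correct if row[1] == row[2] else wrong).append(row)
    return correct, wrong

# Toy eval split; in practice `preds` would come from the trained model.
texts = ["good movie", "terrible plot", "fine acting"]
refs = [1, 0, 1]
preds = [1, 1, 1]
correct, wrong = split_by_correctness(texts, refs, preds)
```

Printing `wrong` then shows exactly which rows the metric penalized, which is usually more actionable than the aggregate score alone.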
huggingface/chat-ui
461
The custom endpoint response doesn't stream even though the endpoint is sending streaming content
@nsarrazin I'm transmitting the streaming response to the chat UI, but it displays all the content simultaneously rather than progressively streaming the text generation part. Can you help me address this issue? Reference: #380
https://github.com/huggingface/chat-ui/issues/461
open
[ "support" ]
2023-09-25T07:43:57Z
2023-10-29T11:21:04Z
2
nandhaece07
huggingface/autotrain-advanced
279
How to run AutoTrain Advanced UI locally
How to run AutoTrain Advanced UI locally 😢
https://github.com/huggingface/autotrain-advanced/issues/279
closed
[]
2023-09-25T07:25:51Z
2024-04-09T03:20:17Z
null
LronDC
huggingface/transformers.js
328
[Question] React.js serve sentence bert in browser keep reporting models not found.
My code:

```javascript
export const useInitTransformers = () => {
  const init = async () => {
    // @ts-ignore
    env.allowLocalModels = false;
    extractor = await pipeline(
      "feature-extraction",
      "Xenova/all-mpnet-base-v2",
    );
  };
  return { init };
};
```

I'm building a frontend with React that can serve sentence BERT directly in the browser, but I have no idea why, even though I add the line `env.allowLocalModels = false` before the pipeline loads the model, in the production environment it still tries to access the model locally at `/models/...`, which will never exist in this use case.

**Is there any way I can bypass this check and directly pull the model from remote?**

![image](https://github.com/xenova/transformers.js/assets/26846727/9b6222d7-cb02-44c1-b4e5-b3ab3f52797e)
https://github.com/huggingface/transformers.js/issues/328
closed
[ "question" ]
2023-09-24T15:51:47Z
2024-10-18T13:30:11Z
null
bianyuanop
huggingface/candle
944
Question: How to tokenize text for Llama?
Hello everybody, How can I tokenize text to use with Llama? I want to fine-tune Llama on my custom data, so how can I tokenize from a String and then detokenize the logits into a String? I have looked at the Llama example for how to detokenize, but cannot find any clear documentation on how the implementation actually works for outputting results during training. Thanks!
https://github.com/huggingface/candle/issues/944
closed
[]
2023-09-23T18:19:56Z
2023-09-23T23:01:13Z
null
EricLBuehler
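On the tokenization question above: the candle examples generally load a `tokenizer.json` through the `tokenizers` crate, and conceptually training only needs an encode (String to ids) and decode (ids to String) pair around the model. A toy, library-free sketch of that round trip; the vocabulary is invented and stands in for a real Llama tokenizer:

```python
# Toy stand-in for a real tokenizer: a vocabulary plus its inverse. A real
# tokenizer.json adds subword merges and scores, but the training loop only
# ever sees the resulting integer ids.
vocab = {"<s>": 0, "hello": 1, "world": 2, "</s>": 3}
id_to_token = {i: t for t, i in vocab.items()}

def encode(text):
    """String -> ids, prepending a BOS token as Llama-style tokenizers do."""
    return [vocab["<s>"]] + [vocab[tok] for tok in text.split()]

def decode(ids):
    """ids -> String, dropping special tokens."""
    return " ".join(id_to_token[i] for i in ids if id_to_token[i] not in ("<s>", "</s>"))

ids = encode("hello world")
text = decode(ids)
```

In Rust the equivalent calls would be the `tokenizers` crate's `Tokenizer::from_file`, `encode`, and `decode`; detokenizing logits additionally requires an argmax (or sampling) step to turn each logit row into an id first.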
huggingface/transformers.js
327
Calling pipeline returns `undefined`. What are possible reasons?
The repository if you need it ▶▶▶ [China Cups](https://github.com/piscopancer/china-cups)

## Next 13.5 / server-side approach

Just started digging into your library. Sorry for stupidity.

### `src/app/api/translate/route.ts` 👇

```ts
import { NextRequest, NextResponse } from 'next/server'
import { PipelineSingleton } from '@/utils/pipeline'

export async function GET(request: NextRequest) {
  const text = request.nextUrl.searchParams.get('text')
  if (!text) {
    return NextResponse.json(
      {
        error: 'Missing text',
      },
      { status: 400 },
    )
  }
  const translator = await PipelineSingleton.getInstance()
  const translation = await translator(text)
  console.log(translation) // undefined
  return NextResponse.json(translation)
}
```

### `src/utils/pipeline.ts` 👇

This singleton must be fine, I suppose.

```ts
import { Pipeline, pipeline } from '@xenova/transformers'
import { PretrainedOptions } from '@xenova/transformers/types/models'

function DeclarePipeline() {
  return class PipelineSingleton {
    static task = 'question-answering'
    static model = undefined as undefined | string
    static instance = null as null | Promise<Pipeline>

    static async getInstance(options?: PretrainedOptions) {
      if (!this.instance) {
        this.instance = pipeline(this.task, this.model, options)
      }
      return this.instance
    }
  }
}

export const PipelineSingleton = (() => {
  if (process.env.NODE_ENV !== 'production') {
    const gl = global as any
    if (!gl.PipelineSingleton) {
      gl.PipelineSingleton = DeclarePipeline()
    }
    return gl.PipelineSingleton
  }
  return DeclarePipeline()
})() as ReturnType<typeof DeclarePipeline>
```

### `src/app/page.tsx`

This is how I query it 👇 Btw, no errors occur at this stage.

```tsx
export default async function HomePage({ searchParams }: THomePage) {
  const text = 'Hello'
  const translation = await axios.get(`/translate?text=${text}`).then((res) => res.data())
  // const translation = await fetch(`/translate?text=${encodeURIComponent(text)}`).then((res) => res.json())
  return <pre>{JSON.stringify(translation)}</pre>
```

## One more very important thing

When I **manually** go to `http://localhost:3000/api/translate?text=Hello` I very happily get this error:

```
⨯ TypeError: Value is not JSON serializable
    at serializeJavascriptValueToJSONString (node:internal/deps/undici/undici:1203:15)
    at Response.json (node:internal/deps/undici/undici:6746:55)
    at NextResponse.json (webpack-internal:///(rsc)/./node_modules/next/dist/server/web/spec-extension/response.js:66:35)
    at GET (webpack-internal:///(rsc)/./src/app/api/translate/route.ts:24:95)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async C:\web-dev\next\china-cups\node_modules\next\dist\compiled\next-server\app-route.runtime.dev.js:1:66877
```

👆 the browser cannot load this url if text=... is present 😟. 💖
https://github.com/huggingface/transformers.js/issues/327
closed
[ "question" ]
2023-09-23T15:57:24Z
2023-09-24T06:55:08Z
null
piscopancer
huggingface/optimum
1,410
Export TrOCR to ONNX
I was trying to export my fine-tuned TrOCR model to ONNX using the following command. I didn't get any errors, but in the onnx folder only the encoder model is saved.

```
!python -m transformers.onnx --model=model_path --feature=vision2seq-lm onnx/ --atol 1e-2
```

So, regarding this, I have 2 questions.

1. How to save decoder_model.onnx, so that I can use [this inference script](https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39).
2. If it is not possible to export the decoder model to ONNX, how can I perform inference using encoder_model.onnx? According to my understanding, model.generate() takes time to generate output, while the decode method doesn't consume as much time compared to the generate method. Is there any way to use encoder_model.onnx with the existing decoder model in order to optimize response time?

```
p = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(
    p,
    do_sample=True,
    top_k=5,
    top_p=0.1,
    num_beams=4,
    num_return_sequences=1,
    output_scores=True,
    use_cache=True,
    return_dict_in_generate=True,
)
generated_text = processor.batch_decode(generated_ids.sequences, skip_special_tokens=True)[0]
```

Please correct me if this approach to optimizing response time is wrong. Thanks.
https://github.com/huggingface/optimum/issues/1410
closed
[ "onnx" ]
2023-09-23T09:19:50Z
2024-10-15T16:21:52Z
2
VallabhMahajan1
huggingface/chat-ui
459
Chats Stop generation button is broken?
Whenever I'm using the Chat UI on hf.co/chat and I press the stop generation button, it deletes both the prompt and the response?
https://github.com/huggingface/chat-ui/issues/459
open
[ "support" ]
2023-09-21T19:38:38Z
2023-10-08T00:44:44Z
4
VatsaDev
huggingface/chat-ui
457
Custom Models breaking Chat-ui
Setting a custom model in .env.local is now breaking chat-ui for me. @jackielii @nsarrazin

If I start mongo and then run `npm run dev` with a .env.local file including only the mongo url, there is no issue. Then I add the following:

```
MODELS=`[
  {
    "name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
    "datasetName": "OpenAssistant/oasst1",
    "description": "A good alternative to ChatGPT",
    "websiteUrl": "https://open-assistant.io",
    "userMessageToken": "<|prompter|>", # This does not need to be a token, can be any string
    "assistantMessageToken": "<|assistant|>", # This does not need to be a token, can be any string
    "userMessageEndToken": "<|endoftext|>", # Applies only to user messages. Can be any string.
    "assistantMessageEndToken": "<|endoftext|>", # Applies only to assistant messages. Can be any string.
    "preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
    "promptExamples": [
      {
        "title": "Write an email from bullet list",
        "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
      },
      {
        "title": "Code a snake game",
        "prompt": "Code a basic snake game in python, give explanations for each step."
      },
      {
        "title": "Assist in a task",
        "prompt": "How do I make a delicious lemon cheesecake?"
      }
    ],
    "parameters": {
      "temperature": 0.9,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024,
      "stop": ["<|endoftext|>"] # This does not need to be tokens, can be any list of strings
    }
  }
]`
```

and now I get:

```
Unexpected token in JSON at position 424
SyntaxError: Unexpected token in JSON at position 424
    at JSON.parse (<anonymous>)
    at eval (/Users/ronanmcgovern/TR/chat-ui/src/lib/server/models.ts:75:14)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async instantiateModule (file:///Users/ronanmcgovern/TR/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9
```

The specific line of code being referenced is this:

```
"Based on the conversation history (my previous questions are: {{previousMessages}}), give me an appropriate query to answer my question for google search. You should not say more than query. You should not say any words except the query. For the context, today is {{currentDate}}" +
```
https://github.com/huggingface/chat-ui/issues/457
closed
[ "support" ]
2023-09-21T11:12:42Z
2023-09-21T16:03:30Z
10
RonanKMcGovern
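The likely culprit in the report above is the `# This does not need to be a token` annotations: JSON has no comment syntax, so `JSON.parse` fails at the first `#` (plausibly the "position 424" in the error). A small demonstration of the same failure and fix, using Python's parser since JSON's grammar is the same everywhere:

```python
import json

# A value annotated with a `#` comment, as in the MODELS block above,
# is not valid JSON: the parser expects `,` or `}` after the string.
with_comments = '{"userMessageToken": "<|prompter|>" # any string\n}'
try:
    json.loads(with_comments)
    failed = False
except json.JSONDecodeError:
    failed = True

# The identical value parses once the comment is stripped.
without_comments = '{"userMessageToken": "<|prompter|>"}'
parsed = json.loads(without_comments)
```

So deleting every `# ...` annotation from the MODELS value (they appear in the chat-ui README as explanatory notes, not as literal config) should make the `.env.local` parse again.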
huggingface/datasets
6,252
exif_transpose not done to Image (PIL problem)
### Feature request

I noticed that some of my images loaded using PIL have EXIF metadata that can rotate them on load. Since datasets.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted); thus, for tasks such as object detection and LayoutLM, this can create inconsistencies between input bboxes and input images. For now there is no option in datasets.features.Image to specify that. We need to do the following when preparing examples (when preparing images for training, test or inference):

```
from PIL import Image, ImageOps
pil = ImageOps.exif_transpose(pil)
```

Reference: https://stackoverflow.com/a/63950647/5720150

Is it possible to add this by default to datasets.features.Image, or to add the option to do the ImageOps.exif_transpose? Thank you.

### Motivation

Prevent having inverted data related to EXIF metadata that may affect object detection tasks.

### Your contribution

Changing datasets.features.Image; I can help with that.
https://github.com/huggingface/datasets/issues/6252
closed
[ "enhancement" ]
2023-09-21T08:11:46Z
2024-03-19T15:29:43Z
2
rhajou
huggingface/optimum
1,401
BUG: running python file called onnx.py causes circular errors.
### System Info

```shell
latest optimum, python 3.10, linux cpu.
```

### Who can help?
@JingyaHuang, @echarlaix, @michaelbenayoun

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction (minimal, reproducible, runnable)

https://github.com/huggingface/optimum/issues/1177

Description of bug: If I create a py file to run my own scripts and name it "onnx.py", it wreaks all kinds of havoc, specifically circular-import errors. It took me a while to figure out it was caused by "onnx.py" being a reserved name. This is the first time I've ever come across such an issue. I'm not sure if other modules prevent these issues by ringfencing their scope to specific folders or namespaces, or whether it's just bad luck.

Is it possible to ringfence this kind of issue by either renaming the internal onnx.py file to something that users would never use, OR adding a validation check that tells the user which filenames are reserved, OR at least updating the error message so that users don't need half a day to figure out what's causing the issue? Many thanks

### Expected behavior

Either I can use any filename for my script (e.g. onnx.py) without issues, OR there's a really clear error message that states "please do not use the following reserved names for your python scripts: eg1.py, eg2.py, etc." Much appreciated.
https://github.com/huggingface/optimum/issues/1401
open
[ "bug" ]
2023-09-21T04:12:49Z
2023-10-05T14:32:40Z
1
gidzr
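The mechanism behind the report above is Python's module search order rather than anything optimum-specific: the running script's directory sits first on `sys.path`, so a local `onnx.py` wins over the installed `onnx` package, and the package's internal `import onnx` lines resolve back to the half-initialized user file, producing circular-import errors. A stdlib-only reproduction using `json` as the shadowed name:

```python
import os
import subprocess
import sys
import tempfile

# Create a directory containing a file that shadows a stdlib module name,
# then run a script from that directory: Python resolves the local file first,
# exactly how a user-level onnx.py hijacks `import onnx`.
with tempfile.TemporaryDirectory() as workdir:
    with open(os.path.join(workdir, "json.py"), "w") as f:
        f.write("value = 'local json.py shadows the stdlib'\n")
    with open(os.path.join(workdir, "main.py"), "w") as f:
        f.write("import json\nprint(getattr(json, 'value', 'stdlib json'))\n")
    result = subprocess.run(
        [sys.executable, "main.py"], cwd=workdir, capture_output=True, text=True
    )
shadowed = result.stdout.strip()
```

Since any importable name can be shadowed this way, libraries can't fully ringfence against it; renaming the user's script (or running with `python -I`, which removes the script directory from the path) is the practical fix.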
huggingface/diffusers
5,124
How to fine tune checkpoint .safetensor
### Describe the bug

I tried to fine-tune a model from a checkpoint (i.e. https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model). I converted the checkpoint to diffusers format using this library: https://github.com/waifu-diffusion/sdxl-ckpt-converter/

The converted model works fine for inference, and the training script works fine if I use a standard base, i.e. "stabilityai/stable-diffusion-xl-base-1.0", but I get an error when starting from the converted model.

### Reproduction

Download checkpoint: https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model

Convert using: https://github.com/waifu-diffusion/sdxl-ckpt-converter/

Start training with:

```
!accelerate launch train_text_to_image_lora_sdxl.py \
  --pretrained_model_name_or_path="/content/drive/MyDrive/talmendoxlSDXL_v11Beta" \
  --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
  --dataset_name="$INSTANCE_DIR_PARSED" \
  --caption_column="text" \
  --resolution=1024 \
  --train_batch_size=1 \
  --num_train_epochs=$TRAIN_EPOCHS \
  --checkpointing_steps=1000000 \
  --learning_rate=$LEARNING_RATE \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --seed=42 \
  --output_dir="$OUTPUT_DIR" \
  --enable_xformers_memory_efficient_attention \
  --gradient_checkpointing \
  --mixed_precision="fp16" \
  --use_8bit_adam
```

### Logs

```shell
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'clip_sample_range', 'dynamic_thresholding_ratio', 'variance_type', 'thresholding'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
  File "/content/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py", line 1271, in <module>
    main(args)
  File "/content/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py", line 554, in main
    text_encoder_one = text_encoder_cls_one.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2740, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /content/drive/MyDrive/talmendoxlSDXL_v11Beta.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 979, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_text_to_image_lora_sdxl.py', '--pretrained_model_name_or_path=/content/drive/MyDrive/talmendoxlSDXL_v11Beta', '--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix', '--dataset_name=/content/instancefolder_parsed', '--caption_column=text', '--resolution=1024', '--train_batch_size=1', '--num_train_epochs=1', '--checkpointing_steps=1000000', '--learning_rate=2e-05', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--seed=42', '--output_dir=/content/lora-trained-xl-colab', '--enable_xformers_memory_efficient_attention', '--gradient_checkpointing', '--mixed_precision=fp16', '--use_8bit_adam']' returned non-zero exit status 1.
```

### System Info

- `diffusers` version: 0.21.0.dev0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Huggingface_hub version: 0.17.2
- Transformers version: 4.33.2
- Accelerate version: 0.21.0
- xFormers version: 0.0.21
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@williamberman, @patrickvonplaten, @sayakpau
https://github.com/huggingface/diffusers/issues/5124
closed
[ "bug", "stale" ]
2023-09-20T22:45:38Z
2023-11-22T15:06:19Z
null
EnricoBeltramo
huggingface/diffusers
5,118
How to use ControlNet's reference_only function with diffusers?
### Model/Pipeline/Scheduler description

Can anyone help me understand how to use ControlNet's reference_only function with diffusers?

### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available (Only relevant if addition is not a scheduler).

### Provide useful links for the implementation
_No response_
https://github.com/huggingface/diffusers/issues/5118
closed
[ "stale" ]
2023-09-20T10:17:53Z
2023-11-08T15:07:34Z
null
sudip550
huggingface/transformers.js
321
[Question] Image Embeddings for ViT
Is it possible to get image embeddings using the Xenova/vit-base-patch16-224-in21k model? We use feature_extractor to get embeddings for sentences. Can we use feature_extractor to get image embeddings?

```js
const model_id = "Xenova/vit-base-patch16-224-in21k";
const image = await RawImage.read("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg");
const classifier = await pipeline("image-classification", model_id);
const { image_embeddings } = await classifier.processor.feature_extractor(image);
```
https://github.com/huggingface/transformers.js/issues/321
closed
[ "question" ]
2023-09-20T01:22:08Z
2024-01-13T01:25:03Z
null
hadminh
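Whichever pipeline ends up exposing it, a single image embedding from a ViT encoder is usually the per-patch token embeddings pooled into one vector (the CLS token or a mean; which of these transformers.js returns for this model is not confirmed here). A library-free sketch of mean pooling with illustrative shapes:

```python
def mean_pool(token_embeddings):
    """Average a [num_tokens x dim] list of vectors into one [dim] vector,
    turning per-patch ViT outputs into a single image embedding."""
    num_tokens = len(token_embeddings)
    dim = len(token_embeddings[0])
    return [sum(vec[i] for vec in token_embeddings) / num_tokens for i in range(dim)]

# e.g. three 2-d token embeddings pooled into one 2-d image embedding
pooled = mean_pool([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```

The same pooling is what sentence-embedding feature extractors typically apply over token embeddings, which is why the sentence and image cases look so similar from the outside.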
huggingface/optimum
1,395
TensorrtExecutionProvider documentation
### System Info

```shell
main, docs
```

### Who can help?
@fxmarty

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction (minimal, reproducible, runnable)

The method described in the docs for [TRT engine building](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#tensorrt-engine-build-and-warmup) is outdated, first mentioned [here](https://github.com/huggingface/optimum/issues/842#issuecomment-1568766399). I tested the dynamic shapes method in `optimum-benchmark` [here](https://github.com/huggingface/optimum-benchmark/pull/55#issuecomment-1721180586).

### Expected behavior

We can update the docs with this snippet:

```python
provider_options = {
    "trt_engine_cache_enable": True,
    "trt_engine_cache_path": "tmp/trt_cache_gpt2_example",
    "trt_profile_min_shapes": "input_ids:1x16,attention_mask:1x16",
    "trt_profile_max_shapes": "input_ids:1x64,attention_mask:1x64",
    "trt_profile_opt_shapes": "input_ids:1x32,attention_mask:1x32",
}

ort_model = ORTModelForCausalLM.from_pretrained(
    "gpt2",
    export=True,
    use_cache=False,
    provider="TensorrtExecutionProvider",
    provider_options=provider_options,
)

ort_model.generate(
    input_ids=torch.tensor([[1] * 16]).to("cuda"),
    max_new_tokens=64 - 16,
    min_new_tokens=64 - 16,
    pad_token_id=0,
    eos_token_id=0,
)
```

though it's still not clear to me what the effect of `trt_profile_opt_shapes` is.
https://github.com/huggingface/optimum/issues/1395
open
[ "documentation", "onnxruntime" ]
2023-09-19T09:06:17Z
2023-09-19T09:57:26Z
1
IlyasMoutawwakil
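The three `trt_profile_*_shapes` options in the snippet above share one `name:AxB,name:AxB` format; as a hedged summary of TensorRT's optimization-profile semantics, min/max bound the input shapes the built engine will accept, while opt is the shape its kernels are tuned for. A small stdlib parser/checker sketch for that format:

```python
def parse_profile(spec):
    """Parse the 'name:AxB,name:AxB' spec used by the trt_profile_*_shapes
    options into a dict of per-input dimension tuples."""
    shapes = {}
    for part in spec.split(","):
        name, dims = part.split(":")
        shapes[name] = tuple(int(d) for d in dims.split("x"))
    return shapes

def within_bounds(shape, min_shape, max_shape):
    """An optimization profile covers any shape between min and max, inclusive."""
    return all(lo <= dim <= hi for dim, lo, hi in zip(shape, min_shape, max_shape))

min_s = parse_profile("input_ids:1x16,attention_mask:1x16")
max_s = parse_profile("input_ids:1x64,attention_mask:1x64")
ok = within_bounds((1, 32), min_s["input_ids"], max_s["input_ids"])
```

Under that reading, generation with sequence lengths between 16 and 64 stays inside the profile above, and opt at 1x32 simply biases which lengths run fastest.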
huggingface/transformers.js
317
How to use xenova/transformers in VSCode Extension
Hey guys! I am trying to use xenova/transformers in CodeStory; we roll a VSCode extension as well, and I am hitting issues trying to get the import working. Here's every flavor of importing the library which I have tried to date:

```
const TransformersApi = Function('return import("@xenova/transformers")')();
const { pipeline, env } = await TransformersApi;
```

```
const { pipeline, env } = await import('@xenova/transformers')
```

```
const TransformersApi = require('@xenova/transformers');
const { pipeline, env } = await TransformersApi;
```

I think the crux of the issue is the node environment which VSCode uses, which does not allow any of these to work, and I keep getting the dreaded:

```
Error [ERR_REQUIRE_ESM]: require() of ES Module /Applications/Aide.app/Contents/Resources/app/extensions/codestory/node_modules/@xenova/transformers/src/transformers.js from /Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js not supported. Instead change the require of transformers.js in /Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js to a dynamic import() which is available in all CommonJS modules.
```

After checking the js code which is generated, it ends up including the require word:

```
__importStar(require('@xenova/transformers'))
```

When I used the first option (the Function wrapper), I got a very weird error btw:

```
[Extension Host] TypeError: A dynamic import callback was not specified.
    at new NodeError (node:internal/errors:399:5)
    at importModuleDynamicallyCallback (node:internal/process/esm_loader:39:9)
    at eval (eval at <anonymous> (/Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js:46:41), <anonymous>:3:1)
```

This is mostly coming from the node version which VSCode uses itself. Do you guys have any suggestions on what I can do about this? Thanks!
https://github.com/huggingface/transformers.js/issues/317
open
[ "question" ]
2023-09-19T01:35:21Z
2024-07-27T20:36:37Z
null
theskcd
huggingface/candle
894
How to fine-tune Llama?
Hello everybody, I am trying to fine-tune the Llama model, but cannot load the safetensors file. I have modified the training loop for debugging and development:

```rust
pub fn run(args: &crate::TrainingCmd, common_args: &crate::Args) -> Result<()> {
    let config_path = match &args.config {
        Some(config) => std::path::PathBuf::from(config),
        None => {
            let api = hf_hub::api::sync::Api::new().unwrap();
            println!("loading the model weights from {}", args.model_id);
            let api = api.model(args.model_id.clone());
            api.get(&args.which_model).unwrap()
        }
    };

    let device = candle_examples::device(common_args.cpu)?;
    let config = Config::tiny();
    let mut varmap = candle_nn::VarMap::new();
    let vb = candle_nn::VarBuilder::from_varmap(&varmap, DType::F32, &device);
    varmap.load(config_path).unwrap();

    /*let cache = Cache::new(false, &config, vb.pp("rot"))?;
    let model = Llama::load(vb, &cache, config, true)?;
    let params = candle_nn::ParamsAdamW {
        lr: args.learning_rate,
        ..Default::default()
    };
    let mut opt = candle_nn::AdamW::new(varmap.all_vars(), params)?;
    for (batch_index, batch) in batch_iter.enumerate() {
        let (inp, tgt) = batch?;
        let logits = model.forward(&inp, 0)?;
        let loss = candle_nn::loss::cross_entropy(&logits.flatten_to(1)?, &tgt.flatten_to(1)?)?;
        opt.backward_step(&loss)?;
        if batch_index > 0 && batch_index % 1000 == 0 {
            varmap.save("checkpoint.safetensors")?
        }
    }*/

    Ok(())
}
```

I realize this error is likely because I cannot use VarMap::load to load such a large safetensors file (as described [here](https://github.com/huggingface/safetensors/blob/main/README.md#benefits)). However, how can I use VarMap (or something else that allows me to modify the tensor map) to load the weights? If there is no such method, how should I implement this myself? Thank you! Eric
https://github.com/huggingface/candle/issues/894
closed
[]
2023-09-18T22:18:04Z
2023-09-21T10:05:57Z
null
EricLBuehler
huggingface/candle
891
How to do fine-tuning?
Hello everybody, I was looking through the Candle examples and cannot seem to find an example of fine-tuning for Llama. It appears the only example present is for training from scratch. How should I fine-tune a pretrained model on my own data? Or, more generally, how should I fine-tune a model that is loaded from a safetensors file (and whose VarBuilder is immutable as discussed in #883)? Thanks! Eric
https://github.com/huggingface/candle/issues/891
closed
[]
2023-09-18T18:37:42Z
2024-07-08T15:13:01Z
null
EricLBuehler
huggingface/transformers
26,218
How to manually set the seed of randomsampler generator when training using transformers trainer
### System Info I used a [script](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py) to continue pre-training the llama2 model. In the second epoch, the loss began to explode, so I chose to reload the checkpoint to continue training, but the loss changes were completely consistent with before, which made me suspect that the dataset iteration order is always the same. So I tried modifying the [seed.](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py#L309C33-L309C33) But in the end, my training loss is always identical, and the RandomSampler state I print is always the same. I hope someone can tell me how to solve this problem, including where the seed of this generator is specified. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction transformers==4.33.0 pytorch==1.13.1 accelerate==0.21.0 deepspeed==0.10.0 ### Expected behavior I expect the sampling order of the training dataset to be different on each run.
https://github.com/huggingface/transformers/issues/26218
closed
[]
2023-09-18T14:19:11Z
2023-11-20T08:05:37Z
null
young-chao
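One way to picture the determinism described in the record above: a `RandomSampler` built on a generator seeded with the same value yields the identical permutation on every run, so if the sampler's generator seed is derived somewhere other than the seed the user changed, the data order never varies. A hedged, pure-Python sketch using `random.Random` as a stand-in for the `torch.Generator` that backs the sampler:

```python
import random

def sample_order(seed, n):
    """Toy stand-in for a seeded RandomSampler: a deterministic permutation."""
    gen = random.Random(seed)  # analogous to torch.Generator().manual_seed(seed)
    order = list(range(n))
    gen.shuffle(order)
    return order

# Same seed -> identical iteration order on every run (what the issue observes).
assert sample_order(42, 20) == sample_order(42, 20)
# A different seed changes the order, which is what re-seeding should achieve
# if the new seed actually reaches the sampler's generator.
assert sample_order(42, 20) != sample_order(43, 20)
```

The practical takeaway is to verify which seed value the Trainer actually feeds into the sampler's generator, rather than assuming every seed argument flows there.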
huggingface/transformers.js
313
[Question] How to use remote models for automatic-speech-recognition
I have an html file that is ``` <!DOCTYPE html> <html> <body> <script type="module"> import { pipeline,env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.0'; env.allowLocalModels = false; const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en'); let output = await transcriber('https://xenova.github.io/transformers.js/audio/jfk.wav', { return_timestamps: true }) console.log(output) </script> </body> </html> ``` I'm just trying to load the model, but it seems to be requesting from local url rather than hugging face. How can I enable remote models?
https://github.com/huggingface/transformers.js/issues/313
closed
[ "question" ]
2023-09-18T04:56:52Z
2023-09-18T05:19:00Z
null
LehuyH
huggingface/candle
883
Question: How to properly use VarBuilder?
Hello everybody, I am working on implementing LoRA and want to use the VarBuilder system. However, when I try to get a tensor with get_with_hints, I get a CannotFindTensor Err. To create the Tensor, I do: ```rust vb.pp("a").get_with_hints( ...lora specific shape... "weight", ...lora specific hints... ) ``` However, this fails with the CannotFindTensor error. How can I create the Tensor, or perhaps am I using the API incorrectly? Thanks! Eric
https://github.com/huggingface/candle/issues/883
closed
[]
2023-09-17T20:40:27Z
2023-09-17T21:02:24Z
null
EricLBuehler
huggingface/transformers.js
310
How to load model from the static folder path in nextjs or react or vanilla js?
<!-- QUESTION GOES HERE -->
https://github.com/huggingface/transformers.js/issues/310
closed
[ "question" ]
2023-09-17T14:13:57Z
2023-09-27T08:36:29Z
null
adnankarim
huggingface/safetensors
360
The default file format used when loading the model?
I guess that huggingface loads .safetensors files by default when loading models. Is this mandatory? Can I choose to load files in .bin format? (Because I only downloaded weights in .bin format, and it reported an error “ could not find a file in safeTensor format”). I do not find related information in the docs. Thanks for your help.
https://github.com/huggingface/safetensors/issues/360
closed
[]
2023-09-15T14:56:13Z
2023-09-19T10:34:57Z
1
Kong-Aobo
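For the safetensors-vs-.bin record above: loaders in this ecosystem generally prefer `model.safetensors` when present and fall back to `pytorch_model.bin` otherwise; the quoted error usually means the safetensors format was explicitly requested. A minimal stdlib sketch of that preference logic — the file names mirror Hub conventions, and the `use_safetensors` flag is modeled on the `from_pretrained` parameter of the same name, so verify behavior against your installed transformers version:

```python
import os
import tempfile

SAFETENSORS_NAME = "model.safetensors"
PYTORCH_NAME = "pytorch_model.bin"

def pick_weight_file(model_dir, use_safetensors=None):
    """Sketch of the loader's preference: safetensors first, then .bin."""
    st = os.path.join(model_dir, SAFETENSORS_NAME)
    pt = os.path.join(model_dir, PYTORCH_NAME)
    if use_safetensors is True and not os.path.exists(st):
        raise FileNotFoundError("could not find a file in safetensors format")
    if use_safetensors is not False and os.path.exists(st):
        return st
    if os.path.exists(pt):
        return pt
    raise FileNotFoundError("no weight file found")

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, PYTORCH_NAME), "w").close()  # only .bin present
    # With no explicit request, the loader happily falls back to .bin.
    assert pick_weight_file(d).endswith(PYTORCH_NAME)
```

Under this model, the user's error corresponds to the `use_safetensors is True` branch, so the fix is to not force safetensors when only `.bin` weights were downloaded.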
huggingface/diffusers
5,055
How to download config.json if it is not in the root directory.
Is there any way to download vae for a model where config.json is not in the root directory? ```python vae = AutoencoderKL.from_pretrained("redstonehero/kl-f8-anime2") ``` For example, as shown above, there is no problem if config.json exists in the root directory, but if it does not exist, an error will occur. ```python vae = AutoencoderKL.from_pretrained("hakurei/waifu-diffusion") ``` I would be glad to get your advice.
https://github.com/huggingface/diffusers/issues/5055
closed
[]
2023-09-15T11:37:47Z
2023-09-16T00:15:58Z
null
suzukimain
huggingface/transformers.js
305
[Question] Can I work with Peft models through the API?
Let's say I have the following code in Python. How would I translate that to js? ```` import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "samwit/bloom-7b1-lora-tagger" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) ````
https://github.com/huggingface/transformers.js/issues/305
open
[ "question" ]
2023-09-14T21:02:59Z
2023-09-16T00:16:03Z
null
chrisfel-dev
huggingface/diffusers
5,042
How to give number of inference steps to Wuerstchen prior pipeline
**The code below works with the default DEFAULT_STAGE_C_TIMESTEPS, but it always generates with exactly 29 prior inference steps** ``` prior_output = prior_pipeline( prompt=prompt, height=height, width=width, num_inference_steps=prior_num_inference_steps, timesteps=DEFAULT_STAGE_C_TIMESTEPS, negative_prompt=negative_prompt, guidance_scale=prior_guidance_scale, num_images_per_prompt=num_images_per_prompt, generator=generator, callback=callback_prior, ) ``` When I make it like below, I get this error ``` prior_output = prior_pipeline( prompt=prompt, height=height, width=width, prior_num_inference_steps = prior_num_inference_steps, # timesteps=DEFAULT_STAGE_C_TIMESTEPS, negative_prompt=negative_prompt, guidance_scale=prior_guidance_scale, num_images_per_prompt=num_images_per_prompt, generator=generator, callback=callback_prior, ) ``` `TypeError: WuerstchenPriorPipeline.__call__() got an unexpected keyword argument 'prior_num_inference_steps'` But the documentation shows it??? https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen `prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 30) — The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. For more specific timestep spacing, you can pass customized prior_timesteps` @sayakpaul @dome272 @patrickvonplaten @williamberman **Here below is the entire code. What I want is being able to set any number of prior and decoder inference steps** ``` prior_output = prior_pipeline( prompt=prompt, height=height, width=width, prior_num_inference_steps = prior_num_inference_steps, # timesteps=DEFAULT_STAGE_C_TIMESTEPS, negative_prompt=negative_prompt, guidance_scale=prior_guidance_scale, num_images_per_prompt=num_images_per_prompt, generator=generator, callback=callback_prior, ) if PREVIEW_IMAGES: for _ in range(len(DEFAULT_STAGE_C_TIMESTEPS)): r = next(prior_output) if isinstance(r, list): yield r prior_output = r decoder_output = decoder_pipeline( image_embeddings=prior_output.image_embeddings, prompt=prompt, num_inference_steps = decoder_num_inference_steps, # timesteps=decoder_timesteps, guidance_scale=decoder_guidance_scale, negative_prompt=negative_prompt, generator=generator, output_type="pil", ).images yield decoder_output ```
https://github.com/huggingface/diffusers/issues/5042
closed
[ "bug" ]
2023-09-14T15:21:31Z
2023-09-20T07:41:19Z
null
FurkanGozukara
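The `TypeError` in the Wuerstchen record above comes from passing a keyword that the installed pipeline's `__call__` does not accept — documentation published from `main` can run ahead of the released package. A defensive, hedged pattern is to filter kwargs against the callable's actual signature before calling; `fake_pipeline_call` below is a hypothetical stand-in for the pipeline, not the diffusers API:

```python
import inspect

def accepted_kwargs(fn, kwargs):
    """Keep only the keyword arguments that fn's signature actually accepts."""
    params = inspect.signature(fn).parameters
    # If fn takes **kwargs, everything is accepted.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

def fake_pipeline_call(prompt, num_inference_steps=30):  # hypothetical signature
    return (prompt, num_inference_steps)

kwargs = {"num_inference_steps": 60, "prior_num_inference_steps": 60}
out = fake_pipeline_call("a cat", **accepted_kwargs(fake_pipeline_call, kwargs))
assert out == ("a cat", 60)  # the unsupported kwarg was dropped instead of raising
```

Printing `inspect.signature(prior_pipeline.__call__)` against the installed version is also a quick way to see which step-count parameter it really exposes.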
huggingface/chat-ui
440
Web Search not working
I have been having this issue where it searches something but then never shows me the answer; it just shows max tokens. First I see the links of the sources, but then it does nothing at all. ![image](https://github.com/huggingface/chat-ui/assets/108006611/6eefb6a4-426e-408c-85bb-1106161fd481) I just see this and do not even get the model response.
https://github.com/huggingface/chat-ui/issues/440
closed
[ "support" ]
2023-09-14T13:50:15Z
2023-09-20T14:16:49Z
5
bilalazhar72
huggingface/chat-ui
438
running the app with websearch fails
Hey after adding the serper api key I'm trying to run the app locally "npm run dev" and I get an issue related to websearch: ``` [vite]: Rollup failed to resolve import "@xenova/transformers" from "C:/Users/username/chat-ui/src/lib/server/websearch/sentenceSimilarity.ts". This is most likely unintended because it can break your application at runtime. If you do want to externalize this module explicitly add it to `build.rollupOptions.external` error during build: Error: [vite]: Rollup failed to resolve import "@xenova/transformers" from "C:/Users/username/chat-ui/src/lib/server/websearch/sentenceSimilarity.ts". This is most likely unintended because it can break your application at runtime. If you do want to externalize this module explicitly add it to `build.rollupOptions.external` at viteWarn (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:48142:27) at onRollupWarning (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:48174:9) at onwarn (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:47902:13) at file:///C:/Users/username/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24152:13 at Object.logger [as onLog] (file:///C:/Users/username/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:25825:9) at ModuleLoader.handleInvalidResolvedId (file:///C:/Users/rachel_shalom/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24738:26) at file:///C:/Usersusername/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24698:26 ``` How do I externalize this module, and should I? Has anyone had this issue?
https://github.com/huggingface/chat-ui/issues/438
closed
[ "support" ]
2023-09-14T11:21:35Z
2023-09-14T12:08:00Z
2
RachelShalom
huggingface/diffusers
5,032
How to unfuse_lora only the first one after I have added multiple lora?
base.load_lora_weights("models/safetensors/SDXL/国风插画SDXL.safetensors") base.fuse_lora(lora_scale=.7) base.load_lora_weights("models/safetensors/SDXL/sd_xl_offset_example-lora_1.0.safetensors") base.fuse_lora(lora_scale=.8) Now, when I execute unfuse_lora(), only the most recent one has been unfused. So, how do I unfuse '国风插画SDXL.safetensors', or unfuse all LoRA weights?
https://github.com/huggingface/diffusers/issues/5032
closed
[ "stale" ]
2023-09-14T08:10:46Z
2023-10-30T15:06:34Z
null
yanchaoguo
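Some context on the LoRA record above: fusing adds `scale * ΔW` into the base weights in place, so undoing a specific fuse requires knowing exactly which delta (and scale) was added — unfusing only the first of two fused LoRAs means tracking each contribution separately. A toy scalar sketch of that bookkeeping (this is an illustration of the arithmetic, not the diffusers implementation):

```python
# Toy scalar model of fused LoRA weights: w = w0 + sum(scale_i * delta_i).
class ToyWeights:
    def __init__(self, w0):
        self.w = w0
        self.fused = []  # stack of (scale, delta) so each fuse can be undone

    def fuse(self, delta, scale):
        self.w += scale * delta
        self.fused.append((scale, delta))

    def unfuse_last(self):
        scale, delta = self.fused.pop()
        self.w -= scale * delta

    def unfuse_all(self):
        while self.fused:
            self.unfuse_last()

m = ToyWeights(1.0)
m.fuse(delta=0.5, scale=0.7)   # first LoRA
m.fuse(delta=0.25, scale=0.8)  # second LoRA
m.unfuse_all()
assert abs(m.w - 1.0) < 1e-9   # back to the base weights
```

If the library only remembers the most recent fuse, the practical workaround is to unfuse everything and re-fuse only the LoRAs you still want.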
huggingface/optimum
1,384
Documentation Request: Table or heuristic for Ortmodel Method to Encoder/Decoder to .onnx File to Task
### Feature request Hi there Could you provide either a table (where explicit rules apply - see attached image), or a heuristic, so I can tell which ML models, optimised file types, with which tasks, apply to which inference methods and inference tasks? The example table below will help to clarify, and isn't necessarily prescriptive, because I may have mixed some concepts. In case you mention, yes - I'm aware that it's possible to run a pipeline with the wrong model, and an error message will spit out all the accepted architectures/models (roberta, gpt, etc) for a method type. However, a) this is very time-consuming, hit and miss, and b) these 'lists' don't explain the relationships to the underlying architectures and files.. (ie. model_merged, encoder-decoder, encoder only, decoder only, that result from the pytorch, safetensor files.) For example, will all models exported/optimised for text-generation always be encoder-decoder and always use the ORTSeq2SeqModel method (for illustrative purposes), or will this depend on a combination of the original model architecture and the task applied during optimisation, which may result in one or more usable methods for inference? It's a massive learning curve for me, but seems it would be relatively straightforward to someone who works with this stuff. It probably just needs to go from peoples' heads into a document. Thanks muchly! it'll be a massive time saver and help with conceptual understanding. ### Motivation I'm trying to understand how to mix and match the models, optimisations, tasks, and inference methods.. Been trawling HF, ONNX, and general information but cannot find anything like this that exists, and would save a BUNCH of testing trial and error time. (like I've wasted directly and indirectly almost a week of trialling and there's probably very simple rules for this) Part of the time wasted has been selecting models and running CLI command to optimise/quantize for a task, only to discover I have no idea which ORTModel method to use, as these don't relate to task but model architecture instead (or a combination of both), and brute forcing an understanding with testing and trying to come up with my own heuristics. Maybe this type of knowledge is assumed? But for newbs like me it's extremely daunting and feels like I may be trying to re-invent the wheel. ### Your contribution (table for illustrative purposes.. the dummy data is wrong.. ) ![method-task-model-llm-matrix](https://github.com/huggingface/optimum/assets/83053994/d25adf44-8cff-4a63-a5c7-312636f1dbaf)
https://github.com/huggingface/optimum/issues/1384
closed
[ "Stale" ]
2023-09-14T01:45:38Z
2025-04-24T02:11:24Z
4
gidzr
huggingface/optimum
1,379
Can't use bettertransformer to train vit?
### System Info ```shell Traceback (most recent call last): File "test_bettertransformer_vit.py", line 95, in <module> main() File "test_bettertransformer_vit.py", line 92, in main test_train_time() File "test_bettertransformer_vit.py", line 86, in test_train_time out_vit = model(pixel_values).last_hidden_state File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/vit/modeling_vit.py", line 587, in forward encoder_outputs = self.encoder( File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/vit/modeling_vit.py", line 413, in forward layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/root/.local/lib/python3.8/site-packages/optimum/bettertransformer/models/encoder_models.py", line 1186, in forward raise NotImplementedError( NotImplementedError: Training and Autocast are not implemented for BetterTransformer + ViT. Please open an issue. ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) def test_train_time(): model = ViTModel.from_pretrained(model_pth).to('cuda') processor = ViTImageProcessor.from_pretrained(model_pth) pixel_values=clip_process(processor, pic_pth).cuda() if args.flash: model = model.to_bettertransformer() model.train() begin_time = time.time() for i in range(args.nums): out_vit = model(pixel_values).last_hidden_state print('use flash: {}, train vit time {:.2f}'.format(args.flash, time.time() - begin_time)) ### Expected behavior none
https://github.com/huggingface/optimum/issues/1379
closed
[ "bug" ]
2023-09-13T12:49:53Z
2025-02-20T08:38:26Z
1
lijiaoyang
huggingface/text-generation-inference
1,015
How to run text-generation-benchmark with a local tokenizer
The command I run in Docker is ``` text-generation-benchmark --tokenizer-name /data/checkpoint-5600/ ``` The error log is ``` 2023-09-12T11:22:01.245495Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer 2023-09-12T11:22:01.245966Z INFO text_generation_benchmark: benchmark/src/main.rs:141: Downloading tokenizer 2023-09-12T11:22:31.270784Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 1957 milliseconds... 2023-09-12T11:23:03.228297Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 2202 milliseconds... 2023-09-12T11:23:35.430766Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 4671 milliseconds... 2023-09-12T11:24:10.102170Z ERROR cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:555: Max retries exceeded for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Model \"/data/checkpoint-5600/\" on the Hub doesn't have a tokenizer"', benchmark/src/main.rs:153:78 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace Aborted (core dumped) ``` I notice `Downloading tokenizer` in the error log, which I find very strange because `/data/checkpoint-5600/` is my local model path. So I found the src code as follows: https://github.com/huggingface/text-generation-inference/blob/1f69fb9ed4fb91fe0bb9b94edda5729c67e6f02a/benchmark/src/main.rs#L134-L154 But I notice that there is only `tokenizer_config.json` in my local model path and no `tokenizer.json`. And I see that it is the same as the hub model, for example https://huggingface.co/openlm-research/open_llama_7b_v2/tree/main Then I tried to bypass this by renaming `tokenizer_config.json` to `tokenizer.json` in my local model path, but it still doesn't work: ``` 2023-09-12T11:29:52.461487Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer 2023-09-12T11:29:52.462513Z INFO text_generation_benchmark: benchmark/src/main.rs:138: Found local tokenizer thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error("expected `,` or `}`", line: 2, column: 18)', benchmark/src/main.rs:139:69 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace Aborted (core dumped) ``` Finally, I want to know: are the `tokenizer_config.json` and `tokenizer.json` referred to here the same thing?
https://github.com/huggingface/text-generation-inference/issues/1015
closed
[ "Stale" ]
2023-09-12T12:10:41Z
2024-06-07T09:39:32Z
null
jessiewiswjc
huggingface/autotrain-advanced
260
How to create instruction dataset (Q&A) for fine-tuning from PDFs?
https://github.com/huggingface/autotrain-advanced/issues/260
closed
[]
2023-09-12T02:54:07Z
2023-12-18T15:31:13Z
null
mahimairaja
huggingface/transformers.js
295
[Question] Issue with deploying model to Vercel using NextJS and tRPC
Hi I'm trying to deploy my model to Vercel via NextJS and tRPC and have the .cache folder generated using the postinstall script ``` // @ts-check let fs = require("fs-extra"); let path = require("path"); async function copyXenovaToLocalModules() { const paths = [["../../../node_modules/@xenova", "../node_modules/@xenova"]]; for (const pathTuple of paths) { const [src, dest] = [ path.join(__dirname, pathTuple[0]), path.join(__dirname, pathTuple[1]), ]; await fs.remove(dest).catch(() => {}); await fs.copy(src, dest).catch(() => {}); // Create .cache folder for dest paths const cacheDir = path.join(dest, "transformers", ".cache"); await fs.mkdir(cacheDir).catch(() => {}); } } copyXenovaToLocalModules(); ``` When I run this, I get the following error: ``` env { backends: { onnx: { wasm: [Object], webgl: {}, logLevelInternal: 'warning' }, tfjs: {} }, __dirname: '/vercel/path0/packages/api/node_modules/@xenova/transformers', version: '2.5.4', allowRemoteModels: true, remoteHost: 'https://huggingface.co/', remotePathTemplate: '{model}/resolve/{revision}/', allowLocalModels: true, localModelPath: '/vercel/path0/packages/api/node_modules/@xenova/transformers/models/', useFS: true, useBrowserCache: false, useFSCache: true, cacheDir: '/vercel/path0/packages/api/node_modules/@xenova/transformers/.cache/', useCustomCache: false, customCache: null } An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] { errno: -2, code: 'ENOENT', syscall: 'mkdir', path: '/vercel' } An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] { errno: -2, code: 'ENOENT', syscall: 'mkdir', path: '/vercel' } An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] { errno: -2, code: 'ENOENT', syscall: 'mkdir', path: '/vercel' } ``` Can someone help me with this?
https://github.com/huggingface/transformers.js/issues/295
closed
[ "question" ]
2023-09-11T11:13:11Z
2023-09-12T15:23:17Z
null
arnabtarwani
huggingface/transformers.js
291
[Question] Using transformers.js inside an Obsidian Plugin
I'm trying to run transformers.js inside of Obsidian but running into some errors: <img width="698" alt="Screenshot 2023-09-10 at 3 05 43 PM" src="https://github.com/xenova/transformers.js/assets/11430621/a6b4b83e-6a1e-44bb-9a46-c3966d058146"> This code is triggering the issues: ```js class MyClassificationPipeline { static task = "text-classification"; static model = "Xenova/distilbert-base-uncased-finetuned-sst-2-english"; static instance = null; static async getInstance(progress_callback = null) { if (this.instance === null) { // Dynamically import the Transformers.js library console.log('before import') let { pipeline, env } = await import("@xenova/transformers"); console.log('after import') // NOTE: Uncomment this to change the cache directory // env.cacheDir = './.cache'; this.instance = pipeline(this.task, this.model, { progress_callback, }); } return this.instance; } } export default MyClassificationPipeline; // Comment out this line if you don't want to start loading the model as soon as the server starts. // If commented out, the model will be loaded when the first request is received (i.e,. lazily). // MyClassificationPipeline.getInstance(); ``` [Link to source](https://github.com/different-ai/obsidian-ml/blob/master/embeddings.js) [These are the lines that are calling the code above](https://github.com/different-ai/obsidian-ml/blob/0bd169c6e0c3f385e7238a78c585932fe0320bc9/hello.js#L27-L29) Context about Obsidian plugins: - An Obsidian plugin is just a single imported js file. - Most of the time it's bundled using esbuild. In my case, this is [my esbuild setup](https://github.com/different-ai/obsidian-ml/blob/master/esbuild.config.mjs) ---- How should I be tackling this, and what would be the recommended way to bundle transformers.js?
https://github.com/huggingface/transformers.js/issues/291
open
[ "question" ]
2023-09-10T22:12:07Z
2024-04-30T13:52:06Z
null
benjaminshafii
huggingface/candle
807
How to use the kv_cache?
Hi, how would I use the kv_cache? Let's say I want a chat-like application; how would I save the kv_cache and load it so that all the tokens won't have to be computed again?
https://github.com/huggingface/candle/issues/807
closed
[]
2023-09-10T21:39:31Z
2025-11-22T23:18:58Z
null
soupslurpr
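For the kv_cache question above, the core idea is that the cache holds each layer's keys and values for all past tokens, so each generation step only processes the newest token; persisting a chat therefore means saving those per-layer tensors (plus the current sequence position) and reloading them before the next turn. A toy sketch of the data structure, with fake token ids and fake per-token k/v values standing in for real tensors:

```python
class ToyKVCache:
    """Minimal stand-in for a transformer KV cache: one (k, v) list per layer."""
    def __init__(self, n_layers):
        self.keys = [[] for _ in range(n_layers)]
        self.values = [[] for _ in range(n_layers)]

    def append(self, layer, k, v):
        self.keys[layer].append(k)
        self.values[layer].append(v)

    def seq_len(self):
        return len(self.keys[0])

cache = ToyKVCache(n_layers=2)
for token in [101, 7592, 2088]:          # "process" a 3-token prompt once
    for layer in range(2):
        cache.append(layer, k=token, v=token * 2)  # fake per-token k/v

# A follow-up turn only appends the new token's k/v instead of recomputing all 3.
for layer in range(2):
    cache.append(layer, k=999, v=1998)
assert cache.seq_len() == 4
```

In a real model the lists would be tensors of shape roughly (batch, heads, seq_len, head_dim) per layer; saving a conversation amounts to serializing those tensors and restoring them as the cache before continuing.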
huggingface/transformers
26,061
How to perform batch inference?
### Feature request I want to pass a list of texts to model.generate. text = "hey there" inputs = tokenizer(text, return_tensors="pt").to(0) out = model.generate(**inputs, max_new_tokens=184) print(tokenizer.decode(out[0], skip_special_tokens=True)) ### Motivation I want to do batch inference. ### Your contribution Testing
https://github.com/huggingface/transformers/issues/26061
closed
[]
2023-09-08T20:59:37Z
2023-10-23T16:04:20Z
null
ryanshrott
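For the batch-inference record above: batched `generate` needs all sequences padded to one length with a matching attention mask, and decoder-only models are usually left-padded so generation continues from real tokens (in transformers this is roughly `tokenizer(texts, padding=True, return_tensors="pt")` after setting `tokenizer.padding_side = "left"` — verify against your installed version). The padding step itself, sketched in pure Python with a hypothetical pad token id:

```python
PAD_ID = 0  # hypothetical pad token id

def left_pad(batch_ids):
    """Left-pad token-id sequences to equal length and build attention masks."""
    max_len = max(len(seq) for seq in batch_ids)
    input_ids, attention_mask = [], []
    for seq in batch_ids:
        pad = max_len - len(seq)
        input_ids.append([PAD_ID] * pad + seq)
        attention_mask.append([0] * pad + [1] * len(seq))
    return input_ids, attention_mask

ids, mask = left_pad([[5, 6, 7], [8]])
assert ids == [[5, 6, 7], [0, 0, 8]]
assert mask == [[1, 1, 1], [0, 0, 1]]
```

Once padded like this, the whole batch can go through a single `generate` call, and each row's output is decoded independently.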
huggingface/text-generation-inference
998
How to insert a custom stop symbol, like </s>?
### Feature request nothing ### Motivation nothing ### Your contribution nothing
https://github.com/huggingface/text-generation-inference/issues/998
closed
[]
2023-09-08T07:06:08Z
2023-09-08T07:13:38Z
null
babytdream
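On the stop-symbol record above: text-generation-inference exposes stop sequences in its generate parameters (the `stop` field, as I understand the API), and their effect is to end generation and truncate at the first match. The client-side equivalent of that truncation as a small sketch:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = truncate_at_stop("Hello world</s> trailing junk", ["</s>"])
assert out == "Hello world"
```

Server-side stopping is preferable when available since it also saves compute, but this pattern is a safe fallback for any generation API.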
huggingface/safetensors
355
Safe tensors cannot be easily freed!
### System Info Hi, I am using the safetensors for loading Falcon-180B model. I am loading the ckpts one by one on CPU, and then try to remove the tensors by simply calling `del` function. However, I am seeing that CPU memory keeps increasing until it runs out of memory and system crashes (I am also calling `gc.collect()` after deleting tensors). Is there any good way to release the safetensor memory. Thanks, Reza ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Reproduction ``` from safetensors.torch import load_file sd_ = load_file(ckpt_path) lens = len(sd_.keys()) for _ in range(lens): data = sd_.popitem() del data del sd_ gc.collect() ``` ### Expected behavior release the memory after calling `gc.collect()`
https://github.com/huggingface/safetensors/issues/355
closed
[ "Stale" ]
2023-09-07T22:13:15Z
2024-08-30T10:22:01Z
4
RezaYazdaniAminabadi
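Context for the safetensors record above: `load_file` memory-maps the file under the hood, so much of the apparent CPU-memory growth is shared page cache that the OS reclaims under pressure, and `del` plus `gc.collect()` will not shrink reported RSS the way one might expect (the interpreter may also hold freed memory in its allocator rather than return it to the OS). A stdlib sketch of the mmap lifecycle the loader relies on:

```python
import mmap
import os
import tempfile

# Memory-map a file the way safetensors does under the hood (sketch): the pages
# are shared with the OS page cache, so RSS growth while reading is mostly
# cache, and closing the map (or dropping every tensor that views it) is what
# releases the mapping.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_page = mapped[:16]   # slicing copies bytes out of the mapping
    mapped.close()             # mapping released; the copied bytes survive

assert first_page == b"\x00" * 16
os.remove(path)
```

So when iterating over many checkpoints, the thing to verify is that no tensor (or slice of one) still references the mapping; a lingering view keeps the whole map alive.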
huggingface/transformers.js
285
The generate API always returns the same number of tokens as output no matter what min_tokens is
Here is the code I am trying ```js import { pipeline } from '@xenova/transformers'; import { env } from '@xenova/transformers'; let generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M'); let output = await generator('write a blog on Kubernetes?', { max_new_tokens: 512,min_new_tokens:512,min_length:300 }); console.log(output) ``` So no matter what min_new_tokens or min_length is (even if I try only one of them), the output remains the same length.
https://github.com/huggingface/transformers.js/issues/285
closed
[ "bug" ]
2023-09-07T13:30:39Z
2023-09-17T21:57:14Z
null
allthingssecurity
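On the min-tokens record above: `min_new_tokens` works by suppressing the end-of-sequence token until the minimum count has been emitted, so if the output length never changes the option is likely not reaching the generation loop at all. A toy greedy loop showing the mechanism (token ids here are fake placeholders):

```python
EOS = -1

def toy_generate(next_token_fn, max_new_tokens, min_new_tokens=0):
    """Greedy loop sketch: EOS is suppressed until min_new_tokens are emitted."""
    out = []
    for step in range(max_new_tokens):
        token = next_token_fn(step)
        if token == EOS:
            if len(out) < min_new_tokens:
                token = 0  # EOS masked out; fall back to another token
            else:
                break
        out.append(token)
    return out

# A "model" that wants to stop after 2 tokens:
model = lambda step: EOS if step >= 2 else step + 10

assert len(toy_generate(model, max_new_tokens=8)) == 2          # stops early
assert len(toy_generate(model, max_new_tokens=8, min_new_tokens=5)) == 5
```

If the real library honors the option, raising `min_new_tokens` must lengthen output the same way; an unchanged length suggests the parameter name is being silently ignored by that version.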
huggingface/chat-ui
430
Server does not support event stream content error for custom endpoints
Has anyone faced the issue "Server does not support event stream content" when parsing custom endpoint results? What is the solution for this error? To reproduce the issue: user enters a prompt saying "how are you" -> call goes to custom endpoint -> endpoint returns response as a string -> error pops up: "Server does not support event stream content"
https://github.com/huggingface/chat-ui/issues/430
closed
[]
2023-09-07T10:01:18Z
2023-09-15T00:01:56Z
3
nandhaece07
huggingface/sentence-transformers
2,300
How to convert an embedding vector to text?
I use the script below to convert text to embeddings ``` model = SentenceTransformer('all-MiniLM-L6-v2') embeddings = model.encode(text) ``` But how do I convert embeddings back to text?
https://github.com/huggingface/sentence-transformers/issues/2300
closed
[]
2023-09-07T09:19:22Z
2025-09-01T11:44:34Z
null
chengzhen123
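For the record above: sentence embeddings are lossy projections, so there is no decode step back to the original text; the practical pattern is to keep the source texts and recover the closest one by cosine similarity. A pure-Python sketch, where the two-dimensional vectors stand in for real `model.encode` outputs:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_text(query_emb, corpus_embs, corpus_texts):
    """Recover text for an embedding by nearest-neighbor lookup, not decoding."""
    scores = [cosine(query_emb, e) for e in corpus_embs]
    return corpus_texts[scores.index(max(scores))]

texts = ["a cat", "a dog"]
embs = [[1.0, 0.1], [0.1, 1.0]]  # stand-ins for model.encode(texts)
assert nearest_text([0.9, 0.2], embs, texts) == "a cat"
```

This is exactly the semantic-search use the library is designed for; true inversion of an embedding is an open research problem, not an API call.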
huggingface/transformers.js
283
[Question] Model type for tt/ee not found, assuming encoder-only architecture
Reporting this as requested by the warning message, but as a question because I'm not entirely sure if it's a bug: ![image](https://github.com/xenova/transformers.js/assets/1167575/f40d5935-01b4-442e-802b-ed5fd7a774b7) Here's the code I ran: ```js let quantized = false; // change to `true` for a much smaller model (e.g. 87mb vs 345mb for image model), but lower accuracy let { AutoProcessor, CLIPVisionModelWithProjection, RawImage, AutoTokenizer, CLIPTextModelWithProjection } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4/dist/transformers.min.js'); let imageProcessor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16'); let visionModel = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16', {quantized}); let tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16'); let textModel = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16', {quantized}); function cosineSimilarity(A, B) { if(A.length !== B.length) throw new Error("A.length !== B.length"); let dotProduct = 0, mA = 0, mB = 0; for(let i = 0; i < A.length; i++){ dotProduct += A[i] * B[i]; mA += A[i] * A[i]; mB += B[i] * B[i]; } mA = Math.sqrt(mA); mB = Math.sqrt(mB); let similarity = dotProduct / (mA * mB); return similarity; } // get image embedding: let image = await RawImage.read('https://i.imgur.com/RKsLoNB.png'); let imageInputs = await imageProcessor(image); let { image_embeds } = await visionModel(imageInputs); console.log(image_embeds.data); // get text embedding: let texts = ['a photo of an astronaut']; let textInputs = tokenizer(texts, { padding: true, truncation: true }); let { text_embeds } = await textModel(textInputs); console.log(text_embeds.data); let similarity = cosineSimilarity(image_embeds.data, text_embeds.data); console.log(similarity); ```
https://github.com/huggingface/transformers.js/issues/283
closed
[ "question" ]
2023-09-07T05:01:34Z
2023-09-08T13:17:07Z
null
josephrocca
huggingface/safetensors
354
Is it possible to append to tensors along a primary axis?
### Feature request It would be really cool to be able to append to a safetensors file so you can continue to add data along, say, a batch dimension ### Motivation For logging data during training runs that can be visualized from an external tool. Something like a live application that lazily loads the saved data. This is super useful for reinforcement learning ### Your contribution I could submit a PR if necessary.
https://github.com/huggingface/safetensors/issues/354
closed
[ "Stale" ]
2023-09-06T17:54:56Z
2023-12-11T01:48:44Z
2
verbiiyo
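On the append-to-safetensors request above: the format writes its complete header up front, so in-place append is not straightforward; a common workaround is one shard per logging step, concatenated lazily on read. A stdlib sketch of that pattern, using JSON shards as stand-ins for hypothetical per-step `.safetensors` files:

```python
import json
import os
import tempfile

def log_step(log_dir, step, batch):
    """Write one shard per training step; the file format itself never mutates."""
    path = os.path.join(log_dir, f"step_{step:06d}.json")  # stand-in for .safetensors
    with open(path, "w") as f:
        json.dump(batch, f)

def read_all(log_dir):
    """Concatenate shards along the leading (batch) axis, in step order."""
    rows = []
    for name in sorted(os.listdir(log_dir)):
        with open(os.path.join(log_dir, name)) as f:
            rows.extend(json.load(f))
    return rows

with tempfile.TemporaryDirectory() as d:
    log_step(d, 0, [[1.0, 2.0]])
    log_step(d, 1, [[3.0, 4.0]])
    assert read_all(d) == [[1.0, 2.0], [3.0, 4.0]]
```

The zero-padded step number keeps lexicographic directory order equal to step order, and a viewer can load shards lazily as new ones appear.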
huggingface/huggingface_hub
1,643
We couldn't connect to 'https://huggingface.co/' to load this model and it looks like distilbert-base-uncased is not the path to a directory conaining a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
### System Info Hello, I have been using hugging face transformers with a lot of success. I have been able to create many successful fine-tuned pre-trained text classification models using various HF transformers and have been using HF integration with SageMaker in a SageMaker conda_pytorch_310 notebook. My code looks like this: ```!pip install "transformers==4.17.0" "datasets[s3]==1.18.4" --upgrade``` ``` tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)``` Yesterday I was able to successfully download, fine tune and make inferences using distilbert-base-uncased, and today I am getting: ```OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like mattmdjaga/segformer_b2_clothes is not the path to a directory conaining a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.``` Looking through the traceback I see: ```HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/mattmdjaga/segformer_b2_clothes/resolve/main/config.json During handling of the above exception, another exception occurred:``` .... ```File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/file_utils.py:2052, in _raise_for_status(request) 2050 raise RevisionNotFoundError((f"404 Client Error: Revision Not Found for url: {request.url}"))-> 2052 request.raise_for_status()``` I have tried many different models, both text classification and non-text classification, and I am getting the same error. This worked yesterday and nothing has changed since then. I also have confirmed that nothing has changed on our end to cause this error, and confirmed all the model names. Any insights would be appreciated! @Wauplin ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) ### Expected behavior model successfully downloads
https://github.com/huggingface/huggingface_hub/issues/1643
closed
[]
2023-09-06T17:18:45Z
2023-09-07T15:51:12Z
null
a-rhodes-vcu
huggingface/setfit
417
Passing multiple evaluation metrics to SetFitTrainer
Hi there, after reading the docs I find that one can easily get the f1 score or accuracy by passing the respective string as the `metric` argument to the trainer. However, how can I get both or even other metrics, such as f1_per_class? Thanks :)
https://github.com/huggingface/setfit/issues/417
closed
[ "question" ]
2023-09-06T11:38:08Z
2023-11-24T13:31:08Z
null
fhamborg
huggingface/optimum
1,357
[RFC] MusicGen `.to_bettertransformer()` integration
### Feature request Add support for MusicGen Better Transformer integration. MusicGen is composed of three sub-models: 1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5 2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations. The pre-trained MusicGen models use the BART decoder structure 3. Audio codec: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder. The pre-trained MusicGen models use the [EnCodec model](https://huggingface.co/docs/transformers/main/model_doc/encodec) => the text encoder uses the T5 attention module, and the MusicGen decoder uses the BART attention module. Thus, there are no extra attention layers we need to add to optimum. The audio codec is not transformer based, so we don't need to export it to better transformer. The question is simply how to get the integration working with the sub-model structure. The config file for MusicGen is nested in the same way as the model structure, containing sub-configs for each of the three components: https://huggingface.co/docs/transformers/main/model_doc/musicgen#transformers.MusicgenConfig => this means that the text encoder config is accessed as `config.text_encoder`, and the text encoder model as `model.text_encoder`. Likewise, the MusicGen decoder config is accessed as `config.decoder`, and the decoder model as `model.decoder`. We need to export the pairs of {models, configs} to their better transformer counterparts, e.g. {`model.text_encoder`, `config.text_encoder`} -> `better_transformer_text_encoder`, and {`model.decoder`, `config.decoder`} -> `better_transformer_decoder`. 
Ideally, we'd like to be able to export the entire model to better transformer in one go: ```python from transformers import MusicgenForConditionalGeneration model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") model = model.to_bettertransformer() ``` However, we can't simply export {`model`, `config`} like this, since the top-level config does not contain the config attributes for the sub-models. It's just a place-holder for the sub-model configs. A simple workaround is to export the text encoder and decoder separately: ```python from transformers import MusicgenForConditionalGeneration model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") model.text_encoder = model.text_encoder.to_bettertransformer() model.decoder = model.decoder.to_bettertransformer() ``` => but this diverges from the better transformer API ### Motivation ~9M MusicGen [downloads](https://huggingface.co/models?search=facebook/musicgen) per month -> huge interest in running the model! ### Your contribution Happy to help with the integration!
https://github.com/huggingface/optimum/issues/1357
closed
[]
2023-09-06T10:25:50Z
2024-01-10T17:31:44Z
1
sanchit-gandhi
huggingface/diffusers
4,906
How to check whether the image is flagged as inappropriate automated?
Is there a way to know whether the generated image (without seeing it) was flagged as inappropriate?
https://github.com/huggingface/diffusers/issues/4906
closed
[]
2023-09-05T17:51:07Z
2023-09-07T05:49:46Z
null
sarmientoj24
huggingface/diffusers
4,905
How to convert pretrained SDXL .safetensors model to diffusers folder format
As SDXL is gaining adoption, more and more community-based models pop up that are just saved as a .safetensors file. E.g. the popular Realistic Vision: https://civitai.com/models/139562?modelVersionId=154590 When running train_dreambooth_lora_sdxl.py, the training script expects the diffusers folder format to accelerate the text encoder, unet etc. As far as I know, there is no possible way to use `StableDiffusionXLPipeline.from_single_file()` to do the same. Is there a way to convert an SDXL 1.0 fine-tuned .safetensors file to the diffusers folder format? I found scripts/convert_lora_safetensor_to_diffusers.py, but it doesn't seem to be applicable to SDXL.
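One plausible conversion path is to load the single-file checkpoint and immediately re-save it with `save_pretrained`, which writes the multi-folder diffusers layout. This is a hedged sketch, wrapped in a function so nothing heavy runs at import time; the checkpoint path and output folder below are placeholder names, and it assumes `from_single_file()` can parse the checkpoint in question:

```python
def convert_sdxl_single_file(src: str, dst: str) -> None:
    """Sketch: load a single .safetensors SDXL checkpoint and re-save it
    in the diffusers folder layout (unet/, vae/, text_encoder/, ...)."""
    import torch
    from diffusers import StableDiffusionXLPipeline

    # from_single_file parses the monolithic checkpoint into pipeline components.
    pipe = StableDiffusionXLPipeline.from_single_file(src, torch_dtype=torch.float16)
    # save_pretrained writes the folder format that the training scripts expect.
    pipe.save_pretrained(dst)

# Hypothetical usage:
# convert_sdxl_single_file("realisticVision.safetensors", "realistic-vision-diffusers")
```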
https://github.com/huggingface/diffusers/issues/4905
closed
[]
2023-09-05T17:01:27Z
2023-09-06T09:55:54Z
null
agcty
huggingface/transformers.js
280
[Question] How to run multiple pipelines or multiple models?
<!-- QUESTION GOES HERE --> I am trying to transcribe from an audio source and need to do multi-language translation. I tried transcribing using Xenova/whisper- and then feeding the text input into the "Xenova/m2m100_418M" model, but with multiple pipelines it failed. Is there any way to achieve this?
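Running two pipelines back to back should work in principle; here is a rough sketch of chaining speech recognition into translation. The model ids and `src_lang`/`tgt_lang` codes are assumptions, and the library import is deferred so the function can be defined without loading anything:

```javascript
// Sketch: transcribe audio, then translate the resulting text.
async function transcribeAndTranslate(audio, tgtLang = 'fr') {
  // Deferred import keeps the function definition lightweight.
  const { pipeline } = await import('@xenova/transformers');
  const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny');
  const translator = await pipeline('translation', 'Xenova/m2m100_418M');

  const { text } = await transcriber(audio);
  return translator(text, { src_lang: 'en', tgt_lang: tgtLang });
}
```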
https://github.com/huggingface/transformers.js/issues/280
closed
[ "question" ]
2023-09-05T11:33:44Z
2023-11-01T11:32:15Z
null
sundarshahi
huggingface/optimum
1,346
BetterTransformer Support for the GPTBigCode model
### Feature request Is it possible to support GPTBigCode with BetterTransformer? https://huggingface.co/docs/transformers/model_doc/gpt_bigcode ### Motivation A very popular decoder model for code. ### Your contribution Hope you can achieve it. Thanks.
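If support lands, the usual entry point would presumably be the generic optimum API rather than anything GPTBigCode-specific; a hedged sketch (the model id is an example, and `transform` raises if the architecture is not yet supported):

```python
def load_gpt_bigcode_bettertransformer(model_id: str = "bigcode/gpt_bigcode-santacoder"):
    """Sketch: convert a GPTBigCode checkpoint with optimum's generic
    BetterTransformer API. Works only once the architecture is supported."""
    from transformers import AutoModelForCausalLM
    from optimum.bettertransformer import BetterTransformer

    model = AutoModelForCausalLM.from_pretrained(model_id)
    # transform() swaps in fused attention layers; raises for unsupported models.
    return BetterTransformer.transform(model)
```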
https://github.com/huggingface/optimum/issues/1346
closed
[]
2023-09-04T16:52:56Z
2023-09-08T14:51:17Z
5
amarazad
huggingface/chat-ui
426
`stream` is not supported for this model
Hello Experts, I am trying to run https://github.com/huggingface/chat-ui by providing models like EleutherAI/pythia-1b and gpt2-large. With all these models, there is this consistent error: {"error":["Error in `stream`: `stream` is not supported for this model"]} Although I can see that the hosted inference API for these models works well from their Hugging Face pages, like this: https://huggingface.co/gpt2-large Could someone please help?
https://github.com/huggingface/chat-ui/issues/426
open
[ "question", "models" ]
2023-09-02T05:30:47Z
2023-12-24T16:39:21Z
null
newUserForTesting
huggingface/diffusers
4,871
How to run "StableDiffusionXLPipeline.from_single_file"?
I get an error when I run the following code; it fails on the line "pipe = StableDiffusionXLPipeline." How can I solve it? Notes: I don't have a model refiner, I just want to run a model with Diffusers XL ``` from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline import torch pipe = StableDiffusionXLPipeline.from_single_file( "/content/model/model.safetensors", torch_dtype=torch.float16).to("cuda") image = pipe( prompt, negative_prompt=negative_prompt, width=Width, height=Height, guidance_scale=7, target_size=(1024,1024), original_size=(4096,4096), num_inference_steps=25 ).images[0] ``` ``` /usr/local/lib/python3.10/dist-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead. warnings.warn( --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-2-67122e524ae5>](https://localhost:8080/#) in <cell line: 4>() 2 import torch 3 ----> 4 pipe = StableDiffusionXLPipeline.from_single_file( 5 "/content/model/model.safetensors", torch_dtype=torch.float16).to("cuda") 6 1 frames [/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py](https://localhost:8080/#) in download_from_original_stable_diffusion_ckpt(checkpoint_path, original_config_file, image_size, prediction_type, model_type, extract_ema, scheduler_type, num_in_channels, upcast_attention, device, from_safetensors, stable_unclip, stable_unclip_prior, clip_stats_path, controlnet, load_safety_checker, pipeline_class, local_files_only, vae_path, vae, text_encoder, tokenizer, config_files) 1564 ) 1565 else: -> 1566 pipe = pipeline_class( 1567 vae=vae, 1568 text_encoder=text_model, TypeError: StableDiffusionXLPipeline.__init__() got an unexpected keyword argument 'safety_checker' ```
https://github.com/huggingface/diffusers/issues/4871
closed
[]
2023-09-01T22:42:25Z
2023-09-09T03:35:53Z
null
Damarcreative
huggingface/optimum
1,334
Enable CLI export of decoder-only models without present outputs
### Feature request Currently `optimum-cli export onnx` only supports exporting text-generation models with present outputs (`--task text-generation`) or with past+present outputs (``--task text-generation-with-past`). It would be useful to be able to export a variant without any caching structures if they will not be used. Example of how `--task text-generation` is not sufficient for this usecase: <details> ``` optimum-cli export onnx --model facebook/opt-125m --task text-generation TEST ... Validating ONNX model TEST/decoder_model.onnx... -[✓] ONNX model output names match reference model (present.7.key, present.2.key, present.3.key, present.2.value, present.3.value, present.10.value, logits, present.8.key, present.0.value, present.10.key, present.1.key, present.1.value, present.11.key, present.9.value, present.6.value, present.4.value, present.7.value, present.5.value, present.5.key, present.8.value, present.9.key, present.4.key, present.6.key, present.0.key, present.11.value) - Validating ONNX Model output "logits": -[✓] (2, 16, 50272) matches (2, 16, 50272) -[x] values not close enough, max diff: 3.719329833984375e-05 (atol: 1e-05) - Validating ONNX Model output "present.0.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.0.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.1.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.1.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.2.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.2.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.3.key": -[✓] (2, 12, 16, 
64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.3.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.4.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[x] values not close enough, max diff: 1.8358230590820312e-05 (atol: 1e-05) - Validating ONNX Model output "present.4.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.5.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.5.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.6.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.6.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.7.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.7.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.8.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.8.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.9.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.9.value": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.10.key": -[✓] (2, 12, 16, 64) matches (2, 12, 16, 64) -[✓] all values close (atol: 1e-05) - Validating ONNX Model output "present.10.value": -[✓] (2, 12, 16, 64) 
matches
https://github.com/huggingface/optimum/issues/1334
closed
[]
2023-09-01T15:56:27Z
2023-09-13T11:43:36Z
3
mgoin
huggingface/transformers.js
274
[Question] How to convert to ONNX a fine-tuned model
Hi, we're playing with this library to see if it can be useful for our project. I find it very easy and well done (congratulations). The idea is not to use it directly as a frontend library but via node.js. We've tried scripting a model directly from HF (google/flan-t5-small) and it worked, but we're having trouble using a fine-tuned model. Here's what we tried. We fine-tuned a model (again google/flan-t5-small) and then converted it using the onnx script (in README.md). The script generated the following files: ``` onnx/decoder_model_quantized.onnx onnx/decoder_model.onnx onnx/encoder_model_quantized.onnx onnx/encoder_model.onnx config.json generation_config.json quantize_config.json special_tokens_map.json spiece.model tokenizer_config.json tokenizer.json ``` But when we tried to use it, we got this error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at ./models/google/flan-t5-small-2/onnx/decoder_model_merged_quantized.onnx Some advice or useful doc/link? Thanks
https://github.com/huggingface/transformers.js/issues/274
open
[ "question" ]
2023-09-01T15:27:21Z
2023-09-01T16:12:12Z
null
mrddter
huggingface/datasets
6,203
Support loading from a DVC remote repository
### Feature request Adding support for loading a file from a DVC repository, tracked remotely on an SCM. ### Motivation DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`. I have a Gitlab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files using `datasets` directly via a URL. My goal is to write generic code that abstracts the storage layer, such that my users will only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded. ### Your contribution I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC. ```python from fsspec.core import url_to_fs fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo") ``` From here I'm not sure how to continue; it seems that `datasets` expects the URL to be fully qualified like so: `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json` but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`?
https://github.com/huggingface/datasets/issues/6203
closed
[ "enhancement" ]
2023-09-01T14:04:52Z
2023-09-15T15:11:27Z
4
bilelomrani1
huggingface/optimum
1,328
Documentation for OpenVINO missing half()
### System Info ```shell N/A ``` ### Who can help? @echarlaix ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (minimal, reproducible, runnable) The documentation for OpenVINO does not have any information about using `half()` to run models on GPU. The docs used to have this information, but it was removed. Is this not required anymore? I.e. perhaps `model.to("GPU")` does this automatically? If so, how would one run on GPU with FP32 precision? ### Expected behavior half() documented with a small example
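For reference, the pattern the old docs described looked roughly like the sketch below. Whether `half()` is still required on current optimum-intel, and the exact task class, are assumptions to verify against the current documentation:

```python
def load_openvino_model_fp16(model_id: str):
    """Sketch: load an OpenVINO model, convert weights to FP16, target the GPU.
    Skipping the half() call would presumably keep FP32 precision on GPU."""
    from optimum.intel import OVModelForSequenceClassification

    model = OVModelForSequenceClassification.from_pretrained(model_id)
    model.half()    # FP16 conversion, per the older documented API
    model.to("GPU") # OpenVINO device name
    return model
```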
https://github.com/huggingface/optimum/issues/1328
closed
[ "bug" ]
2023-08-31T20:44:28Z
2023-08-31T20:46:34Z
1
ngaloppo
huggingface/autotrain-advanced
249
How to save model locally after sft
I am wondering how to save the model locally after SFT.
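A minimal sketch of the usual Trainer-style save, assuming the SFT trainer exposes the standard `save_model` method and `tokenizer` attribute from `transformers.Trainer` (the output path is a placeholder):

```python
def save_sft_model(trainer, out_dir: str = "my-sft-model") -> None:
    """Sketch: persist a fine-tuned model plus its tokenizer to a local folder."""
    trainer.save_model(out_dir)  # writes weights + config.json
    # Saving the tokenizer alongside makes the folder loadable via from_pretrained.
    if getattr(trainer, "tokenizer", None) is not None:
        trainer.tokenizer.save_pretrained(out_dir)
```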
https://github.com/huggingface/autotrain-advanced/issues/249
closed
[]
2023-08-31T14:59:04Z
2023-08-31T17:01:44Z
null
Diego0511
huggingface/chat-ui
425
Is it possible to modify it so that .env.local environment variables are set at runtime?
Currently for every different deployment of Chat-UI it is required to rebuild the Docker image with different .env.local environment variables. Is it theoretically possible to have it so that 1 image can be used for all deployments, but with different secrets passed at runtime? What environment variables and for what reason are truly needed at build time for Chat-UI to function? In #204 it says `HF_ACCESS_TOKEN` is needed at build time, but what if we use `OPENID` authentication instead? Is there anything else blocking this type of use case?
https://github.com/huggingface/chat-ui/issues/425
open
[ "enhancement", "back", "hacktoberfest" ]
2023-08-31T12:55:17Z
2024-03-14T20:05:38Z
4
martinkozle
huggingface/text-generation-inference
959
How to enter the docker image to modify the environment
### System Info docker image: ghcr.io/huggingface/text-generation-inference:1.0.2 ### Information - [X] Docker - [ ] The CLI directly ### Tasks - [ ] An officially supported command - [X] My own modifications ### Reproduction I want to enter the image to modify the environment, e.g. to install tiktoken. `docker run -it ghcr.io/huggingface/text-generation-inference:1.0.2 /bin/bash` I get: error: unexpected argument '/bin/bash' found Usage: text-generation-launcher [OPTIONS] ### Expected behavior no error thx!
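The image's entrypoint is `text-generation-launcher`, which is why the trailing `/bin/bash` is parsed as a launcher argument. Overriding the entrypoint should get a shell (a sketch, not an officially supported workflow):

```shell
# Replace the default entrypoint with a shell to inspect/modify the image.
docker run -it --entrypoint /bin/bash ghcr.io/huggingface/text-generation-inference:1.0.2
```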
https://github.com/huggingface/text-generation-inference/issues/959
closed
[]
2023-08-31T11:14:13Z
2023-08-31T20:12:55Z
null
Romaosir
huggingface/safetensors
352
Attempt to convert `PygmalionAI/pygmalion-2.7b` to `safetensors`
### System Info - `transformers` version: 4.32.1 - Platform: Linux-5.15.0-1039-gcp-x86_64-with-glibc2.31 - Python version: 3.9.5 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Information - [ ] The official example scripts - [X] My own modified scripts ### Reproduction Hey guys I am trying to save the `PygmalionAI/pygmalion-2.7b` weights to `safetensors`. Based on [this thread](https://github.com/huggingface/text-generation-inference/issues/922#issuecomment-1698942643) I have manually downloaded the [weights](https://huggingface.co/PygmalionAI/pygmalion-2.7b/resolve/main/pytorch_model.bin) and tried to run the following: ``` weights = torch.load("pytorch_model.bin") weights = {k: v.clone().contiguous() for k, v in weights.items()} save_file(weights, "model.safetensors") ``` and everything went well. However, when trying to load the model I encounter the following issue: ``` AttributeError: 'NoneType' object has no attribute 'get' ``` I inspected the files and can't figure out what goes wrong... I have pushed everything to `https://huggingface.co/JulesBelveze/pygmalion-2.7b-safetensors` Any recommendation on how to proceed would be awesome 🤓 Cheers! ### Expected behavior Expecting the following code snippet to load properly load the model (and not throw the above error) ``` from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("JulesBelveze/pygmalion-2.7b-safetensors") ```
https://github.com/huggingface/safetensors/issues/352
closed
[ "Stale" ]
2023-08-31T10:25:19Z
2023-12-11T01:48:45Z
2
JulesBelveze
huggingface/autotrain-advanced
246
How to load the fine-tuned model locally?
Hi, thanks for your super convenient package; it makes it easier for newbies like me to fine-tune a new model. However, as a newbie, I don't really know how to load my fine-tuned model and use it. I was fine-tuning in Google Colab and downloaded the model to my PC, but how do I load it? Thanks!
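Loading a locally downloaded fine-tune is usually just a matter of pointing `from_pretrained` at the folder containing `config.json` and the weight files; a sketch (the folder path and the text-classification task class are assumptions about this particular fine-tune):

```python
def load_local_finetune(path: str = "./my-finetuned-model"):
    """Sketch: load a fine-tuned text-classification model from a local folder."""
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForSequenceClassification.from_pretrained(path)
    return model, tokenizer
```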
https://github.com/huggingface/autotrain-advanced/issues/246
closed
[]
2023-08-31T08:15:11Z
2023-12-18T15:31:11Z
null
kennyluke1023
huggingface/diffusers
4,849
how to use multiple GPUs to train textual inversion?
I am training the textual inversion fine-tuning cat toy example from [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) my env: diffusers: 0.20.0 torch: 1.12.1+cu113 accelerate: 0.22.0 train script, as follows: ``` CUDA_VISIBLE_DEVICES="0,1,2,3" python -u textual_inversion.py --pretrained_model_name_or_path=$MODEL_NAME --train_data_dir=$DATA_DIR --learnable_property="object" --placeholder_token="<cat-toy>" --initializer_token="toy" --resolution=512 --train_batch_size=1 --gradient_accumulation_steps=4 --max_train_steps=3000 --learning_rate=5.0e-04 --scale_lr --lr_scheduler="constant" --lr_warmup_steps=0 --output_dir="textual_inversion_cat" ``` But it only trains on cuda:0. Is there any way to train on multiple GPUs? Thanks.
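Launching with plain `python` bypasses accelerate's process spawning, so only one process (and one GPU) is used. Launching through `accelerate launch` is the usual fix; a sketch for a 4-GPU box (flag values are assumptions, keep the remaining training flags unchanged):

```shell
# Spawn one process per GPU via accelerate instead of plain python.
accelerate launch --multi_gpu --num_processes 4 textual_inversion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --train_batch_size=1 \
  --max_train_steps=3000 \
  --output_dir="textual_inversion_cat"
```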
https://github.com/huggingface/diffusers/issues/4849
closed
[]
2023-08-31T02:56:39Z
2023-09-11T01:07:49Z
null
Adorablepet
huggingface/chat-ui
423
AI response appears without user message, then both appear after refresh.
I was experimenting with my own back-end and was wanting to get a feel for the interface. Here is what my code looks like: ```py import json import random from fastapi import FastAPI, Request from fastapi.responses import Response, StreamingResponse app = FastAPI() async def yielder(): yield "data:" + json.dumps( { "details": { "finish_reason": "length", "generated_tokens": 1, "seed": None, }, "generated_text": "what is happening", "token": {"id": random.randrange(0, 2**32), "logprob": -0.34, "special": False, "text": "it's alive!"}, },separators=(',', ':') ) + "\n\n\n" @app.post("/generate") @app.post("/") async def generate(request: Request): reqj = await request.json() print(reqj) return StreamingResponse( yielder(), media_type="text/event-stream", headers={"Content-Type": "text/event-stream"}, ) ``` Upon sending a message, "hi", I get this: ![image](https://github.com/huggingface/chat-ui/assets/40547702/f3751e35-81a0-4a2d-8e85-2063b3df41c0) After refreshing the page, everything is rendered properly: ![image](https://github.com/huggingface/chat-ui/assets/40547702/b18ce772-b0a4-4959-8d96-346a79aebe6d) What's going on? Here is what I used as a reference, which was recommended to me on the HF Discord: [link](https://github.com/gururise/openai_text_generation_inference_server/blob/main/server.py) Thanks in advance.
https://github.com/huggingface/chat-ui/issues/423
closed
[]
2023-08-30T19:04:14Z
2023-09-13T19:44:23Z
5
konst-aa
huggingface/datasets
6,195
Force to reuse cache at given path
### Describe the bug I have run the official example of MLM like: ```bash python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name togethercomputer/RedPajama-Data-1T \ --dataset_config_name arxiv \ --per_device_train_batch_size 10 \ --preprocessing_num_workers 20 \ --validation_split_percentage 0 \ --cache_dir /project/huggingface_cache/datasets \ --line_by_line \ --do_train \ --pad_to_max_length \ --output_dir /project/huggingface_cache/test-mlm ``` it successfully runs and at my cache folder has `cache-1982fea76aa54a13_00001_of_00020.arrow`..... `cache-1982fea76aa54a13_00020_of_00020.arrow ` as tokenization cache of `map` method. And the cache works fine every time I run the command above. However, when I switched to jupyter notebook (since I do not want to load datasets every time when I changed other parameters not related to the dataloading). It is not recognizing the cache files and starts to re-run the entire tokenization process. I changed my code to ```python tokenized_datasets = raw_datasets["train"].map( tokenize_function, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=[text_column_name], load_from_cache_file=True, desc="Running tokenizer on dataset line_by_line", # cache_file_names= {"train": "cache-1982fea76aa54a13.arrow"} cache_file_name="cache-1982fea76aa54a13.arrow", new_fingerprint="1982fea76aa54a13" ) ``` it still does not recognize the previously cached files and trying to re-run the tokenization process. ### Steps to reproduce the bug use jupyter notebook for dataset map function. ### Expected behavior the map function accepts the given cache_file_name and new_fingerprint then load the previously cached files. ### Environment info - `datasets` version: 2.14.4.dev0 - Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
https://github.com/huggingface/datasets/issues/6195
closed
[]
2023-08-30T18:44:54Z
2023-11-03T10:14:21Z
2
Luosuu
huggingface/trl
713
How to use custom evaluate function with multi-gpu deepspeed
I am trying to use `deepspeed` multi-gpu training with `SFTTrainer` on the hh-rlhf dataset. My modified trainer looks something like this ```python class SFTCustomEvalTrainer(SFTTrainer): def evaluate( self, eval_dataset = None, ignore_keys = None, metric_key_prefix: str = "eval", ): breakpoint() .... custom eval code ``` However, I only want to run one instance of evaluate on the 0th GPU. When using `--nproc_per_node 2`, I get two processes entering the breakpoint in the customized `evaluate` function. How can I restrict deepspeed to only use one GPU for evaluation and multi-gpu for training?
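One common pattern is to keep all ranks in the training loop but guard the custom eval body so only rank 0 executes it. A minimal runnable sketch using a stub base class so the pattern is clear without trl installed; with trl, the same guard would go at the top of `SFTCustomEvalTrainer.evaluate` via the inherited `Trainer.is_world_process_zero()`:

```python
import os

class StubTrainer:
    """Stand-in for SFTTrainer so the guard pattern is runnable here.
    transformers.Trainer provides is_world_process_zero(); this stub
    emulates it via the RANK env var that torchrun/deepspeed set."""
    def is_world_process_zero(self) -> bool:
        return int(os.environ.get("RANK", "0")) == 0

class SFTCustomEvalTrainer(StubTrainer):
    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        if not self.is_world_process_zero():
            return {}  # non-zero ranks skip the custom eval entirely
        # ... custom eval code runs only on rank 0 ...
        return {f"{metric_key_prefix}_custom_metric": 1.0}
```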
https://github.com/huggingface/trl/issues/713
closed
[]
2023-08-30T17:33:40Z
2023-11-10T15:05:23Z
null
abaheti95
huggingface/optimum
1,323
Optimisation and Quantisation for Translation models / tasks
### Feature request Currently, the optimisation and quantisation functions look for model.onnx in a folder and will perform opt and quant on those files. When exporting a translation-targeted ONNX model, multiple files for encoding and decoding are produced, and these can't be optimised or quantised. I've tried a hacky approach of renaming each of these files and then applying opt and quant, but this fails. I suspect it's more than just naming. Will it be possible to optimise and quantise translation ONNX files in future? ### Motivation I would like to get smaller, more efficient translation models ### Your contribution Nothing really that I can contribute to building the solution, as I don't have that level of experience and understanding.
https://github.com/huggingface/optimum/issues/1323
closed
[]
2023-08-30T06:36:17Z
2023-09-29T00:47:39Z
2
gidzr
huggingface/datasets
6,193
Dataset loading script method does not work with .pyc file
### Describe the bug The huggingface datasets library specifically looks for a '.py' file while loading a dataset via the loading-script approach, and it does not work with a '.pyc' file. While deploying in production, this becomes an issue when we are restricted to using only .pyc files. Is there any workaround for this? ### Steps to reproduce the bug 1. Create a dataset loading script to read the custom data. 2. Compile the code to make sure that a .pyc file is created. 3. Delete the loading script and re-run the code. Usually, Python should make use of compiled .pyc files. However, in this case, the datasets library errors out with a message that it's unable to find the dataset loading script. ### Expected behavior The code should make use of the .pyc file and run without any error. ### Environment info NA
https://github.com/huggingface/datasets/issues/6193
open
[]
2023-08-29T19:35:06Z
2023-08-31T19:47:29Z
3
riteshkumarumassedu
huggingface/transformers.js
270
[Question] How to stop warning log
I am using NodeJS to serve a translation model. There are many warning logs during translation. How can I stop them? `2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061977 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.2/encoder_attn_layer_norm/Constant_output_0'. It is not used by any node and should be removed from the model. 2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061987 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.0/encoder_attn_layer_norm/Constant_output_0'. It is not used by any node and should be removed from the model. 2023-08-29 23:04:32.062 node[3167:31841] 2023-08-29 23:04:32.061997 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.4/self_attn_layer_norm/Constant_1_output_0'. It is not used by any node and should be removed from the model.`
https://github.com/huggingface/transformers.js/issues/270
open
[ "question" ]
2023-08-29T16:08:41Z
2025-08-02T15:48:45Z
null
tuannguyen90
huggingface/chat-ui
420
Error: ENOSPC: System limit for number of file watchers reached
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/alvyn/chat-ui/vite.config.ts' at FSWatcher.<computed> (node:internal/fs/watchers:247:19) at Object.watch (node:fs:2418:34) at createFsWatchInstance (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50470:17) at setFsWatchListener (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50517:15) at NodeFsHandler._watchWithNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50672:14) at NodeFsHandler._handleFile (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50736:23) at NodeFsHandler._addToNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50978:21) at async file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:51973:21 at async Promise.all (index 1) Emitted 'error' event on FSWatcher instance at: at FSWatcher._handleError (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:52169:10) at NodeFsHandler._addToNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50986:18) at async file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:51973:21 at async Promise.all (index 1) { errno: -28, syscall: 'watch', code: 'ENOSPC', path: '/home/alvyn/chat-ui/vite.config.ts', filename: '/home/alvyn/chat-ui/vite.config.ts' }
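This error is the stock Linux inotify limit rather than anything chat-ui-specific; raising `fs.inotify.max_user_watches` is the common remedy (the value below is a typical choice, not a requirement):

```shell
# Raise the per-user inotify watch limit for the current boot...
sudo sysctl fs.inotify.max_user_watches=524288
# ...and persist it across reboots.
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
```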
https://github.com/huggingface/chat-ui/issues/420
closed
[ "support" ]
2023-08-29T14:54:49Z
2023-09-20T15:11:26Z
2
alvynabranches
huggingface/transformers.js
268
[Question] Chunks from transcription always empty text
This example works fine: ![image](https://github.com/xenova/transformers.js/assets/216566/970c3828-8fbf-4539-843d-a96554c72f4b) But ATM I am sending Float32 to the worker here (i also confirm the audio is valid by playing it back) https://github.com/quantuminformation/coherency/blob/main/components/audio-recorder.js#L104 But after transcribing here: https://github.com/quantuminformation/coherency/blob/main/worker.js#L140 my chunks only contain `""` ![image](https://github.com/xenova/transformers.js/assets/216566/04588e73-2ee5-4f39-a145-f4e87c392ba1) ![image](https://github.com/xenova/transformers.js/assets/216566/febe2809-0fa7-4e21-8b71-d5724a391644) any ideas where my setup is going wrong?
https://github.com/huggingface/transformers.js/issues/268
open
[ "question" ]
2023-08-29T13:49:00Z
2023-11-04T19:48:30Z
null
quantuminformation
huggingface/diffusers
4,831
How to preview the image during generation? Any demo for Gradio?
How to preview the image during generation? Any demo for Gradio?
https://github.com/huggingface/diffusers/issues/4831
closed
[]
2023-08-29T13:32:07Z
2023-08-30T15:31:31Z
null
wodsoe
huggingface/transformers.js
267
[Question] multilingual-e5-* models don't work with pipeline
I just noticed that the `Xenova/multilingual-e5-*` model family doesn't work in the transformers.js pipeline for feature-extraction with your (@xenova) onnx versions on HF. My code throws an error. ```Javascript import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4'; async function allocatePipeline() { let pipe = await pipeline("feature-extraction", "Xenova/multilingual-e5-small"); let out = await pipe("I love transformers", { pooling: 'mean', normalize: false }); document.getElementById("output").innerHTML = out.data; } allocatePipeline(); ``` Live example [here](https://geo.rocks/minimal-transformersjs-example-gte). ``` Uncaught (in promise) Error: An error occurred during model execution: "Missing the following inputs: token_type_ids. at transformers@2.5.4:70:5612 at y (transformers@2.5.4:70:5971) at M (transformers@2.5.4:70:8450) at transformers@2.5.4:70:10792 at Function.forward (transformers@2.5.4:70:10799) at Function._call (transformers@2.5.4:70:10675) at Function.e [as model] (transformers@2.5.4:88:508) at Function._call (transformers@2.5.4:73:1424) at Function._call (transformers@2.5.4:73:6152) at e (transformers@2.5.4:88:508) ``` However, HF user Supabase converted the models differently so that they are actually usable with the pipeline, e.g. [gte-small](https://huggingface.co/Supabase/gte-small#javascript). I noticed that Supabase added the vocab.txt file - is it possible that this or other files are missing in your versions or is there a more complex reason for this? I'm pretty interested in the gte family as they are the most performant small models currently available (according to the MTEB leaderboard).
https://github.com/huggingface/transformers.js/issues/267
closed
[ "question" ]
2023-08-29T12:39:26Z
2023-08-30T12:05:02Z
null
do-me
huggingface/transformers
25,803
[Model] How to evaluate the Idefics model's ability with in-context examples?
Hi the recent release of Idefics-9/80B-Instruct model is superbly promising! We would like to evaluate them on a customized benchmarks with in context examples. May I ask how should I arrange the prompt template, especially for `instruct` version? We had some problems previously when evaluating the model on single images, the model will ramble and wont stop, but managed to resolve them somehow. For single image we use the template to evaluate instruct version model. ``` User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant: ``` Would it be perfectly correct (matching your training template?) or do you have better recommendation. Sorry we have a customized pipeline so it's not easy to adopt your designed `IdeficsProcessor`. 😭 Also we migrate the code on `image_attention_mask` with ``` # supporting idefics processing def get_formatted_prompt(prompt: str="", in_context_prompts: list = []) -> str: # prompts = [ # "User:", # "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg", # "Describe this image.\nAssistant: An image of two kittens in grass.\n", # "User:", # "http://images.cocodataset.org/train2017/000000190081.jpg", # "Describe this image.\nAssistant:", # ] # prompts = f"User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant:<answer>" prompts = f"User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant:" return prompts def get_image_attention_mask(output_input_ids, max_num_images, tokenizer, include_image=True): # image_attention_mask, _ = image_attention_mask_for_packed_input_ids(output_input_ids, tokenizer) # image_attention_mask = incremental_to_binary_attention_mask(image_attention_mask, num_classes=max_num_images) if include_image: image_attention_mask, _ = image_attention_mask_for_packed_input_ids(output_input_ids, tokenizer) image_attention_mask = incremental_to_binary_attention_mask( image_attention_mask, num_classes=max_num_images ) else: # in 
full language mode we set the image mask to all-0s image_attention_mask = torch.zeros( output_input_ids.shape[0], output_input_ids.shape[1], 1, dtype=torch.bool ) return image_attention_mask lang_x = self.tokenizer( [ get_formatted_prompt(question, []), ], return_tensors="pt", ) image_attention_mask = get_image_attention_mask(lang_x['input_ids'], 1, self.tokenizer) ``` I have read all related blogs and docs but still got confused about the usage of `<end_of_utterance>`. Is it used to break the in context examples with query example? My guess is ``` User:<fake_token_around_image><image><fake_token_around_image>{in_context_prompt} Assistant: {in_context_answer} <end_of_utterance> User:<fake_token_around_image><image><fake_token_around_image>{prompt} Assistant: ``` Besides, very curious that the model would generate the normal `<end_of_utterance>` at the last of sentence instead of normal llama's `<|endofchunk|>`?
https://github.com/huggingface/transformers/issues/25803
closed
[]
2023-08-28T19:39:02Z
2023-10-11T08:06:48Z
null
Luodian
huggingface/chat-ui
417
CodeLlama Instruct Configuration
Hello Guys, Could you guide me in the right direction to get the configuration of the Code Llama Instruct model right? I have this config so far: ``` { "name": "Code Llama", "endpoints": [{"url": "http://127.0.0.1:8080"}], "description": "Programming Assistant", "userMessageToken": "[INST]", "assistantMessageToken": "[/INST]", "parameters": { "temperature": 0.9, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 1000, "max_new_tokens": 1048 } } ``` The model starts with the "right" output, but then it produces garbage. I am running the TGI backend. Thx!
https://github.com/huggingface/chat-ui/issues/417
open
[ "support", "models" ]
2023-08-28T13:42:09Z
2023-09-13T18:17:50Z
9
schauppi
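The config above injects raw `[INST]`/`[/INST]` tokens per message, but Llama-2-style chat models (Code Llama Instruct included) were trained on a template where the system prompt is wrapped in `<<SYS>>` tags inside the first `[INST]` block; missing that is a common cause of the model drifting into garbage. A standalone sketch of that template (the function name and structure are illustrative, not chat-ui's actual code):

```python
def build_llama2_prompt(system, turns):
    """Assemble a Llama-2-style chat prompt.

    `turns` is a list of (user, assistant) pairs; pass None as the
    assistant text for the final turn the model should complete.
    """
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        if i == 0 and system:
            # The system prompt lives inside the first [INST] block.
            user = "<<SYS>>\n" + system + "\n<</SYS>>\n\n" + user
        prompt += "<s>[INST] " + user + " [/INST]"
        if assistant is not None:
            prompt += " " + assistant + " </s>"
    return prompt
```

In more recent chat-ui versions this kind of templating is expressed through the model's `chatPromptTemplate` setting rather than the plain user/assistant message tokens.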
huggingface/transformers.js
265
Unexpected token
I added this code to my React project. ``` import { pipeline } from "@xenova/transformers"; async function sentimentAnalysis() { // Allocate a pipeline for sentiment-analysis let pipe = await pipeline("sentiment-analysis"); let out = await pipe("I love transformers!"); console.log(out); } sentimentAnalysis(); ``` I am surprised the docs don't tell me to download a model, so I think this code will auto-download it... anyway I get this issue... ./node_modules/@xenova/transformers/src/env.js 38:84 Module parse failed: Unexpected token (38:84) File was processed with these loaders: * ./node_modules/babel-loader/lib/index.js You may need an additional loader to handle the result of these loaders. | | var RUNNING_LOCALLY = FS_AVAILABLE && PATH_AVAILABLE; > var __dirname = RUNNING_LOCALLY ? path.dirname(path.dirname(url.fileURLToPath(import.meta.url))) : './'; | | // Only used for environments with access to file system Seems like I need access to the filesystem... but that can't be right because this runs in the browser ... ?
https://github.com/huggingface/transformers.js/issues/265
closed
[ "question" ]
2023-08-28T13:34:42Z
2023-08-28T16:00:10Z
null
patrickinminneapolis
huggingface/diffusers
4,814
How to add more weight to the text prompt in ControlNet?
Hi, I want to know if there is a quick way of adding more weight to the text prompt in ControlNet during inference. If so, which parameter needs to be changed? Thanks,
https://github.com/huggingface/diffusers/issues/4814
closed
[ "stale" ]
2023-08-28T13:05:16Z
2023-10-30T15:07:45Z
null
miquel-espinosa
huggingface/autotrain-advanced
239
How to start without "pip install autotrain-advanced"
Dear, Thanks for your work. After installing through `pip`, running **`autotrain llm --train --project_name my-llm --model luodian/llama-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft`** can achieve fine-tuning on your own data. If I want to run the project from source code for fine-tuning, which function should I start from? That is, from which function do the `autotrain` and `llm` parameters come from? Best,
https://github.com/huggingface/autotrain-advanced/issues/239
closed
[]
2023-08-28T10:02:37Z
2023-12-18T15:30:42Z
null
RedBlack888
huggingface/datasets
6,186
Feature request: add code example of multi-GPU processing
### Feature request Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work for me out-of-the-box. Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel. Here's how I tried to do that: ``` from datasets import load_dataset from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from multiprocess import set_start_method import torch import os dataset = load_dataset("mlfoundations/datacomp_small") tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") # put model on each available GPU # also, should I do it like this or use nn.DataParallel? model.to("cuda:0") model.to("cuda:1") set_start_method("spawn") def translate_captions(batch, rank): os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count()) texts = batch["text"] inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device) translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30 ) translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) batch["translated_text"] = translated_texts return batch updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256) ``` I've personally tried running this script on a machine with 2 A100 GPUs. 
## Error 1 Running the code snippet above from the terminal (python script.py) resulted in the following error: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main prepare(preparation_data) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module> set_start_method("spawn") File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method raise RuntimeError('context has already been set') RuntimeError: context has already been set ``` ## Error 2 Then, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method("spawn")` section in a try: catch block. 
This resulted in the following error: ``` File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp> k: dataset.map( File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool: File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__ self._repopulate_pool() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static w.start() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start self._popen = self._Popen(self) File "/home/niels/anaconda3/envs/datacomp/l
https://github.com/huggingface/datasets/issues/6186
closed
[ "documentation", "enhancement" ]
2023-08-28T10:00:59Z
2024-10-07T09:39:51Z
18
NielsRogge
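The pattern that eventually landed in the docs for this moves the model onto a GPU chosen from the worker rank inside the mapped function (rather than calling `model.to()` twice at the top level, which just moves the same model twice). The rank-to-device bookkeeping is plain modular arithmetic, sketched here without the model code:

```python
def device_for_rank(rank, num_gpus):
    """Map a datasets.map worker rank onto one of the available GPUs."""
    if num_gpus <= 0:
        return "cpu"
    return "cuda:" + str(rank % num_gpus)

# Inside translate_captions you would then do, roughly:
#   model.to(device_for_rank(rank, torch.cuda.device_count()))
# and guard the map() call with `if __name__ == "__main__":` so that
# spawned workers do not re-execute the module top level (the cause
# of the "context has already been set" error above).
```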
huggingface/autotrain-advanced
238
How to Train Consecutively Using Checkpoints
Hi, I've been using your project and it's been great. I'm a complete beginner in the field of AI, so sorry for such a basic question. Is there a way to resume training from a saved checkpoint? Thank you!
https://github.com/huggingface/autotrain-advanced/issues/238
closed
[]
2023-08-28T08:31:30Z
2023-12-18T15:30:42Z
null
YOUNGASUNG
huggingface/transformers.js
264
[Question] TypeScript rewrite
<!-- QUESTION GOES HERE --> Hi Joshua. I found your idea is extremely exciting. I am a frontend developer who has worked on TypeScript professionally for three years. Would you mind me doing a TypeScript re-write, so this npm package can have a better DX. If I successfully transform the codebase into TypeScript and pass all the tests, would you mind merging it into main? I just forked this repo. https://github.com/Lantianyou/transformers.js
https://github.com/huggingface/transformers.js/issues/264
open
[ "question" ]
2023-08-28T08:29:06Z
2024-04-27T12:05:24Z
null
Lantianyou
huggingface/text-generation-inference
934
How to use a fine-tuned model in text-generation-inference
Hi team, I fine-tuned the Llama 2 13B model and merged it using the merge_and_unload() functionality. How can I use this merged model with text-generation-inference? **The following command gives an error** ![image](https://github.com/huggingface/text-generation-inference/assets/7765864/22e51673-4a4f-47ba-9b06-158ec7812951) **Error** ![image](https://github.com/huggingface/text-generation-inference/assets/7765864/00f219ea-0483-4496-af11-9ce9d949a7d2)
https://github.com/huggingface/text-generation-inference/issues/934
closed
[]
2023-08-28T07:36:25Z
2023-08-28T08:53:28Z
null
chintanshrinath
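Assuming the merged model directory contains the full set of files (config.json, tokenizer files, safetensors/bin weights), TGI can be pointed at it through a volume mount; roughly like this (the local path and image tag are illustrative):

```shell
# Serve a locally merged model with text-generation-inference.
docker run --gpus all -p 8080:80 \
  -v "$PWD/merged-model:/data" \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id /data
```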
huggingface/peft
869
How to correctly use Prefix Tuning?
### System Info peft 0.5.0 transformers 4.32.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0pp', load_in_8bit=True) model = prepare_model_for_int8_training(model) config = PrefixTuningConfig( task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=100, token_dim=model.config.hidden_size, num_transformer_submodules=1, num_attention_heads=model.config.num_heads, num_layers=model.config.num_layers, encoder_hidden_size=1792, ) model = get_peft_model(model, config) ``` ### Expected behavior I'm assuming `num_layers`, `num_attention_heads`, and `token_dim` need to match the base model. In the sample `num_transformer_submodules` is 1. But encoder-decoder has two transformers right? Should this be 2? When I run the code above I got ``` File "/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 551, in forward position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) RuntimeError: The size of tensor a (3) must match the size of tensor b (103) at non-singleton dimension 3 ``` When I print out the shape of `position_bias` and `mask`. `mask` has 100 more tokens than `position_bias` seems like on the decoder side. It's also taking in the prefix embeddings
https://github.com/huggingface/peft/issues/869
closed
[]
2023-08-27T18:03:06Z
2024-11-05T09:49:01Z
null
Vincent-Li-9701
huggingface/transformers
25,783
How to re-tokenize the training set in each epoch?
I have a special tokenizer which can tokenize the sentence based on some probability distribution. For example, 'I like green apple' -> '[I],[like],[green],[apple]' (30%) or '[I],[like],[green apple]' (70%). Now in the training part, I want the Trainer to re-tokenize the dataset in each epoch. How can I do so?
https://github.com/huggingface/transformers/issues/25783
closed
[]
2023-08-27T16:23:25Z
2023-09-01T13:01:43Z
null
tic-top
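One way to get a fresh segmentation every epoch is to not pre-tokenize at all and instead tokenize lazily on access, so each pass through the data re-samples the tokenization. A dependency-free sketch of the idea; the random-merge tokenizer below is only a stand-in for the real probabilistic tokenizer:

```python
import random

class LazyTokenizedDataset:
    """Tokenizes on access, so every epoch sees a fresh sample."""

    def __init__(self, sentences, merge_prob=0.7, rng=None):
        self.sentences = sentences
        self.merge_prob = merge_prob
        self.rng = rng or random.Random()

    def __len__(self):
        return len(self.sentences)

    def __getitem__(self, idx):
        words = self.sentences[idx].split()
        # Stand-in tokenizer: merge two adjacent words into one token
        # with probability merge_prob, mimicking e.g. 'green apple'
        # sometimes being emitted as a single token.
        tokens, i = [], 0
        while i < len(words):
            if i + 1 < len(words) and self.rng.random() < self.merge_prob:
                tokens.append(words[i] + " " + words[i + 1])
                i += 2
            else:
                tokens.append(words[i])
                i += 1
        return tokens
```

With 🤗 Datasets the same effect is available via `dataset.set_transform(...)`, which applies the function every time rows are accessed rather than once up front.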
huggingface/optimum
1,318
Is it possible to compile pipeline (with tokenizer) to ONNX Runtime?
### Feature request Is it possible to compile the entire pipeline, tokenizer and transformer, to run with ONNX Runtime? My goal is to remove the `transformers` dependency entirely for runtime, to reduce serverless cold start. ### Motivation I could not find any examples, and could not make this work, so I wonder if compiling tokenizer with ONNX is possible at all. ### Your contribution I could try implementing this, or add an example to documentation if this is possible already.
https://github.com/huggingface/optimum/issues/1318
open
[ "feature-request", "onnxruntime" ]
2023-08-26T17:57:52Z
2023-08-28T07:58:13Z
1
j-adamczyk
huggingface/trl
695
Reward is getting lower and lower with each epoch; what could be the issue in training?
Hello, I am trying to optimize a T5 fine-tuned model for text generation task. At the moment, I am using BLEU score (between two texts) as a reward function. Before the optimization with PPO, model is able to produce an average BLEU score of 35% however with ppo, after each epoch, the reward is reducing so far. What is something I am doing wrong or should look into as I am new to RL? as the goal of PPO is to improve the reward or atleast make it more than the original bleu score of 35% that we got before model was optimized with PPO. this is my code: ```from transformers import AutoModelForSeq2SeqLM #loading the fine-tuned model active_model=AutoModelForSeq2SeqLMWithValueHead.from_pretrained('small_gen_clean_prem/') ref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained('small_gen_clean_prem/') batch_size = 200 config = PPOConfig( batch_size=batch_size, learning_rate=1.41e-5, mini_batch_size=16, gradient_accumulation_steps=1 #if I set to more than 1, I get empty tensors error ) ppo_trainer = PPOTrainer(config, active_model, ref_model, tokenizer) generation_kwargs = { "min_length": -1, "top_k": 0.0, "top_p": 1.0, "do_sample": True, "pad_token_id": tokenizer.eos_token_id } output_min_length = 4 output_max_length = 512 output_length_sampler = LengthSampler(output_min_length, output_max_length)` score_all=[] for i in range(20): input_tensors=[] output_tensors=[] score_=[] for data in valid_dataset: query_txt =data['input'] query_tensor = tokenizer.encode(query_txt, return_tensors="pt").to(device) input_tensors.append(query_tensor.squeeze(0)) desired_txt = data['ground_truth'] print('desired text\n:',desired_txt) response_tensor = ppo_trainer.generate([item for item in query_tensor], return_prompt=False,length_sampler=output_length_sampler, **generation_kwargs) response_txt = tokenizer.decode(response_tensor[0], skip_special_tokens=True, max_new_tokens=512) output_tensors.append(response_tensor[0].squeeze(0)) score = 
sentence_bleu([response_txt.split(),desired_txt.split()]) score_.append(score) reward = [torch.FloatTensor([score]) for score in score_] score_all.append(np.mean(score_)) train_stats = ppo_trainer.step(input_tensors,output_tensors,reward) ``` In the graph attached, y-axis is average mean score in each epoch. <img width="377" alt="scores_ppo" src="https://github.com/huggingface/trl/assets/25576435/a07c26d9-46a8-432e-bf07-60eaaa0aeedc">
https://github.com/huggingface/trl/issues/695
closed
[]
2023-08-26T00:22:04Z
2023-11-01T15:06:14Z
null
sakinafatima
huggingface/dataset-viewer
1,733
Add API fuzzer to the tests?
Tools exist, see https://openapi.tools/
https://github.com/huggingface/dataset-viewer/issues/1733
closed
[ "question", "tests" ]
2023-08-25T21:44:10Z
2023-10-04T15:04:16Z
null
severo
huggingface/diffusers
4,778
[Discussion] How to allow for more dynamic prompt_embed scaling/weighting/fusion?
We have a couple of issues and requests from the community that ask for the possibility to **dynamically** change certain knobs of Stable Diffusion that are applied at **every denoising step**. - 1. **Prompt Fusion**, as stated [here](https://github.com/huggingface/diffusers/issues/4496). To implement prompt fusion in a general way we need to give the user the possibility to define some kind of "prompt" scheduler where every denoising timestep can receive a different `prompt_embeds` and `negative_prompt_embeds`. => A very obvious way to allow for this would be to allow passing a list of lists of prompts and a list of lists of `prompt_embeddings`. - 2. **Dynamic prompt weighting**. A1111 and InvokeAI both have functionality that allows weighting the prompt embeddings differently at each timestep. InvokeAI has this implemented in `compel` via a `conditioning_scheduler`, see here: https://github.com/damian0815/compel/blob/d15e883bbbfae5b3fbd8d60065aa330c99a662b4/src/compel/compel.py#L93 Such a scheduler could for example allow the user to not just define a unique `prompt_embedding` condition (e.g. putting more weight on a certain word), but also to dynamically change that condition during the course of denoising. This is also asked for by SD.Next (cc @vladmandic). => Here we have a couple of options; the simplest is probably to just allow passing a list of `prompt_embeddings`, assuming that the user takes care of the prompt weighting themselves. We could then also nicely integrate this with `compel`. - 3. **Dynamic `guidance_scale` / `cfg` weighting**. Many people have found that `cfg` scheduling works really well for `SDXL`. It's related to 2. as it's also a knob to tweak text embedding weights over the course of inference, but it's much more global, whereas 2. can be more condition-specific. This is also related to https://github.com/huggingface/diffusers/pull/4569#issuecomment-1678667625 which proposes dynamic scaling.
=> Here we could solve this by allowing the user to provide a list of `guidance_scales`. In addition we could maybe introduce something like `guidance_scaling_type="static/dynamic"` to allow for #4569. **Overall**: => It's not too difficult to make these features work, but it'll require some very good docs about `prompt_embeds` and `negative_prompt_embeds`. We also have to think about edge cases like SDXL, which has two text encoders. We also have to think about how this can be applied to other models such as Kandinsky and IF. Curious to hear your thoughts here. Also would love to discuss a design proposal of how we can better support things in a coherent, library-wide design @sayakpaul @williamberman @yiyixuxu @DN6
https://github.com/huggingface/diffusers/issues/4778
closed
[ "stale" ]
2023-08-25T10:03:17Z
2023-11-09T21:42:39Z
null
patrickvonplaten
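For point 3, the list-of-`guidance_scale`s option is cheap for users to drive, e.g. with a linear ramp across the denoising steps. A minimal sketch of such a schedule (pure Python; the function name is illustrative):

```python
def cfg_schedule(start, end, num_steps):
    """Linearly interpolate guidance_scale across denoising steps."""
    if num_steps < 1:
        raise ValueError("num_steps must be >= 1")
    if num_steps == 1:
        return [start]
    step = (end - start) / (num_steps - 1)
    return [start + i * step for i in range(num_steps)]
```

A pipeline accepting such a list would simply index it with the current step inside the denoising loop.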
huggingface/transformers.js
260
[Question] CDN download for use in a worker
Is there a way to get this to work inside a worker: ```html <script type="module"> import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.3'; </script> ``` I noticed you do this: ```js import { pipeline, env } from "@xenova/transformers"; ``` I'm trying to avoid any node modules for this project I am on
https://github.com/huggingface/transformers.js/issues/260
closed
[ "question" ]
2023-08-24T18:24:51Z
2023-08-29T13:57:19Z
null
quantuminformation
huggingface/notebooks
428
How to load a fine-tuned IDEFICS model for inference?
Hi, I recently fine-tuned an IDEFICS model with PEFT. I am not able to load the model. Is there any way to load the model back with PEFT for inference?
https://github.com/huggingface/notebooks/issues/428
open
[]
2023-08-24T13:39:22Z
2024-04-25T10:39:55Z
null
imrankh46
huggingface/peft
857
How to load a fine-tuned IDEFICS model with PEFT for inference?
### Feature request Request for the IDEFICS model. ### Motivation I fine-tuned IDEFICS on a custom dataset, but loading it raises an error. ### Your contribution Add a class like AutoPeftModelForVisionTextToText() to easily load the model.
https://github.com/huggingface/peft/issues/857
closed
[]
2023-08-24T12:34:44Z
2023-09-01T15:46:50Z
null
imrankh46
huggingface/datasets
6,176
How to limit the size of the memory-mapped file?
### Describe the bug Hugging Face datasets use memory-mapped files to map large datasets into memory for fast access. However, it seems like Hugging Face will occupy all the memory for memory-mapped files, which creates a troublesome situation since our cluster only distributes a small portion of memory to me (once it's over the limit, memory cannot be allocated); however, when the dataset checks the total memory, all of the memory is taken into account, which makes the Hugging Face dataset try to allocate more memory than allowed. So is there a way to explicitly limit the size of the memory-mapped file? ### Steps to reproduce the bug python >>> from datasets import load_dataset >>> dataset = load_dataset("c4", "en", streaming=True) ### Expected behavior In a normal environment, this will not cause any problem. However, when the system allocates a portion of the memory to the program and the dataset checks the total memory, all of the memory is taken into account, which makes the Hugging Face dataset try to allocate more memory than allowed. ### Environment info Linux cluster with SGE (Sun Grid Engine)
https://github.com/huggingface/datasets/issues/6176
open
[]
2023-08-24T05:33:45Z
2023-10-11T06:00:10Z
null
williamium3000
huggingface/autotrain-advanced
225
How to run inference with the model
When I launch **autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft** I get this output ![autoTrainDoubt](https://github.com/huggingface/autotrain-advanced/assets/30750249/ac813d13-d4a4-43f4-901a-372fdaec045b) **I have two questions.** **1.** The output says that training is finished, however I only see the log of 1 epoch. **Is there any way to see the 'training loss' for all 3 epochs?** **2.** After training, I try to run inference with the Text Generation Inference HF application. However, I get an error because config.json is not in the model folder. The output model is this. **Why is this file not present? Should I do something more?** ![autoTrainDoubt1](https://github.com/huggingface/autotrain-advanced/assets/30750249/2af0d7fe-526c-4646-aa64-9adf6d70632f)
https://github.com/huggingface/autotrain-advanced/issues/225
closed
[]
2023-08-23T20:24:23Z
2023-12-18T15:30:40Z
null
amgomezdev
huggingface/autotrain-advanced
223
How to use captions with Dreambooth?
I'm trying to train an SDXL model with Dreambooth using captions for each image (I have found that this made quite a difference when training for style with the 1.5 model). How can I achieve that using autotrain? If I understand [this line](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/trainers/dreambooth/main.py#L290C13-L290C13) correctly, it will pick it up if it's in the file name, is that right? And if yes, how does it play together with the specified prompt?
https://github.com/huggingface/autotrain-advanced/issues/223
closed
[]
2023-08-23T15:32:16Z
2023-12-18T15:30:39Z
null
MaxGfeller
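If autotrain really does pick the prompt up from the file name (as the linked line suggests), per-image captioning reduces to encoding each caption in its file name. A sketch of that kind of filename-to-caption mapping (this mirrors the idea, not autotrain's exact code):

```python
from pathlib import Path

def caption_from_filename(path):
    """Turn 'a_photo_of_sks_dog_1.jpg' into 'a photo of sks dog'."""
    parts = Path(path).stem.split("_")
    # Drop a trailing numeric index if present, then restore spaces.
    if parts and parts[-1].isdigit():
        parts = parts[:-1]
    return " ".join(parts)
```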
huggingface/trl
677
How to run reward_trainer.py
ValueError: Some specified arguments are not used by the HfArgumentParser: ['-f', '/Users/samittan/Library/Jupyter/runtime/kernel-32045810-5e16-48f4-8d44-c7a7f975f8a4.json']
https://github.com/huggingface/trl/issues/677
closed
[]
2023-08-23T09:39:52Z
2023-11-02T15:05:32Z
null
samitTAN
huggingface/chat-ui
412
preprompt not being injected for Llama 2
1. When I alter the preprompt for a Llama 2 type model, it appears to have no impact. It's as though the preprompt is not there. Sample config for .env.local: ``` MODELS=`[ { "name": "Trelis/Llama-2-7b-chat-hf-function-calling", "datasetName": "Trelis/function_calling_extended", "description": "function calling Llama-7B-chat", "websiteUrl": "https://research.Trelis.com", "preprompt": "Respond in French to all questions", "userMessageToken": "[INST]", "assistantMessageToken": "[/INST]", "parameters": { "temperature": 0.01, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 1000, "max_new_tokens": 1024 }, "endpoints": [{ "url": "http://127.0.0.1:8080" }] } ]` ``` Other notes: - The same model responds to changes in system message when run in colab. - Here, with chat-ui, I'm running with a tgi server. - Llama-chat has weird templating whereby the first system and user have to be wrapped in INST. The best that can be done with the default templating is just to separately wrap the system message and each user input in [INST] and [/INST]. That said, I don't think that deviation should be significant enough to mean that the preprompt is ignored... but maybe it is OR maybe I'm making some other mistake?
https://github.com/huggingface/chat-ui/issues/412
closed
[ "support", "models" ]
2023-08-23T09:15:24Z
2023-09-18T12:48:07Z
7
RonanKMcGovern