Dataset columns:
- repo: string (147 distinct values)
- number: int64 (1 to 172k)
- title: string (2 to 476 characters)
- body: string (0 to 5k characters)
- url: string (39 to 70 characters)
- state: string (2 distinct values)
- labels: list (0 to 9 items)
- created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
- updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
- comments: int64 (0 to 58, nullable)
- user: string (2 to 28 characters)
huggingface/distil-whisper
16
How to use ONNX model?
Hello there, I'm interested in using the ONNX model, as I saw that you are providing the weights for it. I tried to use it with the `optimum` library, but didn't manage to make it work. Could someone point me in the right direction? Thank you so much for this repository and the work you put into it. It really helps!

### Note: here is what I tried

```python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v2"

model = ORTModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, encoder_file_name="encoder_model.onnx"
)
```

Here is the error:

```
RuntimeError: Too many ONNX model files were found in distil-whisper/distil-large-v2, specify which one to load by using the encoder_file_name argument.
```
https://github.com/huggingface/distil-whisper/issues/16
open
[]
2023-11-03T11:51:44Z
2023-11-07T07:36:50Z
null
H-G-11
huggingface/dataset-viewer
2,049
Retry jobs that finish with `ClientConnection` error?
Maybe here: https://github.com/huggingface/datasets-server/blob/f311a9212aaa91dd0373e5c2d4f5da9b6bdabcb5/chart/env/prod.yaml#L209

Internal conversation on Slack: https://huggingface.slack.com/archives/C0311GZ7R6K/p1698224875005729

Anyway, I'm wondering whether we can still get this error now that dataset scripts are disabled by default.
https://github.com/huggingface/dataset-viewer/issues/2049
closed
[ "question", "improvement / optimization", "P2" ]
2023-11-03T11:28:19Z
2024-02-06T17:29:45Z
null
severo
huggingface/transformers.js
377
GPU Acceleration to increase performance
Do we have any option to use the GPU to improve the performance of model loading and detection? Currently, object detection takes around 10 seconds. Can we run this on the GPU? Running the lines below through a web worker improves the overall UI experience but does not improve performance.

```js
const model = await pipeline("object-detection", "Xenova/detr-resnet-50");
const result = await model(img, { threshold: 0.9 });
```

Can we use the GPU for that?
https://github.com/huggingface/transformers.js/issues/377
closed
[ "question" ]
2023-11-03T07:44:05Z
2024-10-18T13:30:08Z
null
milind-yadav
huggingface/distil-whisper
11
[Speculative Decoding] How to run speculative decoding for batch_size > 1?
Transformers 4.35 only supports speculative decoding for batch size == 1. To use speculative decoding with batch size > 1, please make sure to use this branch: https://github.com/huggingface/transformers/pull/26875

To do so, you need to install transformers as follows:

```
pip install git+https://github.com/huggingface/transformers.git@assistant_decoding_batch
```

and then you can run:

```py
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

assistant_model_id = "distil-whisper/distil-large-v2"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    chunk_length_s=15,
    batch_size=4,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "default", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

The PR will be merged into Transformers soon.

**Note**: Given the "speculative" nature of assistant decoding (a.k.a. speculative decoding), it is not recommended to use speculative decoding for batch sizes higher than 4, as this might actually make the transcription pipeline slower than just using the teacher model. See Table 22 of [the paper](https://arxiv.org/pdf/2311.00430.pdf).
https://github.com/huggingface/distil-whisper/issues/11
open
[]
2023-11-02T14:19:55Z
2024-10-03T13:12:22Z
null
patrickvonplaten
huggingface/chat-ui
542
Request: more clarity on JSON response from custom models
Note: duplicate of https://huggingface.co/spaces/huggingchat/chat-ui/discussions/309; not sure which is the proper place to post.

I followed the chat-ui guide to deploy a version in GCP, and I love the chat interface. I would love to hook it up to one of my custom models, so I specified

```
"endpoints": [{"url": "http://127.0.0.1:8000"}]
```

for MODELS as suggested. My endpoint receives the message that was posted in the web interface, but I am unable to send back the proper JSON response. So far, in Python, I do:

```python
response_content = [
    {
        "generated_text": "Please show this response."
    }
]
response = make_response(jsonify(response_content))
return response
```

It is received by the chat-ui code (confirmed by injecting console.log statements), but it doesn't show in the browser conversation.

Can someone please clarify what JSON (content, headers, whatever is needed) I need to send from my custom model endpoint as a response to the chat-ui interface? Or, if this is the wrong place to ask, tell me where I should ask?
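For reference, the payload shape described above can be exercised end-to-end with only the standard library. Whether chat-ui's endpoint type additionally requires a streaming (server-sent events) response from TGI-style endpoints is an assumption to verify, so treat this as a sketch of the mechanics (an explicit `Content-Type: application/json` header and a JSON list with a `generated_text` field), not a confirmed fix:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # consume the request body before answering
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        # TGI-style non-streaming shape: a list with a "generated_text" field
        body = json.dumps([{"generated_text": "Please show this response."}]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# bind an ephemeral port and serve in the background
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

req = urllib.request.Request(
    f"http://127.0.0.1:{port}/", data=b"{}", method="POST",
    headers={"Content-Type": "application/json"},
)
reply = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
```

If the endpoint type in use expects token streaming instead, the same headers apply but the body would be SSE chunks rather than one JSON document.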
https://github.com/huggingface/chat-ui/issues/542
open
[ "support" ]
2023-11-02T10:31:53Z
2023-11-03T19:44:02Z
1
thubreg
huggingface/distil-whisper
8
Where is the model?
Link to HF leads to empty files section.
https://github.com/huggingface/distil-whisper/issues/8
closed
[]
2023-11-02T08:47:23Z
2023-11-02T17:31:08Z
null
lkmdhertg
huggingface/candle
1,241
How to reduce memory usage of backpropagation?
I implemented the [tiny NeRF example](https://github.com/bmild/nerf/blob/master/tiny_nerf.ipynb) using `candle` here: https://github.com/laptou/nerfy/blob/fc50dbd61c4012d1f12f556a72474b59a8b3c158/examples/tiny_nerf.rs

The example, which is written using TensorFlow, runs fine on my laptop. My `candle` implementation consumes all available memory, which crashes my desktop session if I use the CPU, and errors out with a CUDA memory allocation error if I use the GPU. I'm running on a laptop with 32 GB of RAM, 32 GB of swap, and an RTX A3000 with 12 GB of VRAM. I'm barely able to run it on the CPU if I decrease the hidden layer size from 256 to 64.

![image](https://github.com/huggingface/candle/assets/14832331/683d4361-9ccb-4f04-939e-67e0f3ba0414)

I tracked the memory allocations using `heaptrack`, and it seems like most of them are related to keeping track of the operations for backpropagation. Can you spot any obvious issues in my implementation that are causing it to consume so much memory? Is there a way to disable or reduce this behavior in some parts of the code to reduce the amount of memory it uses?
https://github.com/huggingface/candle/issues/1241
open
[]
2023-11-02T03:38:32Z
2025-09-10T05:14:01Z
null
laptou
huggingface/candle
1,240
Demo showing how to load in candle computer vision model using webcam
```rust
use anyhow::Result; // automatically handle the error types
use opencv::{prelude::*, videoio, highgui}; // note: OpenCV's namespace has changed (for better or worse); it is no longer one enormous module

fn main() -> Result<()> { // note: this is anyhow::Result
    // Open a GUI window
    highgui::named_window("window", highgui::WINDOW_FULLSCREEN)?;
    // Open the web-camera (assuming you have one)
    let mut cam = videoio::VideoCapture::new(0, videoio::CAP_ANY)?;
    let mut frame = Mat::default(); // this Mat will store the web-cam frames
    // Read from the camera and display each frame in the window
    loop {
        cam.read(&mut frame)?;
        highgui::imshow("window", &frame)?;
        let key = highgui::wait_key(1)?;
        if key == 113 { // quit with q
            break;
        }
    }
    Ok(())
}
```

Here is a basic example of opening a highgui window with opencv-rust and reading webcam frames. It would be great to have a working example using this alongside candle! Open to submitting this as a PR in any of the example folders.
https://github.com/huggingface/candle/issues/1240
open
[]
2023-11-02T03:38:19Z
2023-11-02T06:24:11Z
null
bazylhorsey
huggingface/candle
1,239
How to run inference on a new model? Does model.rs have to be hand-written manually?
Just wondering if there are scripts to convert a .pth or ONNX model to the candle format?
https://github.com/huggingface/candle/issues/1239
closed
[]
2023-11-02T03:32:11Z
2023-11-02T07:03:54Z
null
lucasjinreal
huggingface/safetensors
375
How do I load the tensors in Rust?
Hi, I am unable to find good documentation on reading the weights in Rust. I want to write GPT-2 from scratch and want to be able to load the HF weights. Since I only plan to use the ndarray library, I want to be able to load the FP32 tensors somehow. Please help.

In Python I do:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
import safetensors

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
safetensors.torch.save_model(model, 'gpt2_weights.st')
```

I want to use some code like this in Rust (which is currently incorrect, because safetensors doesn't have a `Reader`) and I am unable to figure out the API.

```rust
use safetensors::Reader;
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let reader = Reader::from_file("gpt2_weights.st")?;
    for (name, tensor) in reader.tensors() {
        println!("Tensor name: {}", name);
        let tensor = tensor?;
        println!("Shape: {:?}", tensor.shape());
    }
    Ok(())
}
```
https://github.com/huggingface/safetensors/issues/375
closed
[ "Stale" ]
2023-11-02T02:11:11Z
2024-01-02T01:48:31Z
5
arunpatro
huggingface/safetensors
374
safetensors.*.save_file: the parameter name for the incoming tensors changes from "tensors" to "tensor_dict"
### Feature request

In the JAX, Torch, and Paddle APIs it is:

> tensors (Dict[str, torch.Tensor]) — The incoming tensors. Tensors need to be contiguous and dense.

Check: https://huggingface.co/docs/safetensors/api/torch#safetensors.torch.save

In NumPy:

> tensor_dict (Dict[str, np.ndarray]) — The incoming tensors. Tensors need to be contiguous and dense.

Check: https://huggingface.co/docs/safetensors/api/numpy#safetensors.numpy.save_file

Is there a reason for the name to change between frameworks?

### Motivation

Improve the documentation.

### Your contribution

I can submit a PR if that helps!
https://github.com/huggingface/safetensors/issues/374
closed
[ "Stale" ]
2023-11-02T00:41:14Z
2024-01-02T01:48:32Z
2
csaybar
huggingface/safetensors
373
Stream load models (load model larger than system memory)
### Feature request

I'm not very familiar with the details, but I'd like to load a 20 GB model while having only 8 GB of system memory. Currently, safetensors loads the entire model into system memory. Is it possible to load models incrementally / as a stream?

Related:
https://github.com/turboderp/exllama/issues/245
https://github.com/huggingface/safetensors/issues/67

Possibly related (writing is different from reading):
https://github.com/huggingface/safetensors/issues/291

### Motivation

Using swap causes unnecessary wear on SSDs. And it's silly to read a model from disk, just to write it back to disk as swap, and then read it again from disk. Alternatively, the model could be saved in a format that can be streamed directly to memory. Similarly, it's silly to require X amount of system memory to be available for just a few seconds while loading a large model.

### Your contribution

Unqualified to contribute.
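The safetensors layout itself supports exactly this kind of bounded, seek-based access. Below is a minimal standard-library sketch of the published format (an 8-byte little-endian header length, a JSON header mapping tensor names to `dtype`/`shape`/`data_offsets`, then the raw byte buffer): reading one tensor needs only a seek plus a bounded read, never the whole file. The real library exposes this via `safe_open` and memory mapping; the helper names here are hypothetical.

```python
import json
import os
import struct
import tempfile

def write_minimal_safetensors(path, tensors):
    # tensors: name -> (dtype_str, shape_list, raw_bytes)
    header, payload = {}, b""
    for name, (dtype, shape, raw) in tensors.items():
        start = len(payload)
        payload += raw
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [start, len(payload)]}
    hjson = json.dumps(header).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hjson)))  # 8-byte LE header length
        f.write(hjson)                          # JSON header
        f.write(payload)                        # raw tensor bytes

def read_one_tensor(path, name):
    # Only the header plus one tensor's byte range is ever read.
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(hlen))
        start, end = header[name]["data_offsets"]
        f.seek(8 + hlen + start)
        return header[name], f.read(end - start)

path = os.path.join(tempfile.mkdtemp(), "toy.safetensors")
write_minimal_safetensors(
    path, {"w": ("F32", [2], struct.pack("<2f", 1.0, 2.0))}
)
meta, raw = read_one_tensor(path, "w")
```

In practice, `safetensors.safe_open(path, framework="pt")` with `get_tensor(name)` gives the same bounded access without hand-parsing the header.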
https://github.com/huggingface/safetensors/issues/373
closed
[ "Stale" ]
2023-11-01T16:14:18Z
2024-01-03T01:48:07Z
6
erikschul
huggingface/text-embeddings-inference
59
how to resolve this compile error?
### System Info

cargo 1.73.0 (9c4383fb5 2023-08-26)
gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
cuda 11.8
V100

```
"-Wl,-Bdynamic" "-llayernorm" "-lcudart" "-lstdc++" "-lcuda" "-lnvrtc" "-lcurand" "-lcublas" "-lcublasLt" "-lssl" "-lcrypto" "-lgcc_s" "-lutil" "-lrt" "-lpthread" "-lm" "-ldl" "-lc" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-L" "/home/luoweichao/.rustup/toolchains/1.73.0-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-o" "/home/luoweichao/text-embeddings-inference/target/release/deps/text_embeddings_router-0345b2604448f561" "-Wl,--gc-sections" "-pie" "-Wl,-z,relro,-z,now" "-Wl,-O1" "-nodefaultlibs"
  = note: /opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: /home/luoweichao/text-embeddings-inference/target/release/build/candle-layer-norm-3b4dbfa3d047ac72/out/liblayernorm.a(ln_api.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
          /opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: final link failed: nonrepresentable section on output
          collect2: error: ld returned 1 exit status

error: could not compile `text-embeddings-router` (bin "text-embeddings-router") due to previous error
error: failed to compile `text-embeddings-router v0.3.0 (/home/luoweichao/text-embeddings-inference/router)`, intermediate artifacts can be found at `/home/luoweichao/text-embeddings-inference/target`.
```

### Information

- [ ] Docker
- [X] The CLI directly

### Tasks

- [ ] An officially supported command
- [ ] My own modifications

### Reproduction

```
cargo install --path router -F candle-cuda-volta --no-default-features
```

### Expected behavior

Builds successfully.
https://github.com/huggingface/text-embeddings-inference/issues/59
closed
[]
2023-10-31T11:35:02Z
2023-11-02T07:52:18Z
null
kingder
huggingface/optimum
1,497
about LCM onnx model
Hi! Can someone please tell me how we can use the LCM model in ONNX? I see you made a script to run it in ONNX, but what about the model? Can we simply use the normal Stable Diffusion ONNX conversion script for the LCM model too, or do we have to wait for someone to make a conversion script? Or could someone upload an ONNX-converted LCM model to Hugging Face and share it with us, please?

Kind regards

### Who can help?

@echarlaix
https://github.com/huggingface/optimum/issues/1497
closed
[ "bug" ]
2023-10-31T08:57:16Z
2024-01-04T14:21:54Z
6
Amin456789
huggingface/dataset-viewer
2,038
How to pass single quote in /filter endpoint "where" parameter?
See `https://huggingface.co/datasets/albertvillanova/lm_en_dummy2/viewer/default/train?f[meta][value]='{'file': 'file_4.txt'}'`

From `https://datasets-server.huggingface.co/filter?dataset=albertvillanova/lm_en_dummy2&config=default&split=train&where=meta='{'file': 'file_4.txt'}'`, we get:

```
{"error":"Parameter 'where' is invalid"}
```

We want to search for the value `{'file': 'file_4.txt'}` in the column `meta`.
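The escaping mechanics can be sketched with the standard library: double the single quotes inside the SQL string literal, then percent-encode the whole clause before placing it in the query string. Whether the `/filter` backend accepts SQL-style doubled quotes is an assumption to verify against the endpoint's documentation; `build_where` is a hypothetical helper:

```python
from urllib.parse import quote

def build_where(column, value):
    # SQL-style escaping: double any single quote inside the string literal,
    # then percent-encode the entire clause for use in a URL query string.
    escaped = value.replace("'", "''")
    return quote(f"{column}='{escaped}'", safe="")

clause = build_where("meta", "{'file': 'file_4.txt'}")
```

The resulting clause would be appended as `&where=<clause>`, with the inner quotes arriving at the server as doubled `%27%27` pairs rather than raw `'` characters that terminate the literal early.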
https://github.com/huggingface/dataset-viewer/issues/2038
closed
[ "bug", "documentation", "P1" ]
2023-10-30T22:21:24Z
2023-11-02T17:22:54Z
null
severo
huggingface/datasets
6,364
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
Hi, I am trying to load a local CSV dataset (similar to explodinggradients/fiqa) using `load_dataset`. When I try to pass `features`, I am facing the issue below.

CSV data sample (golden_dataset.csv):

| Question | Context | answer | groundtruth |
| --- | --- | --- | --- |
| "what is abc?" | "abc is this and that" | "abc is this" | "abc is this and that" |

```python
import csv

# built it based on https://huggingface.co/datasets/explodinggradients/fiqa/viewer/ragas_eval?row=0
mydict = [
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this", 'ground_truths': ["abc is this and that"]},
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this", 'ground_truths': ["abc is this and that"]},
    {'question': "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this", 'ground_truths': ["abc is this and that"]},
]
fields = ['question', 'contexts', 'answer', 'ground_truths']

with open('golden_dataset.csv', 'w', newline='\n') as file:
    writer = csv.DictWriter(file, fieldnames=fields)
    writer.writeheader()
    for row in mydict:
        writer.writerow(row)
```

Retrieved dataset:

```
DatasetDict({
    train: Dataset({
        features: ['question', 'contexts', 'answer', 'ground_truths'],
        num_rows: 1
    })
})
```

Code to reproduce the issue:

```python
from datasets import load_dataset, Features, Sequence, Value

encode_features = Features(
    {
        "question": Value(dtype='string', id=0),
        "contexts": Sequence(feature=Value(dtype='string', id=1)),
        "answer": Value(dtype='string', id=2),
        "ground_truths": Sequence(feature=Value(dtype='string', id=3)),
    }
)

eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features=encode_features)
```

Error trace:

```
---------------------------------------------------------------------------
ArrowNotImplementedError                  Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1925, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
-> 1925 for _, table in generator:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:192, in Csv._generate_tables(self, files)
--> 192 yield (file_idx, batch_idx), self._cast_table(pa_table)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:167, in Csv._cast_table(self, pa_table)
    165 if all(not require_storage_cast(feature) for feature in self.config.features.values()):
    166     # cheaper cast
--> 167     pa_table = pa.Table.from_arrays([pa_table[field.name] for field in schema], schema=schema)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:3781, in pyarrow.lib.Table.from_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:551, in pyarrow.lib.ChunkedArray.cast()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/compute.py:400, in cast(arr, target_type, safe, options, memory_pool)
--> 400 return call_function("cast", [arr], options, memory_pool)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()

ArrowNotImplementedError: Unsupported cast from string to list using function cast_list

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
Cell In[57], line 1
----> 1 eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv
```
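The root cause is that CSV has no list type: `csv.DictWriter` stringifies the list columns, so pyarrow is later asked to cast a plain string into a list, which it refuses. A hedged workaround sketch (standard library only): store the list columns as JSON Lines instead, which round-trips lists intact and which `load_dataset('json', data_files=...)` can read with the same `Features`:

```python
import json
import os
import tempfile

rows = [
    {"question": "what is abc?",
     "contexts": ["abc is this and that"],
     "answer": "abc is this",
     "ground_truths": ["abc is this and that"]},
]

# Write one JSON object per line; lists stay lists instead of being
# flattened into strings the way csv.DictWriter would flatten them.
path = os.path.join(tempfile.mkdtemp(), "golden_dataset.jsonl")
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Round-trip check: the list columns come back as real lists.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
```

The same `loaded` structure is what `load_dataset("json", data_files=path)` would see, so the `Sequence` features can be applied without a string-to-list cast.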
https://github.com/huggingface/datasets/issues/6364
closed
[]
2023-10-30T20:14:01Z
2023-10-31T19:21:23Z
2
divyakrishna-devisetty
huggingface/diffusers
5,575
How to set the "transformer_in" layer's hidden size in LoRA training?
### Describe the bug

I modified the text-to-image [LoRA](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) code as in Figure 1:

<img width="908" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/0639998b-8106-49d9-8761-c58014095e7e">

However, in the 3D UNet there is a "transformer_in" layer that does not exist in the 2D UNet, so I added handling for "transformer_in" in the code and set its "hidden_size" to `unet.config.block_out_channels[0]`, following the 3D UNet's definition at [this link](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_3d_condition.py) (Figure 2):

<img width="641" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/c23efadc-e22d-4bd9-aa3f-7e69bd83a7c2">

But there is a shape error, as in Figure 3:

![image](https://github.com/huggingface/diffusers/assets/52530394/7a279e34-2af8-4409-8e93-606f61fd506f)

### Reproduction

Load a 3D UNet. Adapt the LoRA code as in Figure 1.

### Logs

_No response_

### System Info

- `diffusers` version: 0.21.4
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyTorch version (GPU?): 2.0.1 (True)
- Huggingface_hub version: 0.18.0
- Transformers version: 4.26.0
- Accelerate version: 0.23.0
- xFormers version: 0.0.22.post7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@sayakpaul @patrickvonplaten @DN6 @yiyi
https://github.com/huggingface/diffusers/issues/5575
closed
[ "bug", "stale" ]
2023-10-30T03:44:32Z
2024-01-10T15:07:20Z
null
lxycopper
huggingface/diffusers
5,574
How to train a part of UNet attention parameters with LoRA
### Describe the bug

I adapted the LoRA training code in # to train my model. I only want to update the parameters in the "down block", so I commented out the code for the other attention blocks:

<img width="909" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/6b204ad8-e201-43b0-ab97-5d29a936e3c8">

However, I get an error at the line `unet.set_attn_processor(lora_attn_procs)`, as shown here:

<img width="1009" alt="image" src="https://github.com/huggingface/diffusers/assets/52530394/0d914626-fcbc-40a5-a254-8bc5f258fbdf">

### Reproduction

Comment out the code for the other attention blocks as in my first figure.

### Logs

_No response_

### System Info

diffusers 0.21.4
python 3.10.13
Ubuntu 18

### Who can help?

@sayakpaul @patr
https://github.com/huggingface/diffusers/issues/5574
closed
[ "bug", "stale" ]
2023-10-30T02:58:07Z
2023-12-08T15:05:16Z
null
lxycopper
huggingface/transformers.js
372
[Question] onnxruntime_binding.node issue on mac electron app
Hi, I'm getting this error on an Intel MacBook running an Electron Forge app:

```
(node:63267) UnhandledPromiseRejectionWarning: Error: Cannot find module '../bin/napi-v3/darwin/x64/onnxruntime_binding.node'
Require stack:
- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js
- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/node_modules/electron/dist/Electron.app/Contents/Resources/default_app.asar/main.js
    at Module._resolveFilename (node:internal/modules/cjs/loader:963:15)
    at n._resolveFilename (node:electron/js2c/browser_init:2:109411)
    at Module._load (node:internal/modules/cjs/loader:811:27)
    at f._load (node:electron/js2c/asar_bundle:2:13330)
    at Module.require (node:internal/modules/cjs/loader:1035:19)
    at require (node:internal/modules/cjs/helpers:102:18)
    at ./node_modules/@xenova/transformers/node_modules/onnxruntime-node/dist/binding.js (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:229:1)
    at __webpack_require__ (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:83093:42)
    at ./node_modules/@xenova/transformers/node_modules/onnxruntime-node/dist/backend.js (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:153:19)
```

I checked the path `../bin/napi-v3/darwin/x64/onnxruntime_binding.node` and it does exist in node_modules, so I'm not sure what's going on / whether this is a bug.
https://github.com/huggingface/transformers.js/issues/372
closed
[ "question" ]
2023-10-28T00:34:05Z
2023-11-01T21:56:19Z
null
samlhuillier
huggingface/transformers
27,107
How to export a Marian model in rust ?
Most models based on Marian are also available in Rust, such as Helsinki-NLP/opus-mt-en-roa. Is it possible to do this using transformers? Did you assist Helsinki-NLP in exporting the models to Rust?
https://github.com/huggingface/transformers/issues/27107
closed
[]
2023-10-27T13:01:13Z
2023-12-05T08:03:53Z
null
flutter-painter
huggingface/chat-ui
535
API format?
OK, so this may be a dumb question, but I am not sure where else to ask it. If we use this repo to deploy our app on HF, what is the format of the API parameters for calling our Space?
https://github.com/huggingface/chat-ui/issues/535
closed
[]
2023-10-26T21:56:22Z
2023-10-27T15:01:57Z
3
silvacarl2
huggingface/diffusers
5,538
Why is the pipeline_stable_diffusion_upscale.py file not using the encoder-decoder latent?
### Describe the bug

There is no training script for pipeline_stable_diffusion_upscale.py because the authors chose not to use the latent domain for the super-resolution task. Additionally, the U-Net implemented in pipeline_stable_diffusion_upscale.py only accepts 7 channels. How is this achieved?

### Reproduction

None

### Logs

_No response_

### System Info

None

### Who can help?

[AnasHXH](https://github.com/AnasHXH)
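On the 7-channel question, one explanation consistent with the pipeline (an assumption to verify against the pipeline source) is that the x4 upscaler conditions the UNet by concatenating the low-resolution RGB image with the noisy latent along the channel axis, so `conv_in` sees both at once:

```python
# Hedged sketch: where the upscaler UNet's 7 input channels would come from
# if the low-res image is concatenated channel-wise with the latent.
latent_channels = 4        # VAE latent channels
lowres_rgb_channels = 3    # conditioning image channels (RGB)

unet_in_channels = latent_channels + lowres_rgb_channels
```

Under that reading, training in pixel-conditioned latent space is why no plain text-to-image training script transfers directly to the upscaler.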
https://github.com/huggingface/diffusers/issues/5538
closed
[ "question", "stale" ]
2023-10-26T10:47:10Z
2023-12-08T15:05:44Z
null
AnasHXH
huggingface/chat-ui
534
Login issue with Google OpenID
I set up Google OpenID for my chat-ui. I have set the scope to openid and ./auth/userinfo.profile in the OAuth consent screen. I logged the data shared by Google with the app, and it was the following:

```
{
  sub: '****',
  picture: 'https://lh3.googleusercontent.com/****',
  email: 'shagun@****',
  email_verified: true,
  hd: '*****'
}
```

As you can see, the name is not being shared, and hence I am getting an error because name is a required field. How can I fix this?

Note: Google shares the name for some accounts, and for others it does not. This is my first time working with OpenID, so any help will be appreciated.
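One workaround, sketched here in Python as a hypothetical fallback (chat-ui itself is TypeScript, so this only illustrates the logic one might patch into the OIDC callback): when the provider omits the `name` claim, derive a display name from the claims Google does return.

```python
def display_name(profile):
    # Prefer the "name" claim; fall back to the local part of the email,
    # and finally to the stable "sub" identifier the provider always sends.
    if profile.get("name"):
        return profile["name"]
    if profile.get("email"):
        return profile["email"].split("@", 1)[0]
    return profile.get("sub", "user")
```

Whether chat-ui's login flow accepts such a derived value in place of the required field is an assumption; requesting the `profile` scope (which carries `name`) is the first thing to double-check.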
https://github.com/huggingface/chat-ui/issues/534
closed
[]
2023-10-26T10:00:05Z
2023-10-26T10:49:36Z
3
shagunhexo
huggingface/candle
1,185
Question: How to create a Var from MmapedSafetensors
Hello everybody, I was wondering how to create a `Var` instance from an `MmapedSafetensors` `TensorView`. I have tried using `candle_core::Var::from_slice(tensor.data(), tensor.shape(), &device)?`, but I get the error:

```
Error: Shape mismatch, got buffer of size 90177536 which is compatible with shape [11008, 4096]
```

Is there a better way to do this? In addition, I notice the buffer is of type `u8`, which is definitely not the data type the safetensors should be decoded as. Where can I find out how `VarBuilder` does this?

**In summary, I have 2 questions:**
- How do I decode a `TensorView` into a `Var`?
- Or, if that is not feasible, how does `VarBuilder` do this?
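The numbers in the error explain themselves once the units are clear: `TensorView::data()` yields raw bytes, and 90,177,536 bytes over 11008 x 4096 elements is exactly 2 bytes per element, i.e. an f16 tensor being handed to an API that expected typed elements. A standard-library sketch of the arithmetic and of byte-level f16 decoding (the candle-side answer is presumably a dtype-aware load, as `VarBuilder` and the safetensors helpers do; verify against the candle docs):

```python
import struct

# Units check: the "buffer of size 90177536" from the error is in bytes.
shape = (11008, 4096)
elements = shape[0] * shape[1]
buffer_bytes = 90_177_536            # size reported by the error message
bytes_per_element = buffer_bytes // elements  # 2 bytes -> f16 storage

# Decoding sketch: reinterpret pairs of bytes as little-endian f16 values,
# which is what a dtype-aware loader does before building a typed tensor.
raw = struct.pack("<4e", 1.0, 2.0, 3.0, 4.0)      # toy 8-byte f16 buffer
values = struct.unpack(f"<{len(raw) // 2}e", raw)
```

So `Var::from_slice` over the `u8` slice fails because it counts 90M "elements" against a 45M-element shape; the fix is to decode the bytes as f16 first (or load through the dtype-aware path) rather than passing raw bytes.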
https://github.com/huggingface/candle/issues/1185
closed
[]
2023-10-26T09:41:37Z
2023-10-26T11:26:29Z
null
EricLBuehler
huggingface/datasets
6,353
load_dataset save_to_disk load_from_disk error
### Describe the bug

datasets version: 2.10.1

I ran `load_dataset` and `save_to_disk` successfully on Windows 10 (**and `load_from_disk('/LLM/data/wiki')` also succeeds on Windows 10**). I then copied the dataset `/LLM/data/wiki` to an Ubuntu system, but when I run `load_from_disk('/LLM/data/wiki')` on Ubuntu, something weird happens:

```
load_from_disk('/LLM/data/wiki')
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1874, in load_from_disk
    return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1309, in load_from_disk
    dataset_dict[k] = Dataset.load_from_disk(
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1543, in load_from_disk
    fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 610, in get_fs_token_paths
    chain = _un_chain(urlpath0, storage_options or {})
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 325, in _un_chain
    cls = get_filesystem_class(protocol)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/registry.py", line 232, in get_filesystem_class
    raise ValueError(f"Protocol not known: {protocol}")
ValueError: Protocol not known: /LLM/data/wiki
```

It seems that something went wrong with the arrow file? How can I solve this, since currently I cannot `save_to_disk` on the Ubuntu system?

### Steps to reproduce the bug

datasets version: 2.10.1

### Expected behavior

datasets version: 2.10.1

### Environment info

datasets version: 2.10.1
https://github.com/huggingface/datasets/issues/6353
closed
[]
2023-10-26T03:47:06Z
2024-04-03T05:31:01Z
5
brisker
huggingface/text-embeddings-inference
43
How to add custom python file for pretrained model on TEI server?
### System Info

I am pretty new to this space, please help. I have made a Python file with a pre-trained model, which generates embeddings. What I want is to:

1. Create a Docker image of the Python file
2. Run it on a TEI server

How can we do this?

### Information

- [ ] Docker
- [ ] The CLI directly

### Tasks

- [ ] An officially supported command
- [X] My own modifications

### Reproduction

Need to host a custom Python file (which runs a sentence embedding model) on a TEI server.

### Expected behavior

NA
https://github.com/huggingface/text-embeddings-inference/issues/43
open
[]
2023-10-25T16:09:52Z
2023-10-25T17:57:46Z
null
cken21
huggingface/llm-vscode
100
How to generate the response from locally hosted end point in vscode?
Hi, I managed to point the llm-vscode extension at a locally running endpoint. Now, when I select content like the following:

```
# function to sum 2 numbers in python
```

and then run Cmd+Shift+A > "llm: show code attribution", my local endpoint is invoked and gives a relevant response in the format below:

```json
{
  "details": {
    "best_of_sequences": [
      {
        "finish_reason": "length",
        "generated_text": "test",
        "generated_tokens": 1,
        "prefill": [{ "id": 0, "logprob": -0.34, "text": "test" }],
        "seed": 42,
        "tokens": [{ "id": 0, "logprob": -0.34, "special": false, "text": "test" }],
        "top_tokens": [[{ "id": 0, "logprob": -0.34, "special": false, "text": "test" }]]
      }
    ],
    "finish_reason": "length",
    "generated_tokens": 1,
    "prefill": [{ "id": 0, "logprob": -0.34, "text": "test" }],
    "seed": 42,
    "tokens": [{ "id": 0, "logprob": -0.34, "special": false, "text": "test" }],
    "top_tokens": [[{ "id": 0, "logprob": -0.34, "special": false, "text": "test" }]]
  },
  "generated_text": "test"
}
```

The "generated_text" value is replaced with the actual response containing the Python sum function. But after the 200 response, I can't see anything related to the generated code in VS Code. Please suggest how I can get the generated response to show up in VS Code itself.
https://github.com/huggingface/llm-vscode/issues/100
open
[ "stale" ]
2023-10-25T15:55:40Z
2023-11-25T01:46:01Z
null
dkaus1
huggingface/tokenizers
1,375
Question: what is the add_special_tokens parameter of Tokenizer::encode?
As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks!
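As a small offline sketch of the behavior being asked about (using the `tokenizers` Python bindings with a made-up toy vocab and template): `add_special_tokens` controls whether the tokenizer's post-processor wraps the sequence with its template tokens — so yes, if the template adds bos/eos (or [CLS]/[SEP] as here), that is where it happens.

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.processors import TemplateProcessing

# Tiny hypothetical vocab, just for demonstration.
vocab = {"[CLS]": 0, "[SEP]": 1, "[UNK]": 2, "hello": 3, "world": 4}
tok = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tok.pre_tokenizer = Whitespace()
# The post-processor is what inserts the special tokens around the sequence.
tok.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    special_tokens=[("[CLS]", 0), ("[SEP]", 1)],
)

with_special = tok.encode("hello world")  # add_special_tokens=True by default
without_special = tok.encode("hello world", add_special_tokens=False)
print(with_special.tokens)     # ['[CLS]', 'hello', 'world', '[SEP]']
print(without_special.tokens)  # ['hello', 'world']
```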
https://github.com/huggingface/tokenizers/issues/1375
closed
[]
2023-10-25T09:55:55Z
2023-10-25T18:43:54Z
null
EricLBuehler
huggingface/candle
1,173
Question: what is the add_special_tokens parameter of Tokenizer::encode?
As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks!
https://github.com/huggingface/candle/issues/1173
closed
[]
2023-10-25T09:30:01Z
2023-10-25T09:55:42Z
null
EricLBuehler
huggingface/dataset-viewer
2,009
Are URLs in rows response sanitized?
see https://github.com/huggingface/moon-landing/pull/7798#discussion_r1369813236 (internal) > Is "src" validated / sanitized? > if not there is a potential XSS exploit here (you can inject javascript code in an image src) > Are S3 object names sanitized? If no, it should be the case in dataset-server side
https://github.com/huggingface/dataset-viewer/issues/2009
closed
[ "question", "security", "P1" ]
2023-10-24T15:10:29Z
2023-11-21T15:39:13Z
null
severo
huggingface/chat-ui
528
Websearch error in proxy
I'm developing in a proxy environment, I'm guessing it's because **websearch module can't import the model(Xenova/gte-small) from huggingface.** I don't want to use websearch, but it tries to load the gte-small model anyway, and I get an error. ``` 11:36:36 AM [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts: |- TypeError: fetch failed at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24) at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18) at async Promise.all (index 1) at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16) at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48) at async Promise.all (index 0) at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5) at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19) 11:36:36 AM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.ts: failed to import "/src/lib/server/websearch/sentenceSimilarity.ts" |- TypeError: fetch failed at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24) at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18) at async Promise.all (index 1) at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16) at async AutoTokenizer.from_pretrained 
(file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48) at async Promise.all (index 0) at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5) at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19) 11:36:36 AM [vite] Error when evaluating SSR module /home/dev/chat-ui/src/routes/conversation/[id]/+server.ts: failed to import "/src/lib/server/websearch/runWebSearch.ts" |- TypeError: fetch failed at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24) at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18) at async Promise.all (index 1) at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16) at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48) at async Promise.all (index 0) at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5) at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19) ``` 1. Is there a workaround to downloading the model directly? 2. Need Improve: the proxy related code. 3. Need Improve: add option to turn off websearch initialization. (
https://github.com/huggingface/chat-ui/issues/528
closed
[ "enhancement", "support", "websearch" ]
2023-10-24T03:53:25Z
2023-11-15T15:44:01Z
6
calycekr
huggingface/candle
1,165
How do I raise 2 to the power of a tensor?
How do I write: ```python x = 2 ** (y * z) ``` Where `y` is an integer and `z` is a tensor? I tried to use `powf`, but it only works with float arguments.
https://github.com/huggingface/candle/issues/1165
closed
[]
2023-10-23T22:13:28Z
2023-10-24T04:28:23Z
null
laptou
huggingface/candle
1,163
how to modify the contents of a Tensor?
what is the `candle` equivalent of this? ```python t[2, :] *= 2; ```
https://github.com/huggingface/candle/issues/1163
closed
[]
2023-10-23T19:58:50Z
2023-10-24T04:28:10Z
null
laptou
huggingface/transformers.js
367
[Question] How to include ort-wasm-simd.wasm with the bundle?
How can I include ort-wasm-simd.wasm with the bundle? I'm using this on an app that needs to be able to run offline, so I'd like to package this with the lib. I'm also running this on a web worker, so that file gets requested 1+n times per user session when the worker starts. <img width="725" alt="image" src="https://github.com/xenova/transformers.js/assets/1594723/39f7fc6e-0914-4b40-a3bc-aa17ed53851c">
https://github.com/huggingface/transformers.js/issues/367
closed
[ "question" ]
2023-10-23T04:54:16Z
2023-10-26T08:27:28Z
null
mjp0
huggingface/autotrain-advanced
310
How to determine the LMTrainingType ? chat or generic mode?
It is said that there are two modes (chat and generic), but I cannot find a way to determine it.
https://github.com/huggingface/autotrain-advanced/issues/310
closed
[]
2023-10-21T14:28:59Z
2023-11-26T04:31:08Z
null
qiaoqiaoLF
huggingface/datasets
6,324
Conversion to Arrow fails due to wrong type heuristic
### Describe the bug I have a list of dictionaries with valid/JSON-serializable values. One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on. When trying to convert this list to a dataset with `Dataset.from_list()`, I always get `ArrowInvalid: Could not convert '1' with type str: tried to convert to int64`, presumably because pyarrow tries to convert the values to integers. Is there any way to circumvent this and fix the dtypes? I didn't find anything in the documentation. ### Steps to reproduce the bug * create a list of dicts with one key being a string of an integer for the first few thousand occurrences and try to convert to a dataset. ### Expected behavior There shouldn't be an error (e.g. some flag to turn off automatic str to numeric conversion). ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.1
https://github.com/huggingface/datasets/issues/6324
closed
[]
2023-10-20T23:20:58Z
2023-10-23T20:52:57Z
2
jphme
huggingface/transformers.js
365
[Question] Headers not defined
Hi friends! Neither headers nor fetch seems to be getting resolved.. trying to run this on a nodejs application... file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201 return fetch(urlOrPath, { headers }); ^ TypeError: fetch is not a function at getFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201:16) at getModelFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:468:30) at async getModelJSON (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:542:18) at async Promise.all (index 0) at async loadTokenizer (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:52:16) at async Function.from_pretrained (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:3826:48) at async Promise.all (index 0) at async loadItems (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2193:5) at async pipeline (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2139:19) at async Server.<anonymous> (/home/rajesh/code/ai/js/invoice/inv.js:65:24) ------- Unable to load from local path "/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/tokenizer.json": "ReferenceError: Headers is not defined" Unable to load from local path "/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/tokenizer_config.json": "ReferenceError: Headers is not defined" Unable to load from local path "/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/config.json": "ReferenceError: Headers is not defined" 
file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:188 const headers = new Headers(); ^ ReferenceError: Headers is not defined at getFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:188:25) at getModelFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:468:30) at async getModelJSON (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:542:18) at async Promise.all (index 0) at async loadTokenizer (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:52:16) at async Function.from_pretrained (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:3826:48) at async Promise.all (index 0) at async loadItems (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2193:5) at async pipeline (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2139:19)
https://github.com/huggingface/transformers.js/issues/365
closed
[ "question" ]
2023-10-20T16:29:28Z
2023-11-22T06:15:35Z
null
trilloc
huggingface/sentence-transformers
2,335
How to get individual token embeddings of a sentence from sentence transformers
How to get individual token embeddings of a sentence from sentence transformers
https://github.com/huggingface/sentence-transformers/issues/2335
closed
[]
2023-10-20T06:49:00Z
2023-12-18T16:21:32Z
null
pradeepdev-1995
huggingface/safetensors
371
Non-blocking `save_file`
### Feature request Add the option to make calls to `safetensors.*.save_file` non-blocking to allow execution to continue while large tensors / models are being saved. ### Motivation I'm writing a script to bulk compute embeddings; however, I am getting poor GPU utilisation due to time spent saving to disk with `safetensors`. It would be nice if saving was non-blocking to allow execution to continue. ### Your contribution I am unsure how this would work, but could give it a try if someone pointed me to the relevant code and some high level steps. Happy to defer to more experienced developers~ One issue I can see with this feature is how to deal with tensors being changed after the call to `save_file` but before saving is actually complete. A copy would work, but maybe not appropriate for large models / tensors.
https://github.com/huggingface/safetensors/issues/371
closed
[ "Stale" ]
2023-10-20T05:42:47Z
2023-12-11T01:48:39Z
1
vvvm23
huggingface/huggingface_hub
1,767
Request: discerning what the default model is when using `InferenceClient` without a `model`
When doing something like the below: ```python client = InferenceClient() # NOTE: no model specified client.feature_extraction("hi") ``` It would be cool to know what model is being used behind the scenes. How can one figure this out programmatically? I am thinking there may be a need for a new `InferenceClient` method resembling the following: ```python def get_default_model(task: str) -> str: """Get the model's name used by default for the input task.""" ```
https://github.com/huggingface/huggingface_hub/issues/1767
closed
[ "enhancement", "good first issue" ]
2023-10-19T20:56:53Z
2023-11-08T13:47:14Z
null
jamesbraza
huggingface/diffusers
5,457
What is function of `attention_mask` in `get_attention_scores`?
What is the function of `attention_mask` in `get_attention_scores`? I guess it is used to ignore some values when calculating the attention map. I cannot find an example in the diffusers library that actually uses this `attention_mask`. Could you provide an example of how to use it? https://github.com/huggingface/diffusers/blob/e5168588864d72a4dca37e90318c6b11da0eaaf1/src/diffusers/models/attention_processor.py#L454
https://github.com/huggingface/diffusers/issues/5457
closed
[ "stale" ]
2023-10-19T18:14:38Z
2023-11-28T15:05:41Z
null
g-jing
huggingface/accelerate
2,068
How to use cpu_offload function, attach_align_device_hook function,
attach_align_device_hook is called in the cpu_offload function. How is skip_keys used in attach_align_device_hook? def attach_align_device_hook( module: torch.nn.Module, execution_device: Optional[torch.device] = None, offload: bool = False, weights_map: Optional[Mapping] = None, offload_buffers: bool = False, module_name: str = "", skip_keys: Optional[Union[str, List[str]]] = None, preload_module_classes: Optional[List[str]] = None, ): I wonder what the role of skip_keys is? I see this function used in diffusers stable-diffusion inference via enable_sequential_cpu_offload. What I want to achieve is to keep some of the stable-diffusion submodules running on the GPU, so that VRAM occupancy can be controlled.
https://github.com/huggingface/accelerate/issues/2068
closed
[]
2023-10-19T10:25:07Z
2023-11-26T15:06:04Z
null
LeonNerd
huggingface/accelerate
2,067
how to automatically load state dict from memory to a multi-gpu device?
``` Python config_dict = AutoConfig.from_pretrained(model_config, device_map="auto") model = AutoModelForCausalLM.from_config(config_dict) raw_state_dict = torch.load(args.model_path, map_location="cpu") state_dict = convert_ckpt(raw_state_dict) model.load_state_dict(state_dict, strict=False) ``` `model.load_state_dict(state_dict, strict=False)` only loads state dict on a single gpu, even when `device_map="auto"` is set by `AutoConfig`. Additionally, the `load_checkpoint_and_dispatch` func only accepts a file path as the `checkpoint` parameter. Is there any way to automatically load state dict from memory to a multi-gpu device?
https://github.com/huggingface/accelerate/issues/2067
closed
[]
2023-10-19T05:57:39Z
2023-12-22T15:06:31Z
null
tlogn
huggingface/accelerate
2,064
How to use `gather_for_metrics()` with decoder-generated strings to compute rouge score?
I am fine-tuning an encoder-decoder model and during the validation step, using the `.generate` method to generate tokens from the decoder that are subsequently decoded into strings (in this case classes). These generations are occurring across 8 GPUs and I am using Accelerate to manage the distribution. My hope was to append these strings to lists, and pass the lists to `gather_for_metrics()` on each GPU to get a "master list" of predictions and references, added to the rouge metric and then computed: ```python predictions, references = accelerator.gather_for_metrics( (predictions, references) ) rouge_metric.add_batch( predictions=predictions, references=references, ) rouge_score = rouge_metric.compute(rouge_types=["rougeL"], use_aggregator=True)["rougeL"] ``` After encountering some strange errors, I noticed that `gather_for_metrics()` will [only interact with tensors](https://huggingface.co/docs/accelerate/v0.19.0/en/package_reference/accelerator#accelerate.Accelerator.gather_for_metrics). And from what I can tell, you cannot create a torch.Tensor with string members. How do the accelerate folks recommend using `gather_for_metrics()` with decoder-generated strings?
https://github.com/huggingface/accelerate/issues/2064
closed
[ "solved" ]
2023-10-18T19:25:29Z
2023-12-25T15:07:03Z
null
plamb-viso
huggingface/transformers.js
364
[Question] Error in getModelJSON with React
Hey, I am trying to transcribe audio to text using transformers.js. I tried two ways: 1. https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesautomaticspeechrecognitionpipeline 2. https://huggingface.co/docs/transformers.js/tutorials/react but I seem to get an error like this ![image](https://github.com/xenova/transformers.js/assets/67155124/bfa37f1b-6b57-42f9-8792-8542fc2fc958) Files for your reference: https://filebin.net/88munmsfk4u0127m Please do let me know if I am doing something wrong or what the best way is using ReactJS
https://github.com/huggingface/transformers.js/issues/364
closed
[ "question" ]
2023-10-18T16:57:20Z
2024-01-24T19:54:17Z
null
ajaykrupalk
huggingface/transformers.js
363
[Question] Build step process for Vercel
Hi, I am currently in the process of trying to deploy to Vercel using Nextjs. I am using pnpm as my package manager and have put the model in the public folder. I hit this error when building; is there some post-install step necessary, as #295 has done? I don't understand why this step is necessary. ``` An error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/var/task/node_modules/.pnpm/@xenova+transformers@2.6.2/node_modules/@xenova/transformers/.cache' ```
https://github.com/huggingface/transformers.js/issues/363
open
[ "question" ]
2023-10-18T00:27:18Z
2024-04-06T06:23:06Z
null
kyeshmz
huggingface/setfit
432
[Q] How to ensure reproducibility
Can someone explain how to ensure reproducibility of a pre-trained model ("sentence-transformers/paraphrase-mpnet-base-v2")? I thought that the result would be reproducible because SetFitTrainer() has a default random seed in its constructor, but found that it was not the case. SetFitTrainer source code indicates that "to ensure reproducibility across runs, I need to use [`~SetTrainer.model_init`] function to instantiate the model". But, I don't understand what it entails. Is there an example that I can follow? Any help would be highly appreciated. Thanks,
https://github.com/huggingface/setfit/issues/432
closed
[]
2023-10-17T23:47:46Z
2023-12-06T13:19:54Z
null
youngjin-lee
huggingface/chat-ui
519
.env.local prepromt env variable with multi lines
Hi, I have a preprompt which is basically a 2-shot inference: a very long text (around 1200 lines) that I want to add as a preprompt, but the env file does not allow multi-line text as a variable. Any idea how to handle this?
https://github.com/huggingface/chat-ui/issues/519
open
[]
2023-10-17T18:34:30Z
2023-11-07T13:11:21Z
6
RachelShalom
huggingface/optimum
1,459
nougat to onnx
### Feature request I would like to convert the [nougat](https://huggingface.co/facebook/nougat-base) model to ONNX; is it possible to do this through optimum? ### Motivation Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use markdown format.
https://github.com/huggingface/optimum/issues/1459
closed
[]
2023-10-17T10:03:15Z
2024-08-27T06:16:17Z
3
arvisioncode
huggingface/diffusers
5,416
How to correctly implement a class-conditional model
Hi, I'd like to implement a DDPM that is class-conditioned, but not conditioned on anything else (no text), using `UNet2DConditionModel`. I'm training from scratch. I'm calling the model with `noise_pred = model(noisy_images, timesteps, class_labels=class_labels, return_dict=False)[0]`, but I get the error `UNet2DConditionModel.forward() missing 1 required positional argument: 'encoder_hidden_states'`. However, when I set `encoder_hidden_states` to `None`, I get `TypeError: AttnDownBlock2D.forward() got an unexpected keyword argument 'scale'`. I'm not sure what `encoder_hidden_states` should be set to since I'm only using class conditioning. Thanks!
https://github.com/huggingface/diffusers/issues/5416
closed
[]
2023-10-16T20:53:41Z
2023-10-16T21:02:39Z
null
nickk124
huggingface/chat-ui
511
ChatUI on HuggingFace Spaces errors out with PermissionError: [Errno 13] Permission denied
When I try following the below two tutorials I hit the same error, where the container code tries to create a directory and fails due to permission issues on the host tutorials: 1. https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces 2. https://huggingface.co/blog/Llama2-for-non-engineers Note: I have set the env vars `HUGGING_FACE_HUB_TOKEN` and in a prior attempt `HF_TOKEN` as well. <img width="1252" alt="Screenshot 2023-10-16 at 1 25 04 AM" src="https://github.com/huggingface/chat-ui/assets/9070365/6a54c653-ed30-4bcf-af83-80dc04ce2bc1"> stack trace on hugging face space ``` Traceback (most recent call last): File "/opt/conda/bin/text-generation-server", line 8, in <module> sys.exit(app()) File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 131, in download_weights utils.download_and_unload_peft( File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/peft.py", line 38, in download_and_unload_peft os.makedirs(model_id, exist_ok=True) File "/opt/conda/lib/python3.9/os.py", line 215, in makedirs makedirs(head, exist_ok=exist_ok) File "/opt/conda/lib/python3.9/os.py", line 225, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: 'skrelan' ```
https://github.com/huggingface/chat-ui/issues/511
open
[ "support", "spaces" ]
2023-10-16T08:29:06Z
2023-12-17T02:58:52Z
3
Skrelan
huggingface/candle
1,105
How to run a model in Fp16?
EDIT: Never mind, see below comment
https://github.com/huggingface/candle/issues/1105
closed
[]
2023-10-16T03:32:16Z
2023-10-18T19:40:54Z
null
joeyballentine
huggingface/candle
1,104
How to load .pth file weights?
I've been experimenting with candle and re-implementing ESRGAN in it. I ended up needing to convert a couple .pth files I have into .safetensors format in python in order to load them into the VarBuilder. I saw in the docs that this supports loading pytorch weights directly, but there does not seem to be an example of how to do that. I looked into the pickle module included in the library and got as far as being able to read the weights into a pickle format with TensorInfo, but then I got stuck trying to convert those to tensors and get them into a format VarBuilder would accept. An example of how to either load these weights or convert them to safetensors format in rust would be great, thanks!
https://github.com/huggingface/candle/issues/1104
open
[]
2023-10-16T03:29:53Z
2023-10-19T22:01:42Z
null
joeyballentine
huggingface/datasets
6,303
Parquet uploads off-by-one naming scheme
### Describe the bug I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored? <img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71e7ce"> The `-SSSSS-of-NNNNN` seems to be used widely across the codebase. The section that creates the part in my screenshot is here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5287 There are also some edits to this section in the single commit branch. ### Steps to reproduce the bug 1. Upload a dataset that requires at least two parquet files in it 2. Observe the naming scheme ### Expected behavior The couple options here are of course **1. keeping it as is** **2. Starting the index at 1:** train-00001-of-00002-{hash}.parquet train-00002-of-00002-{hash}.parquet **3. My preferred option** (which would solve my specific issue), dropping the total entirely: train-00000-{hash}.parquet train-00001-{hash}.parquet This also solves an issue that will occur with an `append` variable for `push_to_hub` (see https://github.com/huggingface/datasets/issues/6290) where as you add a new parquet file, you need to rename everything in the repo as well. However, I know there are parts of the repo that use 0 as the starting file or may require the total, so raising the question for discussion. ### Environment info - `datasets` version: 2.14.6.dev0 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.18.0 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
https://github.com/huggingface/datasets/issues/6303
open
[]
2023-10-14T18:31:03Z
2023-10-16T16:33:21Z
4
ZachNagengast
huggingface/diffusers
5,392
How to train an unconditional latent diffusion model ?
It seems that there is only one available unconditional LDM model (CompVis/ldm-celebahq-256). ```python pipeline = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256") ``` How can I train this unconditional model on my own dataset? The LDM model includes the training of both `VQModel` and `UNet2DModel`, but the [official training examples](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) seem not to be fully applicable.
https://github.com/huggingface/diffusers/issues/5392
closed
[]
2023-10-14T03:32:34Z
2024-02-16T08:59:49Z
null
Rashfu
huggingface/safetensors
368
Streaming weights into a model directly?
### Feature request Hi! I'm curious whether there is a way to stream model weights from disk into the on-GPU model directly? That is, [I see](https://huggingface.co/docs/safetensors/speed#gpu-benchmark) that by settings `os.environ["SAFETENSORS_FAST_GPU"] = "1"` and using `load_file`, you can stream the weights themselves from disk to GPU. But if I understand correctly, one still has to wait for all of the weights to be moved to GPU before they can subsequently be loaded into the model itself: first load the weights to GPU by some means (possibly streaming), then `model.load(weights)`, schematically. Is there a way to overlap the loading-into-model step with the streaming from disk? Is something like that possible? Or already implemented somewhere? ### Motivation Faster model loading. ### Your contribution I don't know `rust`, but would be happy to contribute `python`-side. Just not sure if the request is feasible.
https://github.com/huggingface/safetensors/issues/368
closed
[ "Stale" ]
2023-10-13T15:21:33Z
2023-12-11T01:48:41Z
1
garrett361
huggingface/huggingface_hub
1,734
Docs request: what is loaded/loadable?
When working with `get_model_status`: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.get_model_status It tells you if the model is loadable and/or loaded. The question is, what does this mean? - What does "loaded" mean... what is it loaded into? - If something isn't loaded, but is loadable, how can one load it?
https://github.com/huggingface/huggingface_hub/issues/1734
closed
[]
2023-10-13T04:59:47Z
2023-10-17T14:18:11Z
null
jamesbraza
huggingface/trl
868
What is the difference of these two saved checkpoints in sft_llama2 example?
I am trying to understand this https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/sft_llama2.py#L206C1-L206C1 `trainer.model.save_pretrained(output_dir)` seems to already save the base+LoRA model to "final_checkpoint". Then what is the purpose of `model = model.merge_and_unload()` and saving it again to "final_merged_checkpoint"? ``` trainer.save_model(script_args.output_dir) output_dir = os.path.join(script_args.output_dir, "final_checkpoint") trainer.model.save_pretrained(output_dir) # Free memory for merging weights del base_model torch.cuda.empty_cache() model = AutoPeftModelForCausalLM.from_pretrained(output_dir, device_map="auto", torch_dtype=torch.bfloat16) model = model.merge_and_unload() output_merged_dir = os.path.join(script_args.output_dir, "final_merged_checkpoint") model.save_pretrained(output_merged_dir, safe_serialization=True) ```
https://github.com/huggingface/trl/issues/868
closed
[]
2023-10-13T04:31:57Z
2023-10-30T17:15:35Z
null
Emerald01
huggingface/blog
1,577
How to use mAP metric for object detection task?
I use the pretrained checkpoint `facebook/detr-resnet-50`. How can I use mAP as the evaluation metric? ``` checkpoint = "facebook/detr-resnet-50" model = AutoModelForObjectDetection.from_pretrained( checkpoint, ..., ignore_mismatched_sizes=True, ) metric = evaluate.load('repllabs/mean_average_precision') def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model=model, args=training_args, data_collator=collate_fn, train_dataset=dataset["train"].with_transform(transform_aug_ann), eval_dataset=dataset["test"].with_transform(transform_aug_ann), compute_metrics=compute_metrics, tokenizer=image_processor, ) ``` I tried this way, but I get some errors here
https://github.com/huggingface/blog/issues/1577
open
[]
2023-10-12T13:58:52Z
2023-12-04T12:01:33Z
null
IamSVP94
huggingface/accelerate
2,051
Accelerate Examples: What is expected to print on terminal?
### System Info ```Shell - `Accelerate` version: 0.23.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Numpy version: 1.26.0 - PyTorch version (GPU?): 1.13.1 (True) - PyTorch XPU available: False - PyTorch NPU available: False - System RAM: 1007.69 GB - GPU type: NVIDIA A100-SXM4-40GB - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: fp16 - use_cpu: False - debug: False - num_processes: 2 - machine_rank: 0 - num_machines: 1 - gpu_ids: 3,4 - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`) - [ ] My own task or dataset (give details below) ### Reproduction I was trying to run a simple example (`nlp_example.py`) to kind of perform the equivalent of a hello world task in accelerate, but unfortunately, I'm uncertain as to whether it's working correctly, and I'm somewhat embarrassed to have to post this issue ticket to seek assistance. πŸ˜… I ran `$ python examples/nlp_example.py --cpu ` and got this output: ```bash Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. 
``` I believe the program continues to run after the above message is printed, because control of the terminal's prompt isn't returned to me. There isn't a tqdm bar, progress bar, or any sign of life to indicate that the example is running. It would be great if someone who has had some success running any basic accelerate example scripts could chime in πŸ™‚ ### Expected behavior Some sign of life to indicate that the example is running fine.
https://github.com/huggingface/accelerate/issues/2051
closed
[]
2023-10-12T13:50:40Z
2023-10-12T15:06:44Z
null
davidleejy
huggingface/text-generation-inference
1,137
When I start the model, I get a warning message. I want to know why and how to solve it.
### System Info - OS version: Debian GNU/Linux 11 (bullseye) - Commit sha: 00b8f36fba62e457ff143cce35564ac6704db860 - Cargo version: 1.70.0 - model: Starcoder - nvidia-smi: ``` Thu Oct 12 18:23:03 2023 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA A800-SXM4-80GB On | 00000000:4B:00.0 Off | 0 | | N/A 29C P0 73W / 400W | 36679MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 1 NVIDIA A800-SXM4-80GB On | 00000000:51:00.0 Off | 0 | | N/A 31C P0 62W / 400W | 5MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 2 NVIDIA A800-SXM4-80GB On | 00000000:6A:00.0 Off | 0 | | N/A 31C P0 61W / 400W | 5MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 3 NVIDIA A800-SXM4-80GB On | 00000000:6F:00.0 Off | 0 | | N/A 29C P0 61W / 400W | 5MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 4 NVIDIA A800-SXM4-80GB On | 00000000:8D:00.0 Off | 0 | | N/A 28C P0 61W / 400W | 5MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 5 NVIDIA A800-SXM4-80GB On | 00000000:92:00.0 Off | 0 | | N/A 30C P0 62W / 400W | 5MiB / 81920MiB | 0% Default | | | | Disabled | 
+-----------------------------------------+----------------------+----------------------+ | 6 NVIDIA A800-SXM4-80GB On | 00000000:C9:00.0 Off | 0 | | N/A 32C P0 67W / 400W | 78233MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 7 NVIDIA A800-SXM4-80GB On | 00000000:CF:00.0 Off | 0 | | N/A 29C P0 58W / 400W | 5MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| +---------------------------------------------------------------------------------------+ ``` ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [ ] An officially supported command - [X] My own modifications ### Reproduction My execution command is: ``` CUDA_VISIBLE_DEVICES=0 /workspace/xieshijie/text-generation-inference/target/release/deps/text_generation_launcher-b64a71565ded74a5 --model-id /workspace/xieshijie/huggingface-models/starcoder2/models--bigcode--starcoder/snapshots/e117ab3b3d0769fd962bd48b099de711757a3d60 --port 6006 --max-input-length 8000 --max-total-tokens 8192 --max-batch-prefill
https://github.com/huggingface/text-generation-inference/issues/1137
closed
[]
2023-10-12T10:33:38Z
2023-10-19T07:02:58Z
null
coder-xieshijie
huggingface/datasets
6,299
Support for newer versions of JAX
### Feature request Hi, I like your idea of adapting the datasets library to be usable with JAX. Thank you for that. However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX (<= 0.3), which is very cumbersome! What is the rationale for such a limitation? Can you remove it, please? Thanks. ### Motivation This library is unusable with newer versions of JAX. ### Your contribution Yes.
https://github.com/huggingface/datasets/issues/6299
closed
[ "enhancement" ]
2023-10-12T10:03:46Z
2023-10-12T16:28:59Z
0
ddrous
huggingface/diffusers
5,372
How to use safety_checker in StableDiffusionXLPipeline?
### Describe the bug I want to use safety_checker in StableDiffusionXLPipeline, but it seems that `safety_checker` keyword does not take effect ### Reproduction ```python pipe = StableDiffusionXLPipeline.from_pretrained( "nyxia/mysterious-xl", torch_dtype=torch.float16, safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"), ).to("cuda") pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) result = pipe( prompt="1girl", ) ``` ### Logs I got folling error ```shell Keyword arguments {'safety_checker': StableDiffusionSafetyChecker( (vision_model): CLIPVisionModel( (vision_model): CLIPVisionTransformer( (embeddings): CLIPVisionEmbeddings( (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False) (position_embedding): Embedding(257, 1024) ) (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (encoder): CLIPEncoder( (layers): ModuleList( (0-23): 24 x CLIPEncoderLayer( (self_attn): CLIPAttention( (k_proj): Linear(in_features=1024, out_features=1024, bias=True) (v_proj): Linear(in_features=1024, out_features=1024, bias=True) (q_proj): Linear(in_features=1024, out_features=1024, bias=True) (out_proj): Linear(in_features=1024, out_features=1024, bias=True) ) (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (mlp): CLIPMLP( (activation_fn): QuickGELUActivation() (fc1): Linear(in_features=1024, out_features=4096, bias=True) (fc2): Linear(in_features=4096, out_features=1024, bias=True) ) (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) ) ) (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) ) (visual_projection): Linear(in_features=1024, out_features=768, bias=False) )} are not expected by StableDiffusionXLPipeline and will be ignored. 
``` ### System Info - `diffusers` version: 0.20.0 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31 - Python version: 3.10.6 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Huggingface_hub version: 0.17.3 - Transformers version: 4.34.0 - Accelerate version: 0.23.0 - xFormers version: 0.0.22 - Using GPU in script?: yes ### Who can help? @yiyixuxu @sayakpaul @DN6 @patrickvonplaten thanks for your kindly help
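Since `StableDiffusionXLPipeline` has no `safety_checker` slot, one workaround (a sketch, not an official API) is to run the checker as a post-processing step on the generated images. The diffusers/transformers calls are left as comments (verify the current signatures before relying on them); only the pure filtering helper runs here.

```python
# SDXL pipelines have no safety_checker slot, so one option is to run the
# checker on the outputs after generation and filter flagged images.

def filter_flagged(images, nsfw_flags, placeholder=None):
    """Replace any image whose NSFW flag is True with a placeholder."""
    return [placeholder if flagged else img
            for img, flagged in zip(images, nsfw_flags)]

# Hedged wiring with diffusers/transformers installed:
# from transformers import CLIPImageProcessor
# from diffusers.pipelines.stable_diffusion.safety_checker import (
#     StableDiffusionSafetyChecker,
# )
# checker = StableDiffusionSafetyChecker.from_pretrained(
#     "CompVis/stable-diffusion-safety-checker"
# )
# feats = CLIPImageProcessor()(images, return_tensors="pt")
# _, nsfw_flags = checker(images=np_images, clip_input=feats.pixel_values)
# safe = filter_flagged(images, nsfw_flags)

print(filter_flagged(["img_a", "img_b"], [False, True]))  # ['img_a', None]
```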
https://github.com/huggingface/diffusers/issues/5372
closed
[ "bug" ]
2023-10-12T03:39:23Z
2023-10-12T08:13:28Z
null
hundredwz
huggingface/transformers.js
354
[Question] Whisper Progress
Is it possible to obtain the transcription progress of Whisper's model, ranging from 0 to 100%?
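There is no built-in percent counter, but the ASR pipeline can report each transcribed chunk via a callback (the approach the whisper-web demo takes), and each chunk carries timestamps, so progress can be estimated against the known audio duration. A sketch; the option names in the commented wiring are assumptions to verify against the current transformers.js docs:

```javascript
// Pure helper: percent complete given the last processed timestamp.
function estimateProgress(processedSeconds, totalSeconds) {
  if (totalSeconds <= 0) return 0;
  return Math.min(100, Math.round((processedSeconds / totalSeconds) * 100));
}

// Hypothetical wiring (names assumed -- verify against the docs):
// const output = await transcriber(audio, {
//   chunk_length_s: 30,
//   return_timestamps: true,
//   chunk_callback: (chunk) => {
//     const [, end] = chunk.timestamp ?? [0, 0];
//     console.log(`${estimateProgress(end, audioDurationSeconds)}%`);
//   },
// });

console.log(estimateProgress(45, 90)); // 50
```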
https://github.com/huggingface/transformers.js/issues/354
open
[ "question" ]
2023-10-11T20:41:01Z
2025-05-23T10:12:13Z
null
FelippeChemello
huggingface/text-generation-inference
1,131
How to send a request with system, user and assistant prompt?
How can I send a request prompt structured like ChatGPT's, where each message specifies which of the three categories (system, user, or assistant) it belongs to?
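TGI's `/generate` endpoint takes a single flat `inputs` string, so the three roles have to be serialized client-side using the model's chat template. A minimal sketch for a Llama-2-style template (the exact template depends on your model, so verify it against the model card):

```python
def build_llama2_prompt(system: str, turns) -> str:
    """turns: list of (user_message, assistant_reply_or_None) pairs."""
    prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(turns):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt

prompt = build_llama2_prompt("Answer briefly.", [("Hi", "Hello."), ("Bye", None)])
# then POST {"inputs": prompt, "parameters": {...}} to the /generate endpoint
```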
https://github.com/huggingface/text-generation-inference/issues/1131
closed
[ "Stale" ]
2023-10-11T09:21:14Z
2024-01-10T17:26:12Z
null
ShRajSh
huggingface/dataset-viewer
1,962
Install dependency `music_tag`?
Requested here: https://huggingface.co/datasets/zeio/baneks-speech/discussions/1
https://github.com/huggingface/dataset-viewer/issues/1962
closed
[ "question", "custom package install", "P2" ]
2023-10-11T08:07:53Z
2024-02-02T17:18:50Z
null
severo
huggingface/datasets
6,292
how to load the image of dtype float32 or float64
_FEATURES = datasets.Features( { "image": datasets.Image(), "text": datasets.Value("string"), }, ) The datasets builder seems to only support uint8 data. How can I load float-dtype data?
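For what it's worth, a possible workaround (an assumption based on the `Array` feature types; the shape below is made up) is to store the raw float array with an `Array3D` feature instead of `Image()`:

```python
_FEATURES = datasets.Features(
    {
        "image": datasets.Array3D(shape=(3, 256, 256), dtype="float32"),
        "text": datasets.Value("string"),
    }
)
```

`Image()` would then only be used for regular uint8 image files.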
https://github.com/huggingface/datasets/issues/6292
open
[]
2023-10-11T07:27:16Z
2023-10-11T13:19:11Z
null
wanglaofei
huggingface/optimum
1,442
Steps to quantize Llama 2 models for CPU inference
Team, could you please share the steps to quantize the Llama 2 models for CPU inference? When I followed ORTModelForCausalLM, I faced a 401 Forbidden error even though the token was passed. For the offline model, I hit an issue about not being able to load from a local directory. Please share the steps.
https://github.com/huggingface/optimum/issues/1442
open
[ "question", "quantization" ]
2023-10-11T05:32:58Z
2024-10-15T16:19:59Z
null
eswarthammana
huggingface/dataset-viewer
1,956
upgrade hfh to 0.18.0?
https://github.com/huggingface/huggingface_hub/releases/tag/v0.18.0
https://github.com/huggingface/dataset-viewer/issues/1956
closed
[ "question", "blocked-by-upstream", "dependencies", "P2" ]
2023-10-10T12:33:04Z
2023-11-16T11:47:04Z
null
severo
huggingface/diffusers
5,353
How to use FreeU in SimpleCrossAttnUpBlock2D?
I've tried to change your code in order to maintain SimpleCrossAttnUpBlock2D however it seems that shapes doesn't fit up. How can I do it? Thanks! ```Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 523, in run_predict output = await app.get_blocks().process_api( File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1437, in process_api result = await self.call_function( File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1109, in call_function prediction = await anyio.to_thread.run_sync( File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 807, in run result = context.run(func, *args) File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 865, in wrapper response = f(*args, **kwargs) File "/home/ubuntu/mimesis-ml-gan-backend/app.py", line 128, in generate image = pipe(image=input_image, File "/usr/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/ubuntu/mimesis-ml-gan-backend/src/diffusions/kandinsky/pipeline_kandinsky_img2img_scheduler.py", line 125, in __call__ noise_pred = self.unet( File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 1020, in forward sample = upsample_block( File "/usr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/ubuntu/mimesis-ml-gan-backend/free_lunch_utils.py", line 166, in forward hidden_states = torch.cat([hidden_states, 
res_hidden_states], dim=1) RuntimeError: Tensors must have same number of dimensions: got 3 and 4 ```
https://github.com/huggingface/diffusers/issues/5353
closed
[]
2023-10-10T09:13:22Z
2023-10-11T05:11:38Z
null
americanexplorer13
huggingface/computer-vision-course
25
Should we use safetensors?
I wondered if we should add an official recommendation to use the `safetensors` saving format wherever possible. But I have to admit that I'm not that familiar with it, so I don't know how much overhead it would be in cases where we cannot use an HF library like `transformers`.
https://github.com/huggingface/computer-vision-course/issues/25
closed
[ "question" ]
2023-10-09T19:38:39Z
2023-10-11T20:50:32Z
null
johko
huggingface/tokenizers
1,362
When decoding an English sentence with the 'add_prefix_space' parameter set to 'False,' how can I add spaces?
I trained a tokenizer with 'add_prefix_space' set to 'False'. How can I ensure that BBPE tokenizers correctly handle spaces when decoding a sequence? ``` normalizer = normalizers.Sequence([NFC(), StripAccents()]) tokenizer.normalizer = normalizer tokenizer.pre_tokenizer = pre_tokenizers.Sequence( [Whitespace(), Punctuation(), Digits(individual_digits=True), UnicodeScripts(), ByteLevel(add_prefix_space=False, use_regex=True), ]) tokenizer.decoder = decoders.ByteLevel(add_prefix_space=False, use_regex=True) tokenizer.post_processor = tokenizers.processors.ByteLevel() ```
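One thing to check first: with `Whitespace()` running before `ByteLevel()` in the pre-tokenizer sequence, spaces may already be stripped before ByteLevel can encode them as its marker character, so the decoder has nothing to restore. ByteLevel represents a leading space inside the token itself as "Δ " (U+0120); with `add_prefix_space=False` the very first word simply carries no marker. The marker mechanics, as a pure sketch:

```python
def bytelevel_join(tokens):
    """Join byte-level tokens, mapping the 'Δ ' space marker back to spaces."""
    return "".join(tokens).replace("\u0120", " ")

tokens = ["Hello", "\u0120world", "\u0120!"]
print(bytelevel_join(tokens))  # Hello world !
```

The real `decoders.ByteLevel` does the full byte-to-character mapping; this only illustrates how the space marker carries whitespace through decoding.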
https://github.com/huggingface/tokenizers/issues/1362
closed
[]
2023-10-09T16:19:43Z
2023-10-30T14:25:24Z
null
enze5088
huggingface/dataset-viewer
1,952
filter parameter should accept any character?
https://datasets-server.huggingface.co/filter?dataset=polinaeterna/delays_nans&config=default&split=train&where=string_col=ΠΉΠΎΠΏΡ‚Π°&offset=0&limit=100 gives an error ``` {"error":"Parameter 'where' is invalid"} ```
https://github.com/huggingface/dataset-viewer/issues/1952
closed
[ "bug", "question", "P1" ]
2023-10-09T13:59:20Z
2023-10-09T17:26:15Z
null
severo
huggingface/chat-ui
495
Make the description customizable in the .env
I'd like to customize the description of chat-ui as marked below, but I can't find how to do it in your tutorial, README.md. It would be highly appreciated if you could assist. ![image](https://github.com/huggingface/chat-ui/assets/142883089/046d3926-ddef-4da8-87a7-8771db218976)
https://github.com/huggingface/chat-ui/issues/495
closed
[ "enhancement", "good first issue", "front", "hacktoberfest" ]
2023-10-09T13:57:32Z
2023-10-13T13:49:47Z
7
sjbpsh
huggingface/datasets
6,287
map() not recognizing "text"
### Describe the bug The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads: ` ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)` I have been trying to reproduce it in my code as: `tokenizedDataset = dataset.map(lambda x: tokenizer(x['text']), batched=True)` But it doesn't work as it throws the error: > KeyError: 'text' Can you please guide me on how to fix it? ### Steps to reproduce the bug 1. `from datasets import load_dataset dataset = load_dataset("amazon_reviews_multi")` 2. Then this code: `from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")` 3. The line I quoted above (which I have been trying) ### Expected behavior As mentioned in the documentation, it should run without any error and map the tokenization on the whole dataset. ### Environment info Python 3.10.2
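The `KeyError` usually just means the loaded split has no column named "text"; printing `dataset.column_names` shows what is really there (for amazon_reviews_multi the review text lives under other names, such as `review_body`, which is an assumption to verify). A pure sketch of the batched mapping with a stand-in tokenizer:

```python
def fake_tokenizer(texts):
    # stands in for tokenizer(batch[column]) from transformers
    return {"input_ids": [[len(t)] for t in texts]}

def tokenize_batch(batch, text_column):
    # batched map() hands over a dict of column -> list of values
    return fake_tokenizer(batch[text_column])

batch = {"review_body": ["great phone", "bad cable"]}
out = tokenize_batch(batch, "review_body")
# with datasets: ds.map(lambda x: tokenizer(x["review_body"]), batched=True)
```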
https://github.com/huggingface/datasets/issues/6287
closed
[]
2023-10-09T10:27:30Z
2023-10-11T20:28:45Z
1
EngineerKhan
huggingface/diffusers
5,337
What is the function of `callback` in stable diffusion?
I am reading the source code for stable diffusion pipeline. I wonder what is the function of `callback`? How to use it? Is there an example? https://github.com/huggingface/diffusers/blob/29f15673ed5c14e4843d7c837890910207f72129/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L585C13-L585C21
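For reference: the `callback` is invoked every `callback_steps` denoising steps with `(step_index, timestep, latents)`, which is useful for progress reporting or inspecting latents mid-sampling. A sketch; the pipeline call itself is commented out and only the callback is exercised, with ten simulated scheduler steps:

```python
progress = []

def on_step(step: int, timestep: int, latents) -> None:
    # in real use, `latents` is a torch tensor of shape (batch, 4, H/8, W/8)
    progress.append(step)

# image = pipe(prompt, num_inference_steps=50,
#              callback=on_step, callback_steps=10).images[0]

# simulate ten scheduler steps to show the call pattern:
for i, t in enumerate(range(1000, 0, -100)):
    on_step(i, t, None)
print(progress)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```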
https://github.com/huggingface/diffusers/issues/5337
closed
[ "stale" ]
2023-10-09T06:02:13Z
2023-11-16T15:05:20Z
null
g-jing
huggingface/open-muse
122
How to finetune the muse-512?
Thank you for your contributions to the open-source community. After testing your weights, we found that the fine-tuned muse-512 has made significant improvements in image quality. We are very interested in this and would like to know how you performed the fine-tuning on the model. For example, what dataset did you use for fine-tuning? Is it open-source? What are its characteristics? Once again, we appreciate your contributions to the open-source community.
https://github.com/huggingface/open-muse/issues/122
open
[]
2023-10-09T05:00:54Z
2023-10-09T05:00:54Z
null
jiaxiangc
huggingface/diffusers
5,335
How to deploy locally, as the Chinese government has blocked Hugging Face?
### Describe the bug I have all the model ckpt/safetensors files locally, but it still tries to connect to /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-infer ### Reproduction pipe = diffusers.StableDiffusionPipeline.from_single_file(base_model, torch_dtype=torch.float16, use_safetensors=True, safety_checker=None,) ### Logs _No response_ ### System Info Platform: Win10 Python version: 3.10.11 PyTorch version (GPU?): 2.0.1+cu118 diffusers version: 0.16.1 Transformers version: 4.26.0 Accelerate version: 0.15.0 xFormers version: not installed Using GPU in script?: 3070 Using distributed or parallel set-up in script?: No ### Who can help? @yiyixuxu @DN6 @patrickvonplaten @sayakpaul
https://github.com/huggingface/diffusers/issues/5335
closed
[ "bug", "stale" ]
2023-10-09T01:55:44Z
2024-01-17T10:44:31Z
null
Louis24
huggingface/chat-ui
485
chat-ui and TGI Connect Timeout Error
Hi, I used TGI as a backend for llama2, when I put TGI endpoints in chat-ui, TGI and chat-ui is in same mechine but it cannot connect. would you give me some suggestions? thank you! TGI work well. ```shell curl http://127.0.0.1:8081/generate_stream \ -X POST \ -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \ -H 'Content-Type: application/json' data:{"token":{"id":13,"text":"\n","logprob":-0.45239258,"special":false},"generated_text":null,"details":null} data:{"token":{"id":13,"text":"\n","logprob":-0.5541992,"special":false},"generated_text":null,"details":null} data:{"token":{"id":2772,"text":"De","logprob":-0.016738892,"special":false},"generated_text":null,"details":null} data:{"token":{"id":1022,"text":"ep","logprob":-0.000002503395,"special":false},"generated_text":null,"details":null} data:{"token":{"id":6509,"text":" learning","logprob":-0.026168823,"special":false},"generated_text":null,"details":null} data:{"token":{"id":30081,"text":" ","logprob":-0.08898926,"special":false},"generated_text":null,"details":null} data:{"token":{"id":29898,"text":"(","logprob":-0.0023441315,"special":false},"generated_text":null,"details":null} data:{"token":{"id":15189,"text":"also","logprob":-0.0006175041,"special":false},"generated_text":null,"details":null} data:{"token":{"id":2998,"text":" known","logprob":-0.000029087067,"special":false},"generated_text":null,"details":null} data:{"token":{"id":408,"text":" as","logprob":-7.1525574e-7,"special":false},"generated_text":null,"details":null} data:{"token":{"id":30081,"text":" ","logprob":-0.0052261353,"special":false},"generated_text":null,"details":null} data:{"token":{"id":24535,"text":"deep","logprob":-0.0019664764,"special":false},"generated_text":null,"details":null} data:{"token":{"id":2281,"text":" struct","logprob":-0.0007429123,"special":false},"generated_text":null,"details":null} 
data:{"token":{"id":2955,"text":"ured","logprob":-0.000027537346,"special":false},"generated_text":null,"details":null} data:{"token":{"id":6509,"text":" learning","logprob":-0.000081300735,"special":false},"generated_text":null,"details":null} data:{"token":{"id":29897,"text":")","logprob":-0.00006067753,"special":false},"generated_text":null,"details":null} data:{"token":{"id":338,"text":" is","logprob":-0.00009846687,"special":false},"generated_text":null,"details":null} data:{"token":{"id":760,"text":" part","logprob":-0.000022292137,"special":false},"generated_text":null,"details":null} data:{"token":{"id":310,"text":" of","logprob":-3.5762787e-7,"special":false},"generated_text":null,"details":null} data:{"token":{"id":263,"text":" a","logprob":-0.00013446808,"special":false},"generated_text":"\n\nDeep learning (also known as deep structured learning) is part of a","details":null} ``` chat-ui **.env.local** MODELS config: ```shell MODELS=`[ { "name": "Trelis/Llama-2-7b-chat-hf-function-calling", "datasetName": "Trelis/function_calling_extended", "description": "function calling Llama-7B-chat", "websiteUrl": "https://research.Trelis.com", "userMessageToken": "", "userMessageEndToken": " [/INST] ", "assistantMessageToken": "", "assistantMessageEndToken": " </s><s>[INST] ", "chatPromptTemplate" : "<s>[INST] <<SYS>>\nRespond in French to all questions\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.01, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 1000, "max_new_tokens": 1024 }, "endpoints": [{ "url": "http://127.0.0.1:8081/generate_stream" }] } ]` ``` error message: ```shell [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts: |- TypeError: fetch failed at fetch (/root/chat-ui/node_modules/undici/index.js:109:13) at processTicksAndRejections 
(node:internal/process/task_queues:95:5) at runNextTicks (node:internal/process/task_queues:64:3) at process.processImmediate (node:internal/timers:447:9) at async getModelFile (file:///root/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24) at async getModelJSON (file:///root/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18) at async Promise.all (index 0) at async loadTokenizer (file:///root/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16) at async AutoTokenizer.from_pretrained (file:///root/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48) at async Promise.all (index 0) 2:32:29 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.
https://github.com/huggingface/chat-ui/issues/485
closed
[ "support" ]
2023-10-08T06:36:26Z
2025-01-16T23:13:34Z
8
ViokingTung
huggingface/transformers
26,665
How to resume training from a checkpoint when training LoRA using deepspeed?
### System Info - `transformers` version: 4.34.0.dev0 - Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - use_cpu: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'deepspeed_config_file': 'none', 'zero3_init_flag': False} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - dynamo_config: {'dynamo_backend': 'INDUCTOR', 'dynamo_mode': 'default', 'dynamo_use_dynamic': False, 'dynamo_use_fullgraph': False} - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @pacman100 @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When using deepspeed to train LoRA, I want to use the resume function of the trainer. 
The sample code is as follows: ```python causal_model = AutoModelForCausalLM.from_pretrained(model_pretrained_path_, config=config, trust_remote_code=True, low_cpu_mem_usage=self.params["low_cpu_mem_usage"]) peft = PEFT(config_path_or_data=peft_params) causal_model = peft.get_peft_model(model=causal_model) trainer = Seq2SeqTrainer( params=trainer_params, model=causal_model, tokenizer=tokenizer, train_dataset=train_dataset, data_collator=data_collator, eval_dataset=eval_dataset, compute_metrics=dataset_t.metric, ) trainer.train(resume_from_checkpoint=True) ``` deepspeed config as follows: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "zero_optimization": { "stage": 2, "cpu_offload": false, "allgather_partitions": true, "allgather_bucket_size": 5e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 5e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 50, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` ### Expected behavior RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)
https://github.com/huggingface/transformers/issues/26665
closed
[]
2023-10-08T03:51:00Z
2024-01-06T08:06:06Z
null
Sakurakdx
huggingface/chat-ui
484
Rich text input for the chat bar?
Taking a nifty feature from the Claude API here: models on HuggingChat, and most models used with Chat UI, can process and fluently speak markdown. It's pretty easy to use something like Remarkable to turn markdown input into rich text such as titles, bold, and lists. It's helpful for users to organize content, highlight things, or put items in lists. Hoping for a feature like this.
https://github.com/huggingface/chat-ui/issues/484
open
[ "enhancement", "front" ]
2023-10-07T19:25:45Z
2023-10-09T00:20:09Z
2
VatsaDev
huggingface/chat-ui
480
Porting through nginx on aws
I have this up and running on AWS, but it only works on localhost on my machine. How can I use Nginx to expose it at a public address?
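A minimal reverse-proxy sketch, assuming chat-ui's Node server listens on port 3000 (the port and server name are assumptions; adjust to your setup). `proxy_buffering off` matters because responses are streamed token by token:

```nginx
server {
    listen 80;
    server_name chat.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_buffering off;  # keep token streaming responsive
    }
}
```

With this in place, also make sure the AWS security group opens port 80 (and 443 once TLS is added).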
https://github.com/huggingface/chat-ui/issues/480
open
[ "support" ]
2023-10-06T10:39:52Z
2023-10-08T21:13:10Z
0
Mr-Nobody1
huggingface/sentence-transformers
2,330
How to make prediction in NLI
I can't make predictions for the NLI task after running the training_NLI example file. Can you help me?
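One likely reason (an assumption about the setup): the softmax classification head used by SoftmaxLoss in the training_NLI example is not part of the saved SentenceTransformer, so the saved model cannot classify on its own; for actual NLI predictions a cross-encoder NLI model is typically used, taking a softmax and argmax over its three logits. A pure sketch of that last step (the label order is an assumption; check the model card):

```python
import math

LABELS = ["contradiction", "entailment", "neutral"]  # order assumed

def predict_label(logits):
    """Softmax over the three NLI logits, then argmax into LABELS."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return LABELS[probs.index(max(probs))], probs

# with sentence-transformers, the logits would come from a cross-encoder:
# scores = CrossEncoder("cross-encoder/nli-deberta-v3-base").predict(
#     [("A man is eating.", "Someone is eating food.")]
# )
label, _ = predict_label([0.1, 3.2, -1.0])
print(label)  # entailment
```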
https://github.com/huggingface/sentence-transformers/issues/2330
closed
[]
2023-10-06T08:52:59Z
2024-01-31T16:18:18Z
null
trthminh
huggingface/candle
1,036
How to fine-tune large models?
Hello all, How should I finetune a large model? Are there implementations like `peft` in Python for Candle? Specifically, how should I train a quantized, LoRA model? I saw [candle-lora](https://github.com/EricLBuehler/candle-lora), and plan to use that but do not know how to quantize a large model.
https://github.com/huggingface/candle/issues/1036
closed
[]
2023-10-05T16:43:17Z
2024-12-03T15:55:53Z
null
nullptr2nullptr
huggingface/trl
837
What is the loss mask for special tokens in SFTTrainer?
### System Info latest transformers ### Who can help? @muellerzr and @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm training with SFTTrainer and want to ensure that the model is including the loss on predicting an EOS token (< /s >). What is the default handling of special tokens for the loss computation in SFTTrainer? Can I change this? ``` from transformers import Trainer from trl import SFTTrainer trainer = SFTTrainer( peft_config=config, dataset_text_field="text", max_seq_length=context_length, tokenizer=tokenizer, model=model, train_dataset=data["train"], eval_dataset=data["test"], args=transformers.TrainingArguments( max_steps=60, # comment this out after the first time you run. This is for testing! num_train_epochs=epochs, output_dir=save_dir, evaluation_strategy="steps", do_eval=True, per_device_train_batch_size=batch_size, gradient_accumulation_steps=4, per_device_eval_batch_size=batch_size, log_level="debug", optim="paged_adamw_8bit", save_steps=0.2, logging_steps=1, learning_rate=1e-4, eval_steps=0.2, fp16=True, max_grad_norm=0.3, warmup_ratio=0.03, lr_scheduler_type="linear", ), callbacks=[logging_callback], # Add custom callback here ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! trainer.train() ``` Note that in my dataset I have included EOS tokens where appropriate ### Expected behavior The output of my fine-tuning is not emitting EOS tokens, which leads me to believe that the loss mask is zero for special tokens with SFTTrainer, but I'm unsure if that's true.
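A frequent cause of the observed behavior (an assumption about this setup): when `tokenizer.pad_token` is set to the EOS token, the language-modeling collator masks every pad id to -100 in the labels, which also masks the genuine EOS, so the model never learns to emit it. The masking logic, as a pure sketch:

```python
IGNORE_INDEX = -100  # positions with this label are excluded from the loss

def mask_labels(input_ids, pad_token_id):
    return [IGNORE_INDEX if tok == pad_token_id else tok for tok in input_ids]

eos_id = 2  # doubling as pad_token_id in this failure mode
labels = mask_labels([5, 6, 7, eos_id, eos_id, eos_id], pad_token_id=eos_id)
print(labels)  # [5, 6, 7, -100, -100, -100] -- the real EOS got masked too
```

If that matches your setup, using a distinct pad token (or unmasking the final EOS position) usually restores EOS emission.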
https://github.com/huggingface/trl/issues/837
closed
[]
2023-10-05T13:49:52Z
2023-11-13T18:23:54Z
null
RonanKMcGovern
huggingface/chat-ui
476
Chat-ui failing on Edge, Chrome and Safari.
It seems to be working on Firefox for mac and Safari for iOS. Stacktrace in console from Chrome: ``` Failed to load resource: the server responded with a status of 404 () UrlDependency.4e6706f5.js:1 Failed to load resource: the server responded with a status of 404 () stores.6bc4a41f.js:1 Failed to load resource: the server responded with a status of 404 () chat.danskgpt.dk/:1 Uncaught (in promise) TypeError: Failed to fetch dynamically imported module: https://chat.danskgpt.dk/_app/immutable/entry/start.59a3223b.js _layout.svelte.e4398851.js:1 Failed to load resource: the server responded with a status of 404 () _page.svelte.e0b7a273.js:1 Failed to load resource: the server responded with a status of 404 () LoginModal.fe5c7c4d.js:1 Failed to load resource: the server responded with a status of 404 () app.1a92c8bc.js:1 Failed to load resource: the server responded with a status of 404 () www.danskgpt.dk/chatui/favicon.png:1 Failed to load resource: the server responded with a status of 404 () _error.svelte.00b004c8.js:1 Failed to load resource: the server responded with a status of 404 () www.danskgpt.dk/chatui/favicon.svg:1 Failed to load resource: the server responded with a status of 404 () ``` It's hosted at [here](https://chat.danskgpt.dk).
https://github.com/huggingface/chat-ui/issues/476
closed
[ "support" ]
2023-10-05T13:03:01Z
2023-10-05T13:56:49Z
4
mhenrichsen
huggingface/dataset-viewer
1,929
Add a "feature" or "column" level for better granularity
For example, if we support statistics for a new type of columns, or if we change the way we compute some stats, I think that we don't want to recompute the stats for all the columns, just for one of them. It's a guess, because maybe it's more efficient to have one job that downloads the data and computes every possible stats, than having N jobs that download the same data and compute only one stat. To be evaluated
https://github.com/huggingface/dataset-viewer/issues/1929
closed
[ "question", "refactoring / architecture", "P2" ]
2023-10-05T08:24:50Z
2024-02-22T21:24:09Z
null
severo
huggingface/huggingface.js
251
How to get SpaceRuntime information?
Inside the hub library, I can see that there's `SpaceRuntime`, which specifies the hardware requirements. `SpaceRuntime` is defined inside `ApiSpaceInfo`, but it seems that it's not being returned. ``` const items: ApiSpaceInfo[] = await res.json(); for (const item of items) { yield { id: item._id, name: item.id, sdk: item.sdk, likes: item.likes, private: item.private, updatedAt: new Date(item.lastModified), }; } ``` So, is there any way I can grab that information?
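Until the wrapper surfaces it, one workaround is to query the hub HTTP API for a single space directly; the `runtime` field name on the response is an assumption to verify against the API. The fetch is left as a comment; only the pure extraction helper runs here:

```javascript
// Pull the hardware block out of a raw /api/spaces response, if present.
function extractHardware(apiSpace) {
  const hw = apiSpace?.runtime?.hardware;
  return hw ?? null;
}

// Hypothetical fetch (endpoint and field shape assumed):
// const res = await fetch(`https://huggingface.co/api/spaces/${name}`);
// const info = await res.json();
// console.log(extractHardware(info));

const mock = { runtime: { hardware: { current: "t4-small", requested: "t4-small" } } };
console.log(extractHardware(mock).current); // t4-small
```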
https://github.com/huggingface/huggingface.js/issues/251
closed
[]
2023-10-04T18:23:42Z
2023-10-05T08:26:07Z
null
namchuai
huggingface/chat-ui
471
Custom chatbot which includes sources such as pdf,databases and a specific website only.
I have a chatbot in Python which can query PDFs, a database, and a particular website. How do I include the (possibly quantized) models, RAG sources, and the retrieval logic in this chat UI?
https://github.com/huggingface/chat-ui/issues/471
closed
[]
2023-10-04T04:36:23Z
2024-07-08T16:22:02Z
2
pranavbhat12
huggingface/huggingface.js
250
How to apply pagination for listModels?
Thanks for the library! Could you please help me with how to apply pagination for the `listModels` API from @huggingface/hub? I don't know how to specify the offset.
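`listModels` returns an async generator rather than an offset-based API, so there is no offset parameter; pagination means consuming the stream in slices. A sketch with a pure paging helper and a mock generator (note that skip-based paging re-walks the stream from the start on each call):

```javascript
// Take one "page" out of any async iterable.
async function takePage(asyncIterable, pageSize, skip = 0) {
  const page = [];
  let index = 0;
  for await (const item of asyncIterable) {
    if (index++ < skip) continue;
    page.push(item);
    if (page.length >= pageSize) break;
  }
  return page;
}

// Hypothetical usage (API shape assumed):
// import { listModels } from "@huggingface/hub";
// const page2 = await takePage(listModels({ search: { query: "bert" } }), 20, 20);

// Demonstration with a mock async generator:
async function* numbers(n) { for (let i = 0; i < n; i++) yield i; }
takePage(numbers(100), 5, 10).then((p) => console.log(p)); // logs [ 10, 11, 12, 13, 14 ]
```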
https://github.com/huggingface/huggingface.js/issues/250
closed
[]
2023-10-03T12:39:17Z
2023-10-04T01:27:01Z
null
namchuai
huggingface/transformers.js
341
[Question] Custom stopping criteria for text generation models
Is it possible to pass a custom `stopping_criteria` to `generate()` method? Is there a way to interrupt generation mid-flight?
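As far as I can tell there is no public `stopping_criteria` hook in transformers.js at this point; a common workaround is to stream partial output via `callback_function` and abort (for example by throwing) once a stop string appears. A sketch; the callback shape in the comments is an assumption to verify, and only the pure stop check runs here:

```javascript
// Pure check: has the decoded text hit any stop sequence?
function hitsStopSequence(text, stopSequences) {
  return stopSequences.some((s) => text.includes(s));
}

// Hypothetical wiring (option and field names assumed):
// const out = await generator(prompt, {
//   max_new_tokens: 256,
//   callback_function: (beams) => {
//     const text = tokenizer.decode(beams[0].output_token_ids);
//     if (hitsStopSequence(text, ["\nUser:"])) throw new Error("STOP");
//   },
// });

console.log(hitsStopSequence("hello\nUser: hi", ["\nUser:"])); // true
```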
https://github.com/huggingface/transformers.js/issues/341
closed
[ "question" ]
2023-10-02T10:35:33Z
2025-10-11T10:12:10Z
null
krassowski
huggingface/datasets
6,273
Broken link to the PubMed Abstracts dataset
### Describe the bug The link provided for the dataset is broken: data_files = [https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url) ### Steps to reproduce the bug 1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url) 2) In the section "What is the Pile?", you can see a code snippet that contains the broken link. ### Expected behavior The link should redirect to the "PubMed Abstracts dataset" as expected. ### Environment info .
https://github.com/huggingface/datasets/issues/6273
open
[]
2023-10-01T19:08:48Z
2024-04-28T02:30:42Z
5
sameemqureshi
huggingface/chat-ui
466
Deploy with Langchain Agent
I have built a Langchain agent which interacts with a Vicuna model hosted with TGI, and the web UI is currently hosted with Gradio on Spaces. I'd like the UI to be more polished (like HuggingChat/ChatGPT) with persistence. I couldn't find any docs on how to use a Langchain agent with chat-ui. If anyone could shed some light on this or point me towards the relevant resources, I'd appreciate it. Thank you for your help.
https://github.com/huggingface/chat-ui/issues/466
closed
[]
2023-09-30T21:29:38Z
2023-10-03T09:14:48Z
1
Tejaswgupta
huggingface/accelerate
2,018
A demo of how to perform multi-GPU parallel inference for transformer LLM is needed
In the current demo, "[Distributed inference using Accelerate](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference)", it is still not clear how to perform multi-GPU parallel inference for a transformer LLM. This gap in the demo has hindered not just me, but also many people in adopting your solution: https://www.reddit.com/r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/ Also, as the replies there show, other frameworks have already started competing for this specific use case. Could you provide a demo for this use case?
https://github.com/huggingface/accelerate/issues/2018
closed
[]
2023-09-30T14:10:30Z
2025-02-10T00:27:24Z
null
KexinFeng
huggingface/candle
1,006
Question: How to use quantized tensors?
Hello everybody, I was looking through Candle's quantized tensor code when I noticed that only a `matmul_t` is implemented for `QuantizedType`, and no other operations. Perhaps other operations could be added? In addition, is there an example of using quantized tensors / converting them from normal tensors? Thanks!
https://github.com/huggingface/candle/issues/1006
closed
[]
2023-09-30T13:35:16Z
2024-08-17T15:20:58Z
null
EricLBuehler
huggingface/transformers.js
340
question
Hi @xenova, is there still any position for a JS/TS backend developer? Next week (06 Oct) I will be free, after finishing the SenLife project I am working on for a UK client. This is the app I built the backend for: https://play.google.com/store/apps/details?id=com.senlife.app&hl=en&gl=US
https://github.com/huggingface/transformers.js/issues/340
closed
[ "question" ]
2023-09-30T11:35:23Z
2023-10-02T10:01:20Z
null
jedLahrim
huggingface/chat-ui
465
Where to deploy other than HF?
Hey, I've been trying to deploy the chat-ui somewhere I can use a custom domain (such as Vercel and Azure). Each of them comes with different problems that I have yet to solve. The Vercel issues are described [here](https://github.com/huggingface/chat-ui/issues/212). It does not seem like I can deploy this as an Azure SWA, as it fails when using the azure-swa-adapter for SvelteKit with the following error. ``` Using adapter-azure-swa ✘ [ERROR] Top-level await is currently not supported with the "cjs" output format .svelte-kit/output/server/chunks/models.js:94:15: 94 β”‚ const models = await Promise.all( β•΅ ~~~~~ ✘ [ERROR] Top-level await is currently not supported with the "cjs" output format .svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:199:18: 199 β”‚ const extractor = await pipeline("feature-extraction", modelId); β•΅ ~~~~~ β–² [WARNING] "./xhr-sync-worker.js" should be marked as external for use with "require.resolve" [require-resolve-not-external] node_modules/jsdom/lib/jsdom/living/xhr/XMLHttpRequest-impl.js:31:57: 31 β”‚ ... require.resolve ? require.resolve("./xhr-sync-worker.js") : null; β•΅ ~~~~~~~~~~~~~~~~~~~~~~ error during build: Error: Build failed with 2 errors: .svelte-kit/output/server/chunks/models.js:94:15: ERROR: Top-level await is currently not supported with the "cjs" output format .svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:199:18: ERROR: Top-level await is currently not supported with the "cjs" output format at failureErrorWithLog (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1575:15) at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1033:28 at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:978:67 at buildResponseToResult (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1031:7) at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1143:14 at responseCallbacks.<computed> (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:680:9) at handleIncomingPacket (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:735:9) at Socket.readFromStdout (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:656:7) at Socket.emit (node:events:514:28) at addChunk (node:internal/streams/readable:324:12) ---End of Oryx build logs--- Oryx has failed to build the solution. ``` Any suggestions on how I can otherwise deploy this?
https://github.com/huggingface/chat-ui/issues/465
closed
[]
2023-09-29T13:58:42Z
2023-12-07T19:10:00Z
2
mhenrichsen
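The two build errors in the record above come from esbuild refusing top-level `await` when emitting CJS output. A generic workaround, sketched below, is to wrap the awaited initialization in an async IIFE and expose a promise instead of awaiting at module top level. Here `loadModels` is a stand-in for the real logic in `chunks/models.js`, not chat-ui's actual code.

```javascript
// Stand-in for the real model-loading logic that used top-level await.
async function loadModels() {
  return ['model-a', 'model-b'];
}

// Instead of: const models = await loadModels();  // breaks under CJS output
// expose a promise that downstream code awaits when it needs the models.
const modelsPromise = (async () => await loadModels())();

modelsPromise.then((models) => console.log(models.length)); // → 2
```

The trade-off is that every consumer must `await modelsPromise` rather than using a plain `models` binding; alternatively, switching the build to ESM output (where top-level `await` is allowed) avoids the rewrite entirely, if the adapter supports it.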
huggingface/dataset-viewer
1,892
Use swap to avoid OOM?
The pods don't have swap. Is it possible to have swap to avoid OOM, even at the expense of longer processing time in workers?
https://github.com/huggingface/dataset-viewer/issues/1892
closed
[ "question", "infra", "P2" ]
2023-09-29T13:48:54Z
2024-06-19T14:23:36Z
null
severo
huggingface/transformers.js
337
[Question] How do I specify a non-huggingface URL (that doesn't start with `/models/`) in `AutoTokenizer.from_pretrained`?
My tokenizer files are hosted within this folder: ``` https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/ ``` First I load the lib: ```js let { AutoTokenizer } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.1'); ``` Then I tried what I thought would be the most obvious/intuitive API: ```js await AutoTokenizer.from_pretrained("/public/models/TheBloke/Llama-2-13B-GPTQ") // requests: https://example.com/models/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json ``` This is strongly counter-intuitive to me. If I add a `/` at the start of the URL, it shouldn't add anything before that. A path that starts with `/` on the web always means "append this to the origin". So I read the docs, and it seems to suggest that you need to put at `.` on the end: ```js await AutoTokenizer.from_pretrained("/public/models/TheBloke/Llama-2-13B-GPTQ/.") // requests: https://example.com/models/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json ``` Nope. So the next obvious step was to just give it an absolute URL and be done with it: ```js await AutoTokenizer.from_pretrained("https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ") // requests: 'https://huggingface.co/https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/resolve/main/tokenizer_config.json ``` Oof. So I'm a bit confused here πŸ˜΅β€πŸ’« Going to keep trying, but I've spent 20 minutes on this so far, so posting here so you can improve the DX around this, even if I do manage to solve it myself soon.
https://github.com/huggingface/transformers.js/issues/337
closed
[ "question" ]
2023-09-28T21:00:41Z
2023-09-28T22:03:05Z
null
josephrocca
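For context on the URL question above: transformers.js composes remote URLs from host and path-template settings (in v2 these appear to be `env.remoteHost` and `env.remotePathTemplate`; verify against your version). The resolver below is a hypothetical re-implementation of that composition for illustration, not the library's actual code, showing how a custom host plus template can yield the desired URL.

```javascript
// Assumed v2-style settings; the names are taken from @xenova/transformers'
// env object but should be checked against the installed version.
const env = {
  remoteHost: 'https://example.com/public/models/',
  remotePathTemplate: '{model}/',
};

// Hypothetical resolver mirroring how host + template + file compose.
function resolveModelFile(model, file) {
  const path = env.remotePathTemplate.replace('{model}', model);
  return new URL(path + file, env.remoteHost).href;
}

console.log(resolveModelFile('TheBloke/Llama-2-13B-GPTQ', 'tokenizer.json'));
// → https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json
```

With the real library, setting these `env` fields (or `env.localModelPath` for same-origin paths) before calling `from_pretrained` is the supported way to point at a non-Hugging-Face host, rather than encoding the host into the model id.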
huggingface/transformers.js
334
[Question] failed to call OrtRun(). error code = 1. When I try to load Xenova/pygmalion-350m
I'm getting the error `failed to call OrtRun(). error code = 1.` when I try to load Xenova/pygmalion-350m. The full error is as follows: ``` wasm-core-impl.ts:392 Uncaught Error: failed to call OrtRun(). error code = 1. at e.run (wasm-core-impl.ts:392:19) at e.run (proxy-wrapper.ts:215:17) at e.OnnxruntimeWebAssemblySessionHandler.run (session-handler.ts:100:15) at InferenceSession.run (inference-session-impl.ts:108:40) at sessionRun (models.js:191:36) at async Function.decoderForward [as _forward] (models.js:478:26) at async Function.forward (models.js:743:16) at async Function.decoderRunBeam [as _runBeam] (models.js:564:18) at async Function.runBeam (models.js:1284:16) at async Function.generate (models.js:1009:30) ``` And my code for running it is this: ``` let text = 'Once upon a time, there was'; let generator = await pipeline('text-generation', 'Xenova/pygmalion-350m'); let output = await generator(text, { temperature: 2, max_new_tokens: 10, repetition_penalty: 1.5, no_repeat_ngram_size: 2, num_beams: 2, num_return_sequences: 2, }); console.log(output); ``` I see that `OrtRun` is something returned by the ONNX Runtime on a failure, but have you had success in running the Pygmalion-350m model?
https://github.com/huggingface/transformers.js/issues/334
open
[ "question" ]
2023-09-28T01:34:36Z
2023-12-16T17:14:12Z
null
sebinthomas