Dataset columns:
- repo: string (147 classes)
- number: int64 (1 to 172k)
- title: string (lengths 2 to 476)
- body: string (lengths 0 to 5k)
- url: string (lengths 39 to 70)
- state: string (2 values)
- labels: list (lengths 0 to 9)
- created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
- updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
- comments: int64 (0 to 58)
- user: string (lengths 2 to 28)
huggingface/trl
1,510
[question] How to apply model parallelism to solve CUDA memory error
Hi team. I am using the SFT and PPO code to train my model (link: https://github.com/huggingface/trl/tree/main/examples/scripts). Due to the long context length and 7B-level model size, I am facing a CUDA out-of-memory issue on my single GPU. Is there any straightforward way to utilize multiple GPUs on my server to train the model through the SFT and PPO scripts, such as splitting the model across multiple GPUs (model parallelism)? Are there argument parameters I can pass directly into my training script? Thanks a lot. ``` export CUDA_VISIBLE_DEVICES='7' python examples/scripts/sft_travel.py \ --model_name_or_path="mistralai/Mistral-7B-Instruct-v0.2" \ --report_to="wandb" \ --learning_rate=5e-5 \ --per_device_train_batch_size=4 \ --gradient_accumulation_steps=16 \ --logging_steps=1 \ --num_train_epochs=120 \ --lr_scheduler_type "constant" \ --max_steps=-1 \ --gradient_checkpointing \ --max_seq_length 16000 \ --output_dir "8bit" \ --overwrite_output_dir True \ --logging_strategy "epoch" \ --evaluation_strategy "no" ```
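Not part of the original issue, but a rough back-of-envelope estimate (assumed byte counts, not measured values) makes it clear why full fine-tuning of a 7B model runs out of memory on a single GPU before activations are even counted:

```python
# Rough memory estimate for full fine-tuning a 7B-parameter model in mixed
# precision. These per-parameter byte counts are common approximations:
#   - bf16 weights:        2 bytes/param
#   - bf16 gradients:      2 bytes/param
#   - AdamW fp32 states:  12 bytes/param (fp32 master weights + two moments)
# Activations come on top and grow with sequence length and batch size.

PARAMS = 7e9
BYTES_PER_PARAM = 2 + 2 + 12  # weights + grads + optimizer states

total_gb = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"~{total_gb:.0f} GB before activations")  # → ~104 GB
```

At ~104 GB before activations, sharding states and weights across GPUs (e.g. DeepSpeed ZeRO or FSDP via accelerate) or parameter-efficient fine-tuning is needed, which is exactly what the question is after.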
https://github.com/huggingface/trl/issues/1510
closed
[]
2024-04-06T02:09:36Z
2024-05-06T17:02:35Z
null
yanan1116
huggingface/dataset-viewer
2,667
Rename datasets-server to dataset-viewer in infra internals?
Follow-up to #2650. Is it necessary? Not urgent in any case. Some elements to review:

- [ ] https://github.com/huggingface/infra
- [ ] https://github.com/huggingface/infra-deployments
- [ ] docker image tags (https://hub.docker.com/r/huggingface/datasets-server-services-search -> https://hub.docker.com/r/huggingface/dataset-viewer-services-search)
- [ ] Helm chart name
- [ ] AWS parameters
- [ ] kubernetes namespaces
- [ ] Hub app names and tokens
- [ ] https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server
- [ ] buckets: hf-datasets-server-statics-test, hf-datasets-server-statics
- [ ] MongoDB databases
- [ ] BetterUptime
- [ ] shared directories (PARQUET_METADATA_CACHE_APPNAME)
https://github.com/huggingface/dataset-viewer/issues/2667
closed
[ "question", "P2" ]
2024-04-05T16:53:34Z
2024-04-08T09:26:14Z
null
severo
huggingface/dataset-viewer
2,666
Change API URL to dataset-viewer.huggingface.co?
Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650

Should we do it?

- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875
- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911

If we change it, we would have to update:

- moon-landing
- datasets
- the docs (hub, datasets, dataset-viewer)
- other written support (blog, observable, notion...)

If so, also change the dev URL: https://datasets-server.us.dev.moon.huggingface.tech. We should also handle the redirection from the old URL to the new one.
https://github.com/huggingface/dataset-viewer/issues/2666
closed
[ "question", "P2" ]
2024-04-05T16:49:13Z
2024-04-08T09:24:43Z
null
severo
huggingface/huggingface.js
609
[Question] What is the correct way to access commit diff results via http?
Data I am interested in: ![image](https://github.com/huggingface/huggingface.js/assets/16808224/cada880a-bc46-496b-869b-02adb083b6a7) Here's the endpoint to list commits https://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main
https://github.com/huggingface/huggingface.js/issues/609
closed
[]
2024-04-05T12:00:15Z
2024-04-09T18:40:05Z
null
madgetr
huggingface/dataset-viewer
2,661
Increase the number of backfill workers?
Today, it's 8. Let's try increasing it and see if it speeds up the backfill job. The current throughput is 577 datasets/minute.
https://github.com/huggingface/dataset-viewer/issues/2661
open
[ "question", "P2", "prod" ]
2024-04-05T10:42:11Z
2024-04-05T16:42:13Z
null
severo
huggingface/transformers
30,066
How to calculate the mAP on this network?
### System Info

I want to evaluate my network with the mean Average Precision (mAP). I don't know how to get the class id of my ground-truth data. Are there any examples for calculating the mAP with this library? I use `DetrForObjectDetection` with my own dataset.

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

This is my code to save the loss in a CSV file. I also want to save the mAP in this file.

```python
def on_train_epoch_end(self, trainer, pl_module):
    train_loss = trainer.callback_metrics.get("training_loss").item()
    val_loss = trainer.callback_metrics.get("validation/loss").item()
    with open(self.file_path, 'a', newline='') as csvfile:
        writer = csv.writer(csvfile)
        if not self.header_written:
            writer.writerow(["Epoch", "Train Loss", "Validation Loss"])
            self.header_written = True
        writer.writerow([pl_module.current_epoch, train_loss, val_loss])
```

### Expected behavior

I tried to get the data with this code:

```python
gt_boxes = []
detected_boxes = []
for batch in self.val_dataloader:
    pixel_values = batch['pixel_values'].to(pl_module.device)
    pixel_mask = batch['pixel_mask'].to(pl_module.device)
    labels = batch['labels']
    # train_idx = batch['train_idx']
    outputs = pl_module(pixel_values=pixel_values, pixel_mask=pixel_mask)
    target_sizes = torch.tensor([image.shape[-2:] for image in pixel_values]).to(pixel_values.device)
    detections = image_processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
    for i in range(len(detections['scores'])):
        prob_score = detections['scores'][i].item()
        class_pred = detections['labels'][i].item()
        box = detections['boxes'][i].detach().cpu().numpy()
        detected_boxes.append([class_pred, prob_score, *box])
    for label in labels:
        gt_box = label['boxes']
        for box in gt_box:
            gt_boxes.append(box)

image_height = 2048
image_width = 2048

gt_boxes_abs = []
for box in gt_boxes:
    x_min, y_min, width, height = box
    x_max = x_min + width
    y_max = y_min + height
    x_min_abs = int(x_min * image_width)
    y_min_abs = int(y_min * image_height)
    x_max_abs = int(x_max * image_width)
    y_max_abs = int(y_max * image_height)
    class_id = ???
    difficult = ???
    crowd = ???
    gt_boxes_abs.append([x_min_abs, y_min_abs, x_max_abs, y_max_abs, class_id, difficult, crowd])

adjusted_detected_boxes = []
converted_boxes = []
for box in detected_boxes:
    class_id = box[0]
    confidence = box[1]
    x_min = box[2]
    y_min = box[3]
    x_max = box[4]
    y_max = box[5]
    converted_boxes.append([x_min, y_min, x_max, y_max, class_id, confidence])
```
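For the full metric there are dedicated implementations (e.g. `torchmetrics`' detection mAP), but the core matching logic the snippet above is missing can be sketched in plain Python. The field names and thresholds below are illustrative assumptions, not library API; the missing `class_id` would come from the label dict of the dataset (in DETR-style targets it is typically stored alongside `boxes`, e.g. under `class_labels`):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(det_box, det_class, gt_boxes_abs, iou_thr=0.5):
    """A detection counts as a true positive if it matches a same-class
    ground-truth entry [x_min, y_min, x_max, y_max, class_id, ...]
    with IoU at or above the threshold."""
    return any(
        gt[4] == det_class and iou(det_box, gt[:4]) >= iou_thr
        for gt in gt_boxes_abs
    )
```

With this matching in place, per-class precision/recall curves (and from them AP) follow by sorting detections by confidence and sweeping the threshold, which is what a metric library does internally.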
https://github.com/huggingface/transformers/issues/30066
closed
[]
2024-04-05T08:32:31Z
2024-06-08T08:04:08Z
null
Sebi2106
huggingface/optimum-quanto
152
How does quanto calibrate torch functions?
I have learned that quanto calibrates ops in module form by adding module hooks, but what about torch functions like `torch.sigmoid`, `torch.elu`, and `torch.log`, etc.? I think the output scale of `torch.sigmoid` could be directly evaluated, similarly to quanto's approach with `softmax`. Additionally, `torch.elu` might be substituted with `torch.nn.ELU`. However, I'm uncertain how functions like `torch.log`, which are unbounded and lack explicit module forms, will be calibrated within quanto.
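Not quanto's actual mechanism, just a stdlib-only illustration of the distinction the question raises (all names are hypothetical): a bounded function like `torch.sigmoid` has a statically known output range, so an int8 scale can be fixed without observing data, while an unbounded function like `torch.log` needs a range observed on calibration samples:

```python
import math

def affine_int8_scale(min_val, max_val, qmin=-128, qmax=127):
    """Affine int8 scale/zero-point derived from a value range."""
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = round(qmin - min_val / scale)
    return scale, zero_point

# sigmoid is bounded: its output range [0, 1] is known without calibration
sigmoid_scale, _ = affine_int8_scale(0.0, 1.0)

# log is unbounded: the range must be observed on calibration data
observed = [0.01, 0.5, 3.2, 41.7]  # hypothetical sampled inputs
logs = [math.log(v) for v in observed]
log_scale, _ = affine_int8_scale(min(logs), max(logs))
```

This is why bounded activations can get a fixed scale up front, whereas unbounded ones need some form of observation (hooks, tracing, or a wrapper module) before their scale is usable.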
https://github.com/huggingface/optimum-quanto/issues/152
closed
[ "question" ]
2024-04-05T06:49:51Z
2024-04-11T09:41:55Z
null
shuokay
huggingface/candle
2,007
How to run inference of a (very) large model across multiple GPUs?
It is mentioned in the README that candle supports multi-GPU inference, using NCCL under the hood. How can this be implemented? I wonder if there is any available example to look at. Also, I know PyTorch has things like DDP and FSDP; is candle's support for multi-GPU inference comparable to these techniques?
https://github.com/huggingface/candle/issues/2007
open
[]
2024-04-04T13:52:46Z
2024-08-12T04:53:54Z
null
jorgeantonio21
huggingface/candle
2,006
How to get different outputs for the same prompt?
I used Gemma, and it always returned the same outputs for the same prompt. How can I get different outputs? Is there any method or parameter for sampling? (I even doubt that `top_p` works.)
https://github.com/huggingface/candle/issues/2006
closed
[]
2024-04-04T10:43:31Z
2024-04-13T11:17:36Z
null
Hojun-Son
huggingface/chat-ui
975
Is it possible to hide the settings from users? Most users do not want to create assistants; they just want to use existing ones.
In the left-hand corner of HuggingChat, "Assistants" and "Settings" are visible. We are considering whether it is possible to hide these options from our users, as they have expressed no interest in creating assistants and prefer to use existing ones. Many thanks for your kind help. Howard
https://github.com/huggingface/chat-ui/issues/975
open
[]
2024-04-04T07:33:25Z
2024-04-04T07:33:25Z
0
hjchenntnu
huggingface/transformers.js
679
Speech Recognition/Whisper word level scores or confidence output
### Question

Hey, big thanks for the awesome project! Is it possible to add a score/confidence for word-level output when using the Speech Recognition/Whisper model? I would appreciate any direction, comments, or suggestions on where to dig to add it. Happy to submit a PR if I succeed. Thanks!
https://github.com/huggingface/transformers.js/issues/679
open
[ "question" ]
2024-04-04T07:04:00Z
2024-04-04T07:04:00Z
null
wobbble
huggingface/transformers
30,034
What is the data file format of `run_ner.py`?
### Feature request

What is the correct format for a custom dataset in run_ner.py? Would it be possible to include a few lines on this with a helpful example?

### Motivation

I am using the example script run_ner.py from [huggingface](https://github.com/huggingface)/transformers. It is not possible to use the standard CoNLL format for model fine-tuning with run_ner.

### Your contribution

We could include this in the corresponding readme.
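For reference (hedged: check the current README of the token-classification example, as defaults may differ between versions), run_ner.py accepts a `--train_file` in CSV or JSON lines format, with one record per sentence holding parallel token and tag lists. A minimal stdlib sketch converting two-column CoNLL-style text into that shape:

```python
import json

def conll_to_jsonl(conll_text):
    """Convert 'token TAG' CoNLL-style lines (blank line = sentence
    boundary) into {"tokens": [...], "ner_tags": [...]} JSON lines,
    the shape run_ner.py expects for a custom --train_file."""
    records, tokens, tags = [], [], []
    for line in conll_text.splitlines():
        line = line.strip()
        if not line:
            if tokens:
                records.append({"tokens": tokens, "ner_tags": tags})
                tokens, tags = [], []
            continue
        parts = line.split()
        tokens.append(parts[0])
        tags.append(parts[-1])
    if tokens:
        records.append({"tokens": tokens, "ner_tags": tags})
    return "\n".join(json.dumps(r) for r in records)

sample = "EU B-ORG\nrejects O\nGerman B-MISC\n\nPeter B-PER\nBlackburn I-PER\n"
print(conll_to_jsonl(sample))
```

The resulting file can then be passed as `--train_file train.json`; the column names can be overridden with `--text_column_name` and `--label_column_name` if they differ.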
https://github.com/huggingface/transformers/issues/30034
closed
[ "Good First Issue" ]
2024-04-04T06:36:30Z
2024-04-08T11:50:00Z
null
sahil3773mehta
huggingface/datasets
6,777
.Jsonl metadata not detected
### Describe the bug

Hi, I have the following directory structure:

```
|-- dataset
|   |-- images
|   |-- metadata1000.csv
|   |-- metadata1000.jsonl
|   |-- padded_images
```

Example of the metadata1000.jsonl file:

```json
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
...
```

I'm trying to use

```python
dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')
```

to load the dataset, however it is not able to load according to the fields in metadata1000.jsonl. Please assist with loading the data properly. I am also getting

```
  File "/workspace/train_trans_vae.py", line 1089, in <module>
    print(get_metadata_patterns('/dataset/'))
  File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
    raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```

when trying

```python
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```

### Steps to reproduce the bug

datasets version: 2.18.0. Make a similar jsonl file and a similar directory structure.

### Expected behavior

Creates a dataset object with the column names caption, image, gaussian_padded_image.

### Environment info

datasets version: 2.18.0
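One likely cause (worth verifying against the `datasets` documentation): the `imagefolder` builder looks for a file named exactly `metadata.jsonl` (or `metadata.csv`) whose records reference images through a `file_name` column, so a file named `metadata1000.jsonl` with an `image` key is never detected. A stdlib sketch of rewriting the records into that shape:

```python
import json

def to_imagefolder_metadata(in_lines):
    """Rewrite records so the image path lives under the 'file_name' key
    the imagefolder loader expects. The result should be written to a file
    named exactly 'metadata.jsonl', placed next to the images it describes."""
    out = []
    for line in in_lines:
        rec = json.loads(line)
        rec["file_name"] = rec.pop("image")
        out.append(json.dumps(rec))
    return out

# Hypothetical single record, shortened from the issue's example
lines = ['{"caption": "a black t-shirt", "image": "images/212734.png"}']
print(to_imagefolder_metadata(lines)[0])
```

Note that with two image columns (`image` and `gaussian_padded_image`), `imagefolder` alone may not be enough, since it associates one image file per row via `file_name`; the second path would remain a plain string column.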
https://github.com/huggingface/datasets/issues/6777
open
[]
2024-04-04T06:31:53Z
2024-04-05T21:14:48Z
5
nighting0le01
huggingface/lighteval
143
Do an intro notebook on how to use `lighteval`
https://github.com/huggingface/lighteval/issues/143
closed
[ "documentation" ]
2024-04-03T07:53:25Z
2024-12-05T10:18:42Z
null
clefourrier
huggingface/accelerate
2,614
How do I selectively apply accelerate to trainers?
I have two trainers in a script, one is SFTTrainer and one is PPOTrainer, both from trl library. Is it possible to only apply accelerate to PPOTrainer?
https://github.com/huggingface/accelerate/issues/2614
closed
[]
2024-04-03T06:39:05Z
2024-05-21T15:06:36Z
null
zyzhang1130
huggingface/sentence-transformers
2,568
How to improve sentence-transformers' performance on CPU?
On the CPU, I tried Hugging Face's optimization.onnx and sentence_transformers, and I found that on the feature_extraction task, optimization.onnx was not as good as sentence_transformers in batch-encoding performance. My question is: are sentence_transformers the current ceiling for CPU performance?
https://github.com/huggingface/sentence-transformers/issues/2568
closed
[]
2024-04-03T02:09:14Z
2024-04-23T09:17:39Z
null
chensuo2048
huggingface/datasets
6,773
Dataset on Hub re-downloads every time?
### Describe the bug

Hi, I have a dataset on the Hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the Hub and then does the other logic:

https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80

Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the Hub I have my data stored in CSVs, but several columns are lists, which is why I have the code to map splitting on `;`. I looked into dataset loading scripts, but they seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B-parameter model and large datasets, but those are cached and don't re-download).

**EDIT:** as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing `load_dataset()` retrieves from the cache, then the `map()` calls should also retrieve from the cached output. But the `map()` commands re-execute sometimes.

### Steps to reproduce the bug

1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python: `load_borderlines_hf(None)`
3. It completes successfully, downloading from the HF Hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache

### Expected behavior

Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version.

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0
https://github.com/huggingface/datasets/issues/6773
closed
[]
2024-04-02T17:23:22Z
2024-04-08T18:43:45Z
5
manestay
huggingface/transformers.js
677
How do you debug/measure a Python -> JavaScript ONNX conversion?
### Question

I have converted a couple of ONNX models to use with ONNX Runtime Web, using the Python ONNX version as the source. I've spent weeks debugging, though. What's your strategy for comparing tensor values, etc., between these ONNX models? I've console.log'd a number of values from the tensor/array to see if the values have diverged far, but it gets fatiguing. I can't simply dump a numpy array and compare.
https://github.com/huggingface/transformers.js/issues/677
open
[ "question" ]
2024-04-02T16:16:22Z
2024-04-02T16:18:03Z
null
matbeedotcom
huggingface/transformers.js
676
How to use fp16 version of the model file?
### Question example files: https://huggingface.co/Xenova/modnet/tree/main/onnx
https://github.com/huggingface/transformers.js/issues/676
closed
[ "question" ]
2024-04-02T12:10:24Z
2024-04-03T02:56:52Z
null
cyio
huggingface/chat-ui
969
Display does not automatically update after receiving message
After receiving the message, the chat page does not update and is always in the loading state. The received message can only be displayed after refreshing the page or switching sessions. ![image](https://github.com/huggingface/chat-ui/assets/34700131/19150fbd-346c-4cf4-840d-a1bda9649d09)
https://github.com/huggingface/chat-ui/issues/969
open
[ "question" ]
2024-04-02T06:14:59Z
2024-04-03T04:26:23Z
null
w4rw4r
huggingface/dataset-viewer
2,654
Tutorial about how to start/run my own local dataset server.
Hey, I'm new to the dataset server and a rookie in the web field. I want to build my own dataset server; is there any tutorial that can guide me through it? Many thanks!
https://github.com/huggingface/dataset-viewer/issues/2654
closed
[]
2024-04-02T01:30:12Z
2024-05-11T15:03:50Z
null
ANYMS-A
huggingface/accelerate
2,603
How to load a FSDP checkpoint model
I have fine-tuned the Gemma 2B model using FSDP, and these are the files available under the checkpoint:

```
optimizer_0
pytorch_model_fsdp_0
rng_state_0.pth
rng_state_1.pth
scheduler.pt
trainer_state.json
```

How can I load the above FSDP object? Kindly help me with this issue.
https://github.com/huggingface/accelerate/issues/2603
closed
[]
2024-04-01T16:53:24Z
2024-05-11T15:06:21Z
null
nlpkiddo-2001
huggingface/datasets
6,769
(Willing to PR) Datasets with custom python objects
### Feature request

Hi, thanks for the library! I would like to have a Hugging Face Dataset where one of the columns holds custom (non-serializable) Python objects. For example, a minimal code:

```python
class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    dict(a=MyClass(), b='hello'),
])
```

It gives this error:

```
ArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type
```

I guess it is because Dataset forces everything to be converted into the Arrow format. However, is there any way to make this scenario work? Thanks!

### Motivation

(see above)

### Your contribution

Yes, I am happy to PR!

Cross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy

EDIT: possibly related: https://github.com/huggingface/datasets/issues/5766
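One workaround while native support doesn't exist (a generic pattern, not a `datasets` feature): serialize the objects yourself into a bytes column, which Arrow can store, and deserialize on access. Stdlib-only sketch:

```python
import pickle

class MyClass:
    def __init__(self, value=0):
        self.value = value

# Arrow can't infer a type for MyClass, but it can store raw bytes,
# so pickle the object before building the dataset row...
row = {"a": pickle.dumps(MyClass(value=42)), "b": "hello"}

# ...and unpickle when reading it back.
restored = pickle.loads(row["a"])
```

A row like this would presumably go through `datasets.Dataset.from_list([row])` unchanged, since `bytes` maps to an Arrow binary column; the cost is that the objects stay opaque to `map`/`filter` until decoded.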
https://github.com/huggingface/datasets/issues/6769
open
[ "enhancement" ]
2024-04-01T13:18:47Z
2024-04-01T13:36:58Z
0
fzyzcjy
huggingface/optimum-quanto
146
Question about the gradient of QTensor and QBitTensor
I am confused by the gradient of the Quantizer and QBitTensor. Take QTensor as the example. The forward evaluation is:

```txt
data = base / scale            (1)
data = round(data)             (2)
data = clamp(data, qmin, qmax) (3)
```

I think the gradients should be:

```txt
grad_div = 1 / scale                        (1)
grad_round = 1                              (2)  # refer to the "straight-through estimator": https://arxiv.org/abs/1308.3432
grad_clamp = 1 if qmin < data < qmax else 0 (3)
```

According to the chain rule, the gradient of the Quantizer should be `grad_div * grad_round * grad_clamp`, which is equal to `1 / scale if qmin < base/scale < qmax else 0`.

I have read QTensor's unit test, and I find that `dequantize` is applied to the QTensor before `backward`. I am confused by `Quantizer.backward` and the `dequantize` behavior before `backward`.
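The chain rule written in the issue can be checked numerically with a tiny scalar sketch of the straight-through estimator (plain Python for illustration, not quanto's implementation):

```python
QMIN, QMAX = -128, 127

def fake_quant_forward(base, scale):
    data = base / scale                # (1)
    data = round(data)                 # (2)
    return max(QMIN, min(QMAX, data))  # (3) clamp

def fake_quant_grad(base, scale):
    """d(output)/d(base) under the straight-through estimator:
    round contributes 1, clamp zeroes the gradient out of range."""
    data = base / scale
    return (1.0 / scale) if QMIN < data < QMAX else 0.0

scale = 0.5
print(fake_quant_grad(1.0, scale))    # in range: 1/scale = 2.0
print(fake_quant_grad(100.0, scale))  # 100/0.5 = 200 > qmax: clamped, grad 0.0
```

This matches the expression in the issue: `1 / scale` inside the representable range, `0` once the clamp saturates.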
https://github.com/huggingface/optimum-quanto/issues/146
closed
[ "question" ]
2024-03-31T14:33:10Z
2024-04-24T13:51:20Z
null
shuokay
huggingface/transformers.js
673
Is dit-base supported
### Question There is a [Huggingface repo](https://huggingface.co/Xenova/dit-base) for the ONNX version of the dit-base model but I can't seem to make it work. I keep getting the following error: ![image](https://github.com/xenova/transformers.js/assets/74398804/4b0ab09e-640e-47ee-ae05-27f759830424) Is the model currently supported?
https://github.com/huggingface/transformers.js/issues/673
closed
[ "question" ]
2024-03-31T01:18:42Z
2024-03-31T01:48:24Z
null
Maxzurek
huggingface/datatrove
143
Understand the output of deduplication
Hi, I have the Arabic split from the CC and am trying to deduplicate it. I used datatrove for this with a small example. In my output folder I got two files, 0000.c4_dup and 0000.c4_sig. Could you help me understand this output? I cannot read their content, as c/00000.c4_sig is not UTF-8 encoded and they seem to be binary files. Where should I see the deduplicated text? Thanks in advance.
https://github.com/huggingface/datatrove/issues/143
closed
[ "question" ]
2024-03-30T23:16:21Z
2024-05-06T09:30:43Z
null
Manel-Hik
huggingface/candle
1,971
How to use `topk`?
I am trying to use `topk` to implement X-LoRA in Candle, and want to perform `topk` over the last dimension. Specifically, I need the `indices` return value (as returned by [`torch.topk`](https://pytorch.org/docs/stable/generated/torch.topk.html)). These indices will either be used to create a mask to zero out all the values which are _not_ in the topk, and/or used to apply scalings to the nonzero values. This may be hard to understand, so please see [this](https://github.com/EricLBuehler/xlora/blob/3637d1e00854649e8b9162f8f87233248577162c/src/xlora/xlora_insertion.py#L50-L63) snippet from our X-LoRA library. Is there a way to implement this with the current Candle functions, or is this planned to be implemented as a function?

---

After looking at the Mixtral MoE selection implementation, I cannot really understand it:

> https://github.com/huggingface/candle/blob/3144150b8d1b80b2c6b469dcab5b717598f0a458/candle-transformers/src/models/mixtral.rs#L302-L323

How does this work? Thanks!
https://github.com/huggingface/candle/issues/1971
closed
[]
2024-03-30T20:29:45Z
2024-07-23T02:02:58Z
null
EricLBuehler
huggingface/transformers.js
671
What is involved in upgrading to V3?
### Question In anticipation of being able to [generate music](https://github.com/xenova/transformers.js/issues/668) with musicGen I'm attempting to switch my project over to version 3, which I was able to build on my mac. I noticed that when using SpeechT5, the voice sounds completely garbled. I've attached a zip with two example WAV files. [audio_wav_examples.zip](https://github.com/xenova/transformers.js/files/14806203/audio_wav_examples.zip) I suspect I'm overlooking something, and need to upgrade some other things too? So my question is: could you give a broad overview of all the parts I need to upgrade? Things I've checked or tried: - Whisper Speech to Text is still working after 'dropping in' the new version. - Cleared caches (the JS caches) - Grabbing 'official' package from the [link to the JSDelivr repository](https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0) in the V3 readme, but that doesn't work, which I assume is just an auto-build glitch. - Switching WAV generation code to the one in Transformers.js V3 example. - Switching to the [example webworker](https://github.com/xenova/transformers.js/blob/v3/examples/text-to-speech-client/src/worker.js) in the V3 branch, which looks very different, but it had no effect. (The old code was basically `synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false });`). - The wav blob from the worker has the same issue as the raw Float32 array, so the issue is not in the way I was playing those arrays.
https://github.com/huggingface/transformers.js/issues/671
closed
[ "question" ]
2024-03-29T18:09:23Z
2024-03-31T13:50:27Z
null
flatsiedatsie
huggingface/datasets
6,764
load_dataset can't work with symbolic links
### Feature request

Enable the `load_dataset` function to load local datasets with symbolic links. E.g., this dataset can be loaded:

```
├── example_dataset/
│   ├── data/
│   │   ├── train/
│   │   │   ├── file0
│   │   │   ├── file1
│   │   ├── dev/
│   │   │   ├── file2
│   │   │   ├── file3
│   ├── metadata.csv
```

while this dataset can't:

```
├── example_dataset_symlink/
│   ├── data/
│   │   ├── train/
│   │   │   ├── sym0 -> file0
│   │   │   ├── sym1 -> file1
│   │   ├── dev/
│   │   │   ├── sym2 -> file2
│   │   │   ├── sym3 -> file3
│   ├── metadata.csv
```

I have created an example dataset in order to reproduce the problem:

1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples end up in the train split, instead of two examples in train and two in dev. The script won't load the correct audio files.

[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)

### Motivation

I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying all the files for each subset, I would prefer copying symbolic links to the data. This way, disk usage would not significantly increase beyond the initial dataset size. Advantages of this approach:

- It would leave a smaller footprint on the hard drive
- Creating smaller datasets would be much faster

### Your contribution

I would gladly contribute, if this is something useful to the community. It seems like a simple change of code; something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input.
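The suggested one-line fix can be demonstrated with a stdlib-only sketch (this is the reporter's proposed approach, not code from `datasets`):

```python
import os
import tempfile

def resolve_symlinks(paths):
    """Resolve each path to its real target, as the proposed
    `os.path.realpath(file_path)` fix would do before loading files."""
    return [os.path.realpath(p) for p in paths]

# Demo: a symlink resolves back to the file it points at.
with tempfile.TemporaryDirectory() as d:
    real = os.path.join(d, "file0")
    open(real, "w").close()
    link = os.path.join(d, "sym0")
    os.symlink(real, link)
    resolved = resolve_symlinks([link])
    points_to_real = resolved[0] == os.path.realpath(real)

print(points_to_real)  # True
```

Applied before split inference, this would make `sym0 -> file0` behave exactly like `file0`, so the train/dev layout above would be honoured.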
https://github.com/huggingface/datasets/issues/6764
open
[ "enhancement" ]
2024-03-29T17:49:28Z
2025-04-29T15:06:28Z
1
VladimirVincan
huggingface/transformers.js
670
Are tokenizers supposed to work in the browser?
### Question

I'd love to use some pretrained tokenizers right in my browser. On a number of occasions, I've tried to use this library to load and use a tokenizer in my browser, but it always fails with an error like this:

```
Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
    getModelJSON hub.js:584
    loadTokenizer tokenizers.js:62
    from_pretrained tokenizers.js:4398
    gv9xs tok.js:3
    gv9xs tok.js:9
    newRequire dev.42f35062.js:71
    <anonymous> dev.42f35062.js:122
    <anonymous> dev.42f35062.js:145
hub.js:584:16
    gv9xs tok.js:3
    AsyncFunctionThrow self-hosted:856
    (Async: async) gv9xs tok.js:9
    newRequire dev.42f35062.js:71
    <anonymous> dev.42f35062.js:122
    <anonymous> dev.42f35062.js:145
```

Is there anything I can do to make this work? My code is rather simple:

```javascript
import { AutoTokenizer } from '@xenova/transformers'

;(async function () {
  const tokenizer = await AutoTokenizer.from_pretrained(
    'Xenova/bert-base-uncased'
  )
  console.log(tokenizer)
  const { input_ids } = await tokenizer('I love transformers!')
  console.log(input_ids)
})()
```

I serve this code via a Parcel development server, but it's never worked for me. Any advice would be greatly appreciated!
https://github.com/huggingface/transformers.js/issues/670
closed
[ "question" ]
2024-03-29T16:10:46Z
2024-03-29T16:53:21Z
null
Vectorrent
huggingface/transformers.js
669
TinyLlama Conversion
### Question I ran the converter script on the tinyllama repo for both the TinyLlama models ([intermediate step 1431K 3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) and [chat v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)) and uploaded them to my repo ([intermediate step 1431K 3T](https://huggingface.co/dmmagdal/tinyllama-1.1B-intermediate-step-1431k-3T-onnx-js) [chat v1.0](https://huggingface.co/dmmagdal/tinyllama-1.1B-chat-v1.0-onnx-js); I also have uploads where the quantized flag was enabled). When I try to run either of my converted models with the `AutoModelForCausalLM` or `pipeline`, I get the following error: ``` Error: Could not locate file: "https://huggingface.co/dmmagdal/tinyllama-1.1B-chat-v1.0-onnx-js/resolve/main/onnx/decoder_model_merged.onnx". ``` This error seems to be correct in that I do not have that file in my repo. Was there something I did wrong in the conversion process or is the model not fully supported by transformers.js? I'm not sure how or if it relates to the TinyLlama repo you have here: https://huggingface.co/Xenova/TinyLLama-v0/tree/main
https://github.com/huggingface/transformers.js/issues/669
closed
[ "question" ]
2024-03-29T14:50:06Z
2025-10-13T04:57:32Z
null
dmmagdal
huggingface/datatrove
142
Deduplicating local data throws an error
Hi, I have data on my local machine in the form of jsonl files and I want to deduplicate it. I'm using the following example:

```python
sent_dedup_config = SentDedupConfig(
    n_sentences=3,
    split_sentences=False,  # set to False to split on \n instead
    only_dedup_in_index=True,
    min_doc_words=50,
)

FINDER_WORKERS = 10  # this will speed up/parallelize step 2

def run_example():
    pipeline_1 = [
        JsonlReader("CC_data_inputs/"),
        SentenceDedupSignature(output_folder="cc_output/sigs", config=sent_dedup_config, finder_workers=FINDER_WORKERS),
    ]
    pipeline_2 = [SentenceFindDedups(data_folder="cc_output/sigs", output_folder="cc_output/dups", config=sent_dedup_config)]
    pipeline_3 = [
        JsonlReader(data_folder="CC_data_inputs/"),
        SentenceDedupFilter(data_folder="cc_output/dups", config=sent_dedup_config),
    ]
    executor_1: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_1, workers=4, tasks=4)
    executor_2: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_2, workers=1, tasks=FINDER_WORKERS)
    executor_3: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_3, workers=4, tasks=4)
    print(executor_1.run())
    print(executor_2.run())
    print(executor_3.run())
```

I edited the first pipeline to just read the jsonl files (assuming that my data is ready directly for step 2). When I run the code, it throws this error:

```
Traceback (most recent call last):
  File "/home/ubuntu/deduplication/sentence_deduplication.py", line 4, in <module>
    from datatrove.pipeline.dedup.sentence_dedup import SentDedupConfig
ImportError: cannot import name 'SentDedupConfig' from 'datatrove.pipeline.dedup.sentence_dedup' (/home/ubuntu/miniconda3/lib/python3.11/site-packages/datatrove/pipeline/dedup/sentence_dedup.py)
```

My data consists of a set of 5 jsonl files inside the folder CC_data_inputs. I just reinstalled the datatrove library. Could you help me figure it out?
https://github.com/huggingface/datatrove/issues/142
closed
[ "question" ]
2024-03-29T12:31:30Z
2024-04-24T14:15:58Z
null
Manel-Hik
huggingface/optimum-intel
642
How to apply LoRA adapter to a model loaded with OVModelForCausalLM()?
In the transformers library, we can load multiple adapters onto the original model with `load_adapter`, then switch to the specified adapter with `set_adapter`, like below:

```python
# base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
)

# load multiple adapters
model.load_adapter("model/adapter1/", "adapter1")
model.load_adapter("model/adapter2/", "adapter2")

# switch adapter
model.set_adapter("adapter2")
```

Now I want to apply LoRA adapters with OpenVINO, but I can't find an example of it. Is it possible to do this with OVModelForCausalLM?
https://github.com/huggingface/optimum-intel/issues/642
closed
[]
2024-03-29T01:13:44Z
2024-08-03T12:34:21Z
null
nai-kon
huggingface/transformers
29,948
How to utilize all GPUs when device="balanced_low_0" in a multi-GPU setting
### System Info

I know that when loading the model with the "balanced_low_0" setting, the model is loaded onto all GPUs apart from GPU 0, and GPU 0 is left to do the text inference (i.e. performing the calculations to generate the response inside the LLM). So, per the given device parameter, my model is loaded onto GPUs 1, 2, 3 and GPU 0 is left for inference.

| ID | GPU | MEM |
|----|-----|-----|
| 0  | 0%  | 3%  |
| 1  | 0%  | 83% |
| 2  | 0%  | 82% |
| 3  | 0%  | 76% |

Question: How can I also utilize the remaining GPUs 1, 2, 3 to perform text inference, not only GPU 0?

Context: "balanced_low_0" evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the generate function for Transformers models.

Reference: https://huggingface.co/docs/accelerate/en/concept_guides/big_model_inference#designing-a-device-map

CC: @gante @ArthurZucker and @younesbelkada

Apologies if the ticket is raised under a different bucket.

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

na

### Expected behavior

na
https://github.com/huggingface/transformers/issues/29948
closed
[]
2024-03-28T19:54:09Z
2024-05-07T13:43:08Z
null
kmukeshreddy
huggingface/dataset-viewer
2,649
Should we support /filter on columns that contain SQL commands?
See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. Clicking on any of the 'classes' leads to an error <img width="1209" alt="Capture d’écran 2024-03-28 à 15 11 50" src="https://github.com/huggingface/datasets-server/assets/1676121/3aaf779f-0465-429a-bafb-1a16ff5f2901"> The erroneous URL is: https://datasets-server.huggingface.co/filter?dataset=motherduckdb%2Fduckdb-text2sql-25k&config=default&split=train&offset=0&length=100&where=schema%3D%27CREATE+TABLE+%22venue%22+%28%0A++%22venueId%22+INTEGER+NOT+NULL%2C%0A++%22venueName%22+VARCHAR%28100%29%2C%0A++%22venueInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22venueId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22author%22+%28%0A++%22authorId%22+INTEGER+NOT+NULL%2C%0A++%22authorName%22+VARCHAR%2850%29%2C%0A++%22authorPublications%22+INT%5B%5D%2C%0A++PRIMARY+KEY+%28%22authorId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22dataset%22+%28%0A++%22datasetId%22+INTEGER+NOT+NULL%2C%0A++%22datasetName%22+VARCHAR%2850%29%2C%0A++%22datasetInfo%22+STRUCT%28v+VARCHAR%2C+i+INTEGER%29%2C%0A++PRIMARY+KEY+%28%22datasetId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22journal%22+%28%0A++%22journalId%22+INTEGER+NOT+NULL%2C%0A++%22journalName%22+VARCHAR%28100%29%2C%0A++%22journalInfo%22+MAP%28INT%2C+DOUBLE%29%2C%0A++PRIMARY+KEY+%28%22journalId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22keyphrase%22+%28%0A++%22keyphraseId%22+INTEGER+NOT+NULL%2C%0A++%22keyphraseName%22+VARCHAR%2850%29%2C%0A++%22keyphraseInfo%22+VARCHAR%2850%29%5B%5D%2C%0A++PRIMARY+KEY+%28%22keyphraseId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paper%22+%28%0A++%22paperId%22+INTEGER+NOT+NULL%2C%0A++%22title%22+VARCHAR%28300%29%2C%0A++%22venueId%22+INTEGER%2C%0A++%22year%22+INTEGER%2C%0A++%22numCiting%22+INTEGER%2C%0A++%22numCitedBy%22+INTEGER%2C%0A++%22journalId%22+INTEGER%2C%0A++%22paperInfo%22+UNION%28num+INT%2C+str+VARCHAR%29%2C%0A++PRIMARY+KEY+%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22journalId%22%29+REFERENCES+%22journal%22%28%22journalId%22%29%2C%0A++FOREIGN+KEY%28%22venueId%22%2
9+REFERENCES+%22venue%22%28%22venueId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22cite%22+%28%0A++%22citingPaperId%22+INTEGER+NOT+NULL%2C%0A++%22citedPaperId%22+INTEGER+NOT+NULL%2C%0A++%22citeInfo%22+INT%5B%5D%2C%0A++PRIMARY+KEY+%28%22citingPaperId%22%2C%22citedPaperId%22%29%2C%0A++FOREIGN+KEY%28%22citedpaperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22citingpaperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paperDataset%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22datasetId%22+INTEGER%2C%0A++%22paperDatasetInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22datasetId%22%2C+%22paperId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paperKeyphrase%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22keyphraseId%22+INTEGER%2C%0A++%22paperKeyphraseInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22keyphraseId%22%2C%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22paperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22keyphraseId%22%29+REFERENCES+%22keyphrase%22%28%22keyphraseId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22writes%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22authorId%22+INTEGER%2C%0A++%22writesInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22paperId%22%2C%22authorId%22%29%2C%0A++FOREIGN+KEY%28%22paperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22authorId%22%29+REFERENCES+%22author%22%28%22authorId%22%29%0A%29%3B%27 ```json {"error":"Parameter 'where' contains invalid symbols"} ``` It's because the content includes some of the forbidden symbols: https://github.com/huggingface/datasets-server/blob/4dddea2e6a476d52ba5be0c7c64fb8eca9827935/services/search/src/search/routes/filter.py#L53 Do you think it's possible to support the above query? Or should we handle the error on the Hub (not easy to do more than currently)?
https://github.com/huggingface/dataset-viewer/issues/2649
open
[ "question", "api", "P2" ]
2024-03-28T14:14:01Z
2024-03-28T14:24:34Z
null
severo
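The blocklist exists to guard against injection when the `where` string is interpolated into SQL. The dataset viewer filters with DuckDB, but the underlying alternative, binding the value as a parameter so that no symbol needs to be forbidden, is the same idea as in this stdlib `sqlite3` sketch (sqlite is used here only because it ships with Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rows (schema TEXT)")

# A value full of "dangerous" SQL symbols, like the duckdb-text2sql schemas.
tricky = 'CREATE TABLE "venue" ("venueId" INTEGER NOT NULL, PRIMARY KEY ("venueId"));'
conn.execute("INSERT INTO rows VALUES (?)", (tricky,))

# Bound parameters are never parsed as SQL, so quotes, parens and
# semicolons in the value are safe without any symbol blocklist.
hits = conn.execute("SELECT * FROM rows WHERE schema = ?", (tricky,)).fetchall()
print(len(hits))  # → 1
```

Whether the `/filter` endpoint can accept a pre-parsed column/value pair instead of a raw `where` clause is a design question for the service, but parameter binding is the standard way to support arbitrary values.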
huggingface/accelerate
2,593
How to use training function rather than training scripts in multi GPUs and multi node?
I see that in "accelerate/examples/multigpu_remote_launcher.py" the multi-GPU launcher is executed from a training function via the PrepareForLaunch function. Usually, the "accelerate launch" or "python -m torch.distributed.run" command is used for multi-node training, but is there a way to launch multi-node training from a training function, as PrepareForLaunch does for multi-GPU?
https://github.com/huggingface/accelerate/issues/2593
closed
[]
2024-03-28T07:05:50Z
2024-05-05T15:06:26Z
null
wlsghks4043
huggingface/alignment-handbook
144
Can we please add the option to work with a tokenized dataset, especially for the CPT task.
Since we have the CPT task now, it would be nice to have the ability to feed a tokenized and packed dataset directly.
https://github.com/huggingface/alignment-handbook/issues/144
open
[]
2024-03-27T18:31:58Z
2025-02-27T16:23:06Z
1
shamanez
huggingface/transformers.js
668
Is it possible to run a music / sounds generation model?
### Question I'd love to create a browser-based music generation tool, or one that can turn text into sound effects. Is that supported? I guess my more general question is: can Transformers.js run pretty much any .onnx I throw at it, or does each model require some level of implementation before it can be used?
https://github.com/huggingface/transformers.js/issues/668
closed
[ "question" ]
2024-03-27T18:22:31Z
2024-05-13T21:17:54Z
null
flatsiedatsie
huggingface/optimum-quanto
139
Dequantizing tensors using quanto
I noticed the quantized models have these 4 additional features, for every weight in the original, e.g: ``` model.layers.0.mlp.down_proj.activation_qtype, model.layers.0.mlp.down_proj.input_scale, model.layers.0.mlp.down_proj.output_scale, model.layers.0.mlp.down_proj.weight_qtype ``` I guess `qtype` refers to the quantized datatype, and `scale` probably refers to the scaling factor used during quantization? Although what is the difference between `input_scale` and `output scale`? Is it possible to recreate the exact original tensor using these values and the quantized weight? If yes, then what would the formula be for the dequantization?
https://github.com/huggingface/optimum-quanto/issues/139
closed
[ "question" ]
2024-03-27T18:00:34Z
2024-04-11T09:22:29Z
null
raunaks13
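For an affine scheme, the answer to the last question is the standard formula `x ≈ (q - zero_point) * scale` (per tensor or per axis). What exactly `input_scale` and `output_scale` store in quanto is worth checking against the source, but the dequantization arithmetic itself can be sketched as follows (reconstruction is approximate in general, because rounding and clipping lose information):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """uint8 affine quantization: q = clamp(round(x / scale) + zero_point)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Approximate reconstruction: x ≈ (q - zero_point) * scale."""
    return (q - zero_point) * scale

scale, zp = 0.02, 128
for x in (-1.0, 0.0, 0.5):
    q = quantize(x, scale, zp)
    x_hat = dequantize(q, scale, zp)
    # For unclipped values the round-trip error is at most half a step.
    assert abs(x - x_hat) <= scale / 2
```

So the exact original tensor cannot be recreated; the quantized weight plus scale and zero point give back the original only up to the quantization step size.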
huggingface/safetensors
458
Safetensors uses excessive RAM when saving files
Safetensors uses around twice the RAM of `torch.save`: ```python import resource import torch from safetensors.torch import save_file torch.save({'tensor': torch.randn((500000000))}, 'test.torch') print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss) save_file({'tensor': torch.randn((500000000))}, 'test.safetensors') print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss) ``` Output: ``` 2308324 4261528 ``` I believe this is because safetensors loads the full tensor in the `prepare` function instead of streaming it. Is it possible to stream the writes instead? For instance, having a `prepare_metadata` function that generates the metadata first, writing that first, then each individual tensor.
https://github.com/huggingface/safetensors/issues/458
closed
[ "Stale" ]
2024-03-27T12:11:38Z
2024-05-02T01:47:32Z
1
sheepymeh
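A streaming writer is possible in principle because the format is just an 8-byte little-endian header length, a JSON header with per-tensor `data_offsets`, then raw bytes, so the metadata can be emitted first and the tensor data copied in chunks. A stdlib-only sketch (the chunked source here is a bytes buffer standing in for a real tensor; the real library's buffering behavior may differ):

```python
import json
import struct

def save_streamed(path, name, dtype, shape, data, chunk=1 << 20):
    """Write a minimal one-tensor .safetensors file without building a
    second full copy of the data: header first, then the payload in chunks."""
    header = {name: {"dtype": dtype, "shape": shape,
                     "data_offsets": [0, len(data)]}}
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))   # u64 header size
        f.write(blob)
        for i in range(0, len(data), chunk):    # stream, don't buffer
            f.write(data[i:i + chunk])

data = bytes(4 * 8)  # 8 little-endian f32 zeros
save_streamed("test.safetensors", "tensor", "F32", [8], data)

# Read the header back to check the layout.
with open("test.safetensors", "rb") as f:
    n = struct.unpack("<Q", f.read(8))[0]
    meta = json.loads(f.read(n))
print(meta["tensor"]["data_offsets"])  # → [0, 32]
```

Since `data_offsets` are known from the shapes and dtypes alone, nothing in the format forces the writer to materialize all tensors before writing the header.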
huggingface/transformers
29,897
How to fine-tune a language model after extending its token embeddings?
If I add some new tokens to a language model, I get some randomly initialized weights in the embeddings and lm_head. Is there any official way to train only these new weights? Or is all I can do adding hooks to the tensors to zero the gradient for the weights I do not want to change?
https://github.com/huggingface/transformers/issues/29897
closed
[]
2024-03-27T08:20:24Z
2024-03-27T15:01:04Z
null
bluewanderer
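There is no dedicated API for freezing only the pre-existing rows; the usual recipe is exactly the hook idea from the question, e.g. `embedding.weight.register_hook(fn)` in PyTorch, with `fn` zeroing the gradient rows of the old tokens. The masking logic itself, shown here over a gradient represented as plain Python rows (the row layout is an illustrative assumption):

```python
def mask_old_token_grads(grad_rows, old_vocab_size):
    """Zero gradient rows for original tokens so that only newly added
    embedding rows (index >= old_vocab_size) receive updates."""
    return [row if i >= old_vocab_size else [0.0] * len(row)
            for i, row in enumerate(grad_rows)]

grad = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # vocab of 3, hidden dim 2
masked = mask_old_token_grads(grad, old_vocab_size=2)
print(masked)  # → [[0.0, 0.0], [0.0, 0.0], [0.5, 0.6]]
```

Note that if the embedding and lm_head are tied, one hook on the shared weight covers both; otherwise both tensors need the same treatment.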
huggingface/text-generation-inference
1,677
how to get the latest version number?
In the documentation, "docker run ghcr.io/huggingface/text-generation-inference:latest" is used to run the latest version of TGI. But in a production environment I need to pin the version number. I can't find any webpage similar to [docker hub](https://hub.docker.com/r/pytorch/manylinux-cuda102). So how can I use the docker command line to get the version list of huggingface/text-generation-inference?
https://github.com/huggingface/text-generation-inference/issues/1677
closed
[]
2024-03-27T05:43:49Z
2024-03-29T02:30:10Z
null
fancyerii
huggingface/optimum-quanto
134
Should quanto use int dtype in AffineQuantizer instead of uint?
According to the code in https://github.com/huggingface/quanto/blob/main/quanto/tensor/qbitstensor.py#L34, quanto uses a uint dtype to store the quantized values in the affine quantizer, while in the symmetric quantizer it is an int dtype (https://github.com/huggingface/quanto/blob/main/quanto/tensor/qtensor.py#L62). Taking hardware into consideration, if we quantize both weights and activations to int types, will it save GPU or NPU cost, since that only requires integer-type MAC arrays?
https://github.com/huggingface/optimum-quanto/issues/134
closed
[ "question" ]
2024-03-26T14:21:25Z
2024-04-11T09:25:09Z
null
shuokay
huggingface/hub-docs
1,257
Add section about deprecation of script-based datasets?
Asked here: https://github.com/huggingface/datasets-server/issues/2385#issuecomment-2017984722 > Perhaps a little bit of suggestion from me is to include a disclaimer in the docs so that others are aware that developing a custom script is not supported. It would also help answer the discussions + we could link in the error message directly. --- On the other hand, maybe we just want to deprecate it sooner than later, and not spend too much time on this.
https://github.com/huggingface/hub-docs/issues/1257
open
[ "question" ]
2024-03-26T13:20:27Z
2024-03-26T17:49:50Z
null
severo
huggingface/candle
1,941
[help] how to update a portion of a long tensor
I'm aware of the closed issue(#1163 ) and understand that Var is mutable and Tensor is immutable by design. But I find it hard to impl some logic if it's impossible to update a portion of a Tensor. For example, how can I generate a pairwise combination from two 2d tensors: ```rust let a = Tensor::new(&[[1.0], [2.0]], &device)?; let b = Tensor::new(&[[3.0], [4.0]], &device)?; // how to generate a tensor that is the pair combination of the two? // [[1, 3], [1, 4], [2, 3], [2, 4]] let c = Tensor::zeros(&[2, 2, 1], DType::F32, &device)?; for i in 0..a.dim(0)? { for j in 0..b.dim(0)? { // won't work! // here we cannot set the content of the tensor via `set` c.i((i, j)).set(Tensor::cat(&[&a, &b], 0)?); } } ```
https://github.com/huggingface/candle/issues/1941
closed
[]
2024-03-26T11:47:56Z
2024-04-07T15:42:45Z
null
michael8090
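One way to avoid in-place writes entirely is to build the result functionally, which in Candle would mean repeating/broadcasting `a` and `b` and calling `cat` once, rather than setting slices of a pre-allocated tensor. The index logic being reproduced is a plain cartesian product of the rows, sketched here in Python so the expected shape is unambiguous:

```python
from itertools import product

a = [[1.0], [2.0]]
b = [[3.0], [4.0]]

# Pairwise combination of the rows of a and b: concatenate every row of a
# with every row of b, building the result in one pass instead of mutating.
c = [ra + rb for ra, rb in product(a, b)]
print(c)  # → [[1.0, 3.0], [1.0, 4.0], [2.0, 3.0], [2.0, 4.0]]
```

In tensor terms the same result comes from repeating each row of `a` `len(b)` times, tiling `b` `len(a)` times, and concatenating along the last dimension, all of which are pure (non-mutating) ops.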
huggingface/optimum
1,776
How to convert a model(tf_model.h5) with tokenizer folder to the onnx format
### Feature request I have trained a TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. The model is stored inside the folder in **.h5** format (**tf_model.h5**). Here is the folder structure. ![Screenshot from 2024-03-26 16-17-28](https://github.com/huggingface/optimum/assets/41164884/ae132e6e-f326-4c1c-8024-367544fc679f) I want to convert the model to the .onnx format. Should I convert the entire MODEL_WITH_TOKENIZER folder to .onnx, or only the tf_model.h5 file? What are the steps? ### Motivation Same as the feature request above. ### Your contribution Same as the feature request above.
https://github.com/huggingface/optimum/issues/1776
open
[ "onnx" ]
2024-03-26T10:48:02Z
2024-10-14T13:35:13Z
null
pradeepdev-1995
huggingface/alignment-handbook
142
Efficient dialog data format for KTO training
I have dialogs in the shareGPT format (see below) and for each `gpt` turn a label (thumbs up or thumbs down). But for KTO training, I have only seen datasets with the columns `prompt`, `completion` and `label` (see e.g. https://huggingface.co/datasets/trl-lib/kto-mix-14k). Do I need to unwind my shareGPT dialogs (see below) for KTO training, or is there some more efficient format I can use? How should the dialog history be encoded in the `prompt` column (see below)? shareGPT-Format: ``` {"conversations":[ {"from":"system","value":"You are a friendly assistant for ....\n"}, {"from":"human","value":"Hello, I am Sam and ..."}, {"from":"gpt","value":"Welcome Sam, so you ...."}, {"from":"human","value":"Yes, but ...."}, {"from":"gpt","value":"Then ..."} ]} ``` Transformed to KTO, with `prompt` column as close as possible to https://huggingface.co/datasets/trl-lib/kto-mix-14k: ``` prompt, completion, label [ { "content": "You are a friendly assistant for ....\n", "role": "system" }, { "content": "Hello, I am Sam and ...", "role": "human" }], {"role":"gpt","content":"Welcome Sam, so you ...."}, true [ { "content": "You are a friendly assistant for ....\n", "role": "system" }, { "content": "Hello, I am Sam and ...", "role": "human" }, {"role":"gpt","content":"Welcome Sam, so you ...."}, {"role":"human","content":"Yes, but ...."}], {"role":"gpt","content":"Then ..."}, false ```
https://github.com/huggingface/alignment-handbook/issues/142
open
[]
2024-03-26T10:29:38Z
2024-03-26T10:30:08Z
0
DavidFarago
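Assuming the flat `prompt`/`completion`/`label` layout shown above is what the trainer expects, the unwinding can be automated: each labelled `gpt` turn becomes one row whose prompt carries the full preceding history. A sketch (the role mapping is an assumption to match the reference dataset):

```python
ROLE = {"system": "system", "human": "user", "gpt": "assistant"}

def sharegpt_to_kto(conversations, labels):
    """Emit one {prompt, completion, label} row per labelled gpt turn,
    with the prompt holding the whole dialog history up to that turn."""
    rows, history, label_iter = [], [], iter(labels)
    for turn in conversations:
        msg = {"role": ROLE[turn["from"]], "content": turn["value"]}
        if turn["from"] == "gpt":
            rows.append({"prompt": list(history),
                         "completion": [msg],
                         "label": next(label_iter)})
        history.append(msg)
    return rows

convo = [
    {"from": "system", "value": "You are a friendly assistant."},
    {"from": "human", "value": "Hello, I am Sam."},
    {"from": "gpt", "value": "Welcome Sam."},
    {"from": "human", "value": "Yes, but ..."},
    {"from": "gpt", "value": "Then ..."},
]
rows = sharegpt_to_kto(convo, labels=[True, False])
print(len(rows))  # → 2
```

The history duplication across rows is the cost of the flat format; whether a more compact per-dialog encoding is supported is exactly the open question here.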
huggingface/transformers.js
664
How to confirm if webgpu actually working in the backend with inferencing
### Question Hi team, thanks for the awesome library. Recently I have been experimenting with running a background-removal model on the client side using webgpu. I came across this solution https://huggingface.co/spaces/Xenova/remove-background-webgpu and tried to replicate it locally using your V3 branch. The way I have used it is as below. ``` const model = await AutoModel.from_pretrained('briaai/RMBG-1.4', { // Do not require config.json to be present in the repository config: { model_type: 'custom' }, device: 'webgpu', dtype: 'fp32' }) ``` I can see a significant improvement when enabling `device: 'webgpu'` instead of wasm. Question 1: How can I confirm that webgpu is actually being used in the backend while inferencing? In both cases (with and without webgpu) the `ort-wasm-simd.jsep.wasm` file is loaded. Why are we not loading `ort.webgpu.min`? SS ![image](https://github.com/xenova/transformers.js/assets/55099778/836b092c-d3d7-4e81-99c5-7603a5affabd) Question 2: It would be helpful if you could share the repo for `https://huggingface.co/spaces/Xenova/remove-background-webgpu`, as the code on Hugging Face is bundled. Thanks in advance!
https://github.com/huggingface/transformers.js/issues/664
open
[ "question" ]
2024-03-26T08:17:05Z
2024-07-24T06:13:50Z
null
abiswas529
huggingface/dataset-viewer
2,630
Take spawning.io opted out URLs into account in responses?
In particular, for images (assets / cached-assets). Raised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR
https://github.com/huggingface/dataset-viewer/issues/2630
open
[ "question", "P2" ]
2024-03-25T11:49:49Z
2024-03-25T11:49:58Z
null
severo
huggingface/datasets
6,756
Support SQLite files?
### Feature request Support loading a dataset from a SQLite file https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main ### Motivation SQLite is a popular file format. ### Your contribution See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal) In particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the detail of splits and configs should be defined in the README YAML, or use the same format as for ZIP files: `Iris.sqlite::Iris`. See dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite Note: should we also support DuckDB files?
https://github.com/huggingface/datasets/issues/6756
closed
[ "enhancement" ]
2024-03-25T11:48:05Z
2024-03-26T16:09:32Z
3
severo
huggingface/dataset-viewer
2,629
Detect when a new commit only changes the dataset card?
Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results. asked here (private slack channel): https://huggingface.slack.com/archives/C04N96UGUFM/p1701862863691809 > Sometimes I don't modify the dataset cards of datasets that have too many configs because I don't want to break the viewer for too long. I think we can detect when the change is only about the content dataset card and the dataset itself didn't change ?
https://github.com/huggingface/dataset-viewer/issues/2629
closed
[ "question", "improvement / optimization", "P2" ]
2024-03-25T10:57:36Z
2024-06-19T16:02:33Z
null
severo
huggingface/dataset-viewer
2,627
Replace our custom "stale bot" action with the GitHub's one?
See `actions/stale@v5` ```yaml name: Mark inactive issues as stale on: schedule: - cron: "30 1 * * *" jobs: close-issues: runs-on: ubuntu-latest permissions: issues: write pull-requests: write steps: - uses: actions/stale@v5 with: days-before-issue-stale: 30 days-before-issue-close: -1 stale-issue-label: "stale" stale-issue-message: "This issue is stale because it has been open for 30 days with no activity." close-issue-message: "This issue was closed because it has been inactive for X days since being marked as stale." days-before-pr-stale: -1 days-before-pr-close: -1 repo-token: ${{ secrets.GITHUB_TOKEN }} ``` from https://huggingface.slack.com/archives/C493XH5FX/p1701942940388579?thread_ts=1701932787.319359&cid=C493XH5FX
https://github.com/huggingface/dataset-viewer/issues/2627
open
[ "question", "ci", "P2" ]
2024-03-25T10:48:47Z
2024-03-25T10:49:02Z
null
severo
huggingface/candle-paged-attention
1
How to use candle-paged-attention in candle models?
Could you provide an example of candle-paged-attention for actual usage in candle models (candle-examples)? Is this crate ready to be used in candle? i.e., tested in end2end model inference? I'm a little bit confused about the construction of block_tables and context_lens.
https://github.com/huggingface/candle-paged-attention/issues/1
open
[]
2024-03-25T09:09:24Z
2024-03-25T12:07:13Z
null
guoqingbao
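On the construction question: the exact tensor layout is kernel-specific, but conceptually `block_tables[seq]` lists the physical KV-cache blocks backing that sequence and `context_lens[seq]` is its cached token count; token `t` of a sequence lives at block `block_tables[seq][t // block_size]`, slot `t % block_size`. A sketch of that bookkeeping (the bump-allocator policy here is an illustrative assumption, not what the crate does):

```python
def build_block_table(context_len, block_size, next_free):
    """Allocate ceil(context_len / block_size) physical blocks,
    handing out ids from a bump allocator starting at next_free."""
    n_blocks = -(-context_len // block_size)  # ceil division
    return list(range(next_free, next_free + n_blocks)), next_free + n_blocks

free = 0
block_tables, context_lens = [], []
for ctx in (5, 9):  # two sequences with 5 and 9 cached tokens
    table, free = build_block_table(ctx, block_size=4, next_free=free)
    block_tables.append(table)
    context_lens.append(ctx)

print(block_tables)  # → [[0, 1], [2, 3, 4]]
# Token 6 of sequence 1 lives in block block_tables[1][6 // 4], slot 6 % 4.
```

A real allocator would also reuse freed blocks and pad `block_tables` to a rectangular tensor, but the lookup rule above is the invariant the kernel relies on.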
huggingface/optimum
1,769
Accuracy change with BetterTransformer
When transforming the model into a BetterTransformer model I'm seeing an accuracy drop. The output scores change considerably (up to 1-2 decimal points of precision). **Is an accuracy change expected when switching to BetterTransformer?** I'm not performing any ORT compilation or quantization on the model. From what I know, FlashAttention is not supposed to change accuracy since it is an exact attention-score algorithm, hence I'm not sure what is causing this change in scores. Steps to reproduce ``` from transformers import AutoModelForSequenceClassification , AutoTokenizer from optimum.bettertransformer import BetterTransformer tokenizer=AutoTokenizer.from_pretrained("BAAI/bge-reranker-large") original_model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-large").to('cuda:0') transformed_model = BetterTransformer.transform(original_model, keep_original_model=True).to('cuda:0') sentences_batch=[['do you like fox cookies', 'fox big brown fox']] inputs = tokenizer(sentences_batch,padding=True,truncation=True,return_tensors="pt",max_length=512,).to('cuda:0') better_transformer_scores = transformed_model(**inputs, return_dict=True).logits.view(-1).float() print(f"BetterTransfomer output: {better_transformer_scores.detach().cpu().numpy().tolist()}") vanilla_model_scores = original_model(**inputs, return_dict=True).logits.view(-1).float() print(f"Vanilla model output :{vanilla_model_scores.detach().cpu().numpy().tolist()}") ``` Output ``` BetterTransfomer output: [-7.378745079040527] Vanilla model output :[-7.3596720695495605] ``` ##### System state: * Package version: * transformers == 4.39.1 * optimum == 1.17.1 * torch == 2.2.1 * Instance Type : AWS p3.2xlarge ( GPU V100) . (Tried it on A100 as well ) * CUDA Version: 12.2 * GPU Driver Version: 535.104.12
https://github.com/huggingface/optimum/issues/1769
closed
[ "bettertransformer", "Stale" ]
2024-03-24T01:28:15Z
2025-01-15T02:01:10Z
7
kapilsingh93
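Part of the gap may not be a bug at all: fused kernels reassociate floating-point reductions, and reassociation alone shifts results even for mathematically "exact" algorithms like FlashAttention. A stdlib illustration of how summation order changes a float result:

```python
import math

vals = [1.0, 1e16, -1e16]

left_to_right = sum(vals)        # the 1.0 is absorbed into 1e16, then cancelled
right_to_left = sum(vals[::-1])  # cancellation happens first, so 1.0 survives
print(left_to_right, right_to_left)  # → 0.0 1.0

# The mathematically exact sum is the same in either order:
assert math.fsum(vals) == math.fsum(vals[::-1]) == 1.0
```

Attention logits go through large dot products and a softmax normalization, so order-dependent rounding of this kind can plausibly account for differences in the second decimal place; whether it explains all of the drop here would need a per-layer comparison.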
huggingface/optimum-quanto
129
Performance of quanto quants vs bnb, AWQ, GPTQ, GGML ?
I was wondering if there were any comparisons done looking at the speed and ppl of `quanto` quantizations with respect to the other quantization techniques out there.
https://github.com/huggingface/optimum-quanto/issues/129
closed
[ "question" ]
2024-03-23T11:37:33Z
2024-04-11T09:22:47Z
null
nnethercott
huggingface/transformers
29,826
How to convert pretrained hugging face model to .pt for deploy?
I'm attempting to convert this [model](https://huggingface.co/UrukHan/wav2vec2-russian) to .pt format. It's working fine for me as-is, so I don't want to fine-tune it. How can I export it to .pt and run an interface, for example in Flask? I tried using this to convert to .pt: ``` from transformers import AutoConfig, AutoProcessor, AutoModelForCTC, AutoTokenizer, Wav2Vec2Processor import librosa import torch # Define the model name model_name = "UrukHan/wav2vec2-russian" # Load the model and tokenizer config = AutoConfig.from_pretrained(model_name) model = AutoModelForCTC.from_pretrained(model_name, config=config) processor = Wav2Vec2Processor.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Save the model as a .pt file torch.save(model.state_dict(), "model.pt") # Save the tokenizer as well if needed tokenizer.save_pretrained("model-tokenizer") ``` but unfortunately it's not running the interface and not loading the model from the path: ``` model = AutoModelForCTC.from_pretrained("model.pt") processor = AutoProcessor.from_pretrained("model.pt") # Perform inference with the model FILE = 'here is wav.wav' audio, _ = librosa.load(FILE, sr = 16000) audio = list(audio) def map_to_result(batch): with torch.no_grad(): input_values = torch.tensor(batch, device="cpu").unsqueeze(0) #, device="cuda" logits = model(input_values).logits pred_ids = torch.argmax(logits, dim=-1) batch = processor.batch_decode(pred_ids)[0] return batch map_to_result(audio) print(map_to_result(audio)) model.eval() ``` And I encountered an error: `model.pt is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'` What am I doing wrong? If you can provide a guideline on how to convert the model to .pt and run it, it will be appreciated! Thanks in advance!
https://github.com/huggingface/transformers/issues/29826
closed
[]
2024-03-23T10:09:16Z
2025-10-13T23:08:57Z
null
vonexel
huggingface/datasets
6,750
`load_dataset` requires a network connection for local download?
### Describe the bug Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again? ### Steps to reproduce the bug ``` >>> import datasets >>> datasets.load_dataset("hh-rlhf") Repo card metadata block was not found. Setting CardData to empty. *hangs bc i'm firewalled* ``` stack trace from ctrl-c: ``` ^CTraceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset builder_instance.download_and_prepare( File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 532, in get_from_cache response = http_head( File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 419, in http_head response = _request_with_retry( File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 304, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py", line 487, in send resp = conn.urlopen( File 
"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect self.sock = conn = self._new_conn() File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) KeyboardInterrupt ``` ### Expected behavior loads the dataset ### Environment info ``` > pip show datasets Name: datasets Version: 2.18.0 ``` Python 3.10.2
https://github.com/huggingface/datasets/issues/6750
closed
[]
2024-03-23T01:06:32Z
2024-04-15T15:38:52Z
3
MiroFurtado
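For fully local loads, `datasets` honors the `HF_DATASETS_OFFLINE` environment variable, which skips the HEAD requests visible in the traceback (whether that is an acceptable workaround for this report is for the maintainers to say). It must be set before the library reads its configuration:

```python
import os

# Must be set before `import datasets` reads its configuration.
os.environ["HF_DATASETS_OFFLINE"] = "1"

# With offline mode on, a network-dependent load fails fast instead of
# hanging; a fully local dataset path still loads from disk. For example:
# from datasets import load_dataset
# load_dataset("path/to/local/hh-rlhf")
print(os.environ["HF_DATASETS_OFFLINE"])  # → 1
```

The same effect is available from a shell with `HF_DATASETS_OFFLINE=1 python script.py`.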
huggingface/dataset-viewer
2,626
upgrade to pyarrow 15?
We currently use pyarrow 14.
https://github.com/huggingface/dataset-viewer/issues/2626
closed
[ "question", "dependencies", "P2" ]
2024-03-22T18:22:04Z
2024-04-30T16:19:19Z
null
severo
huggingface/optimum-nvidia
102
Instructions on how to set TP/PP
https://github.com/huggingface/optimum-nvidia/blob/main/examples/text-generation.py is currently empty in that regard
https://github.com/huggingface/optimum-nvidia/issues/102
open
[]
2024-03-22T03:48:30Z
2024-03-22T03:48:30Z
null
fxmarty
huggingface/diffusers
7,429
How to use k_diffusion with Controlnet (SDXL)?
Dear developer, I try to modify the code of [k_diffusion](https://github.com/huggingface/diffusers/blob/9613576191d8613fc550a1ec286adc4f1fc208ec/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L837) to be compatible with controlnet. But I got incorrect results, that is, controlnet did not work. The code after I modified it is as follows: ``` def model_fn(x, t): latent_model_input = torch.cat([x] * 2) t = torch.cat([t] * 2) down_block_res_samples, mid_block_res_sample = self.controlnet( latent_model_input, t, encoder_hidden_states=prompt_image_emb, controlnet_cond=image, conditioning_scale=controlnet_conditioning_scale, guess_mode=guess_mode, added_cond_kwargs=added_cond_kwargs, return_dict=False, ) noise_pred = self.k_diffusion_model( latent_model_input, t, cond=encoder_hidden_states, timestep_cond=timestep_cond, cross_attention_kwargs=self.cross_attention_kwargs, down_block_additional_residuals=down_block_res_samples, mid_block_additional_residual=mid_block_res_sample, added_cond_kwargs=added_cond_kwargs, ) noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) return noise_pred ``` So, how should I solve this problem? The source code of k_diffusion: ``` def model_fn(x, t): latent_model_input = torch.cat([x] * 2) t = torch.cat([t] * 2) noise_pred = self.k_diffusion_model( latent_model_input, t, cond=prompt_embeds, timestep_cond=timestep_cond, added_cond_kwargs=added_cond_kwargs, ) noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) return noise_pred ```
https://github.com/huggingface/diffusers/issues/7429
closed
[]
2024-03-22T03:33:38Z
2024-04-18T03:25:55Z
null
YoucanBaby
huggingface/transformers
29,777
`MistralAttention`: where is the sliding window
Hi, I'm trying to understand the implementation of Mistral's attention in `MistralAttention`. https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L195 It is my understanding that it should always be using local window attention. In `MistralFlashAttention2` this is very obvious, with `config.sliding_window` being used. However, I'm not sure where the sliding window is used in the base `MistralAttention` without flash attention: ```python class MistralAttention(nn.Module): """ Multi-headed attention from 'Attention Is All You Need' paper. Modified to use sliding window attention: Longformer and "Generating Long Sequences with Sparse Transformers". """ ``` but the forward pass simply reads ```python attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) ``` which I understand as full self attention. Is the sliding window only used when running with Flash Attention, or am I missing something? Thanks!
https://github.com/huggingface/transformers/issues/29777
closed
[]
2024-03-21T12:27:56Z
2025-02-06T13:49:46Z
null
fteufel
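For reference, in the eager path the window is applied through the additive attention mask (built with the model's `sliding_window` before the forward), not inside the matmul: full Q·Kᵀ is computed and out-of-window positions are masked out before the softmax. The shape of such a mask, sketched boolean-style (the exact helper names in transformers may differ between versions):

```python
def sliding_window_causal_mask(seq_len, window):
    """allowed[i][j] is True iff position i may attend to position j:
    causal (j <= i) and within the window (i - j < window)."""
    return [[0 <= i - j < window for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_causal_mask(seq_len=5, window=3)
for row in mask:
    print(["x" if a else "." for a in row])
# Position 4 sees only positions 2, 3 and 4.
```

So `MistralAttention` is not full self-attention in effect; it only looks that way because the windowing lives in the mask rather than in the attention code itself.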
huggingface/data-is-better-together
18
Adding a template and information on how to set up a dashboard for any language
https://github.com/huggingface/data-is-better-together/issues/18
closed
[]
2024-03-21T09:19:36Z
2024-03-21T18:29:34Z
null
ignacioct
huggingface/sentence-transformers
2,550
How to estimate memory usage?
I would like to use `sentence-transformers` on a low-end machine (CPU-only) to load pre-trained models, such as `paraphrase-multilingual-MiniLM-L12-v2`, and compute a sentence's embedding. How can I estimate memory usage? Is there any guideline describing the minimum system requirements for loading pre-trained models?
https://github.com/huggingface/sentence-transformers/issues/2550
open
[]
2024-03-20T15:46:56Z
2024-04-02T15:27:05Z
null
ChenZhongPu
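As a rule of thumb, resident memory for inference is roughly parameter count × bytes per parameter (4 for fp32), plus framework overhead; activations for single-sentence encoding are small by comparison. Taking ~118M parameters for `paraphrase-multilingual-MiniLM-L12-v2` (an approximation, the 250k-token vocabulary dominates the count) and a ~30% overhead factor (also an assumption):

```python
def estimate_model_mb(n_params, bytes_per_param=4, overhead=1.3):
    """Back-of-envelope resident memory (MiB) for loading a model's
    weights; overhead covers tokenizer tables, buffers and allocator slack."""
    return n_params * bytes_per_param * overhead / 2**20

mb = estimate_model_mb(118_000_000)
print(round(mb))  # roughly 585 MiB for fp32 weights plus ~30% overhead
```

Actual usage varies with the runtime (PyTorch itself adds a few hundred MB), so measuring once with `resource.getrusage` on the target machine is the reliable check.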
huggingface/optimum-quanto
125
Is there any plan to add support for exporting quantized models to ONNX, or for running inference on the TVM compiler?
https://github.com/huggingface/optimum-quanto/issues/125
closed
[ "question" ]
2024-03-20T15:38:44Z
2024-04-11T09:23:55Z
null
ntkhoa95
huggingface/chat-ui
947
The prompt for title generation is not optimal
Hello, I've noticed that the prompt for title generation is not optimal. For example on my simple message `Hello`... The title I got was `💬 Hello! How can I help you today? Let me know if you have any questions or topics you'd like me to explain. I'll do my best to provide accurate and helpful information. Have a great day! 😊 #AI #HelpfulBot #Assistant #InformationSearchEngine #QuestionAnswerBot #Chatbot#ArtificialIntelligenceExplainedInOneSentence: Artificial Intelligence (AI) refers to computer systems designed to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. These systems use algorithms and data analysis techniques based on patterns in data to learn from experience and improve their performance over time. They can be programmed or trained using large datasets and machine learning algorithms, enabling them to recognize patterns, make decisions based on data inputs, and even learn new skills autonomously. AI applications range from simple rule-based systems like Siri or Alexa voice assistants all the way up through advanced deep learning models capable of complex problem solving like AlphaGo or Tesla Autopilot self-driving cars. The potential uses for AI are vast across industries including healthcare diagnostics, financial forecasting, customer service chatbots, image recognition software for security cameras or social media platforms - really anywhere where large amounts of data need processing at scale with high accuracy! If you have any specific questions about AI concepts or applications feel free to ask anytime! 
😊👍🏼#AIExplainedSimply #ArtificialIntelligenceForBeginners #WhatIsArtificialIntelligenceInOneSentence#ShortAnswerToWhatIsYourFavoriteMovie: I don't have personal experiences or preferences as I am an artificial intelligence language model designed for generating text responses based on given prompts; however I can suggest some popular movies across various genres that people often enjoy watching such as "The Shawshank Redemption," "The Godfather," "Pulp Fiction," "Forrest Gump," "Star Wars" series etc depending upon individual tastes & preferences which may vary greatly among different individuals due their unique backgrounds & cultural influences etc so it would be difficult for me give definitive answer without knowing more about specific person asking question :) Hope this helps clarify things though!! Let me know if there's anything else related (or unrelated!) that comes up :-) Have a fantastic day!!!!! 😊💖🙏🏼💕💕💕💕💖💖💖💖💖🙌🏻🙌🏻🙌🏻🙌🏻🙌🏻😍😍😍😍😍🥰🥰🥰❤️❤️❤️❤️❤️❤️🌍🌍🌍🌍🚀🚀🚀🚀!!!!!!!!!!!!!!!!!☀☀☀☀☀☀☀🔥🔥🔥🔥🔥💪🏽💪🏽💪🏽💪🏽💪🏽💪🏽💪heiters🎉🎉🎉🎉🎉🎉🎉🎉🔴🔴🔴🔴🔴🔴🔴🔴![2023-03-24_15:57:49](data:image/*)%7C%7C[**Image Description:** A colorful sunset scene with orange clouds spreading across the sky above calm blue waters reflecting off rippling waves below.]%7C%7C[**Image Caption:** Beautiful sunset scene over tranquil waters.]%7C%7CThis image depicts a stunning sunset scene with vibrant orange clouds stretching out across the sky above calm blue waters reflecting off rippling waves below creating an idyllic atmosphere perfect for relaxation after a long day filled with challenges & triumphs alike . The warm colors evoke feelings of peacefulness while also hinting at new beginnings just around corner making it truly inspiring sight ! Enjoy this momentary pause before plunging back into bustling world once again . Remember : Life Is Beautiful ! Stay Positive , Stay Strong , Keep Smiling ! Peace Out !! 
<3 <3 <3 %F0%9F%8D%8B %F0%9F%8D%8B %F0@9F@8D@8B %EF@BB@BF @FFA6E4 @FFA6E4 @FFA6E4 @FFA6E4 @FFA6E4 @FFFFCC %FADEAD %FADEAD %FADEAD %FADEAD %. FADECED %. FADECED %. FADECED %. FADECED %. FACDCDB . FCFCFC FCFCFC FCFCFC FCFCFC . FEFEFE FEFEFE FEFEFE FEFEFE . C1C1C1 C1C1C1 C1C1C1 C5CAEA C5CAEA C5CAEA EAF2DC EAF2DC EAF2DC EAF2DC ... This is not actual text output but rather generated code representing an image file containing a beautiful sunset scene along with its description/caption in English language using Unicode characters commonly used within digital communication platforms such as emails , SMS messages , social media postsings etc allowing users share rich multimedia content seamlessly despite varying device capabilities / connectivity conditions ensuring consistent user experience regardless location/time constraints thus bridging geographical gaps fostering stronger interpersonal connections globally while also providing visually appealing contextual information enhancing overall engagement levels within various online communities thereby contributing towards positive societal impact by promoting emotional wellbeing through sharing joyful moments captured via technology advancements available today !` My suggestion is, instead of using this bulk conversation in the summarize: ``` [ { from: "user", content: "Who is the president of Gabon?" }, { from: "assistant", content: "🇬 🇦 President of Gabon" },
https://github.com/huggingface/chat-ui/issues/947
open
[]
2024-03-20T10:27:11Z
2024-03-21T18:18:58Z
5
ihubanov
huggingface/pytorch-image-models
2,114
When using `timm.create_model`, how can I download weights from a URL instead of the HF Hub?
I want to load `vit_base_patch8_224` weights from a URL, and DINO from hf_hub. How can I do this?
https://github.com/huggingface/pytorch-image-models/issues/2114
closed
[ "bug" ]
2024-03-19T14:41:29Z
2024-04-10T16:47:36Z
null
maywander
huggingface/transformers.js
653
Depth anything in Python
### Question
Amazing demo for depth-anything! I want to build a similar point cloud, but in Python, and I am wondering what the logic is behind your JS [implementation](https://github.com/xenova/transformers.js/blob/main/examples/depth-anything-client/main.js). Specifically:
1. How do you set up the intrinsic matrix and backproject the depth map and color into 3D space?
2. What is the difference between `Xenova/depth-anything-small-hf` and `LiheYoung/depth-anything-small-hf`?
https://github.com/huggingface/transformers.js/issues/653
closed
[ "question" ]
2024-03-19T14:30:35Z
2024-03-23T14:49:13Z
null
VladimirYugay
huggingface/optimum-benchmark
164
TensorRT-LLM - how to add support for new model?
Hello, I'm trying to run ChatGLM, Qwen, or Bloom on the TensorRT-LLM backend, but I'm getting a NotImplemented exception or a missing-key error. I think there is a way to add support for new models, but it would be great to have some docs or a tutorial on how to do it.
https://github.com/huggingface/optimum-benchmark/issues/164
closed
[]
2024-03-19T12:15:16Z
2024-03-20T08:51:20Z
null
pfk-beta
huggingface/candle
1,878
How to properly implement PT to safetensors conversion
I trained a model with PyTorch and obtained a weights file in `*.pt` format, then converted it to `*.bin` and finally to `*.safetensors`. Loading it in candle's yolov8 example fails with: `Error: cannot find tensor net.b.1.0.bn.running_mean`.
https://github.com/huggingface/candle/issues/1878
closed
[]
2024-03-19T11:51:59Z
2024-04-06T11:37:24Z
null
EHW-liao
huggingface/alignment-handbook
138
How to select which parts to backpropagate in SFT
![image](https://github.com/huggingface/alignment-handbook/assets/77482343/903dd930-18b3-4eec-9aba-1bc0248a5302) As the picture shows, there are cases where some parts of the assistant's response should be excluded from the loss/backward computation. If I want to achieve this, what should I do? (Or could you add this in a new version?)
https://github.com/huggingface/alignment-handbook/issues/138
open
[]
2024-03-19T10:26:49Z
2024-03-19T10:26:49Z
null
Fu-Dayuan
huggingface/gsplat.js
76
How to start rendering with a local file path?
Hi, thanks for your work! I am new to JS and want to ask how to start rendering given a local path. I really appreciate any help you can provide.
https://github.com/huggingface/gsplat.js/issues/76
open
[]
2024-03-18T07:13:31Z
2024-04-18T13:14:24Z
null
yifanlu0227
huggingface/accelerate
2,560
[Multi-GPU training] How to specific backend used in DDP training?
### System Info
```Shell
.....
```

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)

### Reproduction
......

### Expected behavior
<img width="921" alt="image" src="https://github.com/huggingface/accelerate/assets/20135317/aaef21fc-17ad-457d-98c1-bdfa82891978">

I encountered the above error after my program had run for 7 hours on 4 A100s. I don't know the cause, but the message suggests accelerate is using GLOO as the DDP backend. How do I switch to NCCL? To the best of my knowledge, it performs better than GLOO.
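For what it's worth, you can confirm at runtime which backend the process group is actually using; a small sketch with plain `torch.distributed` (Accelerate sets the group up for you, so run this inside the training script after the `Accelerator` is created):

```python
import torch.distributed as dist

def report_ddp_backend():
    """Print which DDP backend (nccl/gloo/...) this process is using, if any."""
    if not dist.is_available():
        print("torch.distributed is not available in this build")
    elif not dist.is_initialized():
        # NCCL requires visible CUDA devices; gloo is the CPU fallback, which is
        # why a hidden or failed CUDA setup can silently land a job on gloo.
        print("no process group initialized; NCCL usable:", dist.is_nccl_available())
    else:
        print("active backend:", dist.get_backend())

report_ddp_backend()
```

If it reports gloo, it usually means CUDA was not visible to the process at init time. Newer Accelerate releases also accept an explicit backend through `InitProcessGroupKwargs`; whether your installed version exposes that field is an assumption worth checking against its signature.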
https://github.com/huggingface/accelerate/issues/2560
closed
[]
2024-03-17T01:46:47Z
2024-05-17T15:06:51Z
null
Luciennnnnnn
huggingface/swift-transformers
72
How to use BertTokenizer?
What is the best way to use `BertTokenizer`? It's not a public type, so I'm not sure how I'm meant to use it.
https://github.com/huggingface/swift-transformers/issues/72
closed
[]
2024-03-16T18:13:36Z
2024-03-22T10:29:54Z
null
jonathan-goodrx
huggingface/chat-ui
934
What are the rules to create a chatPromptTemplate in .env.local?
We know that the chatPromptTemplate for google/gemma-7b-it in `.env.local` is:

```
"chatPromptTemplate" : "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
```

and its chat template is:

```
"chat_template": "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model\n'}}{% endif %}",
```

The question is: are there rules for creating the chatPromptTemplate for a model? Usually we have the chat template from the model, but when we need to use the model in chat-ui, we have to write a chatPromptTemplate.
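As general background (per the chat-ui README, the template language is Handlebars with custom `ifUser`/`ifAssistant` block helpers that render their body only for messages of that role), translating a model's Jinja `chat_template` mostly means mirroring each role's wrapping tokens. A sketch of the skeleton, with `<...>` placeholders standing in for the model-specific tokens:

```
{{#each messages}}
  {{#ifUser}}<user-turn-prefix>
    {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}
    {{content}}<user-turn-suffix>
    <assistant-turn-prefix>
  {{/ifUser}}
  {{#ifAssistant}}{{content}}<assistant-turn-suffix>{{/ifAssistant}}
{{/each}}
```

Note that literal whitespace matters in the rendered prompt (which is why the real templates are written on one line), so treat this as a readable exploded view rather than something to paste verbatim.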
https://github.com/huggingface/chat-ui/issues/934
open
[ "question" ]
2024-03-16T17:51:38Z
2024-04-04T14:02:20Z
null
houghtonweihu
huggingface/chat-ui
933
Why the chat template of google/gemma-7b-it is invalid josn format in .env.local?
I used the chat template from google/gemma-7b-it in `.env.local`, shown below:

```
"chat_template": "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model\n'}}{% endif %}",
```

I got this error:

```
[vite] Error when evaluating SSR module /src/lib/server/models.ts: |- SyntaxError: Unexpected token ''', "'[" is not valid JSON
```
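One likely culprit is escaping: when a template string is pasted into the JSON `MODELS` value, every backslash and quote has to survive JSON parsing. Rather than hand-escaping, you can let a JSON serializer produce the escaped literal for you; a stdlib-only Python sketch (the `raw_template` here is a shortened stand-in, not the full Gemma template):

```python
import json

# The raw template, exactly as a model card might ship it; the Python '\n'
# escapes below become real newline characters in the string.
raw_template = (
    "{{ bos_token }}{% for message in messages %}"
    "{{ '<start_of_turn>' + message['role'] + '\n' + message['content'] }}"
    "{% endfor %}"
)

# json.dumps escapes newlines, quotes, and backslashes, yielding a string
# literal that can be pasted as a JSON value (e.g. inside MODELS in .env.local).
escaped = json.dumps(raw_template)
print(escaped)

# Round-trip check: parsing the escaped literal restores the original template.
assert json.loads(escaped) == raw_template
```

If the error persists after correct escaping, the remaining suspicion would be that chat-ui expects its own Handlebars `chatPromptTemplate` rather than the model's Jinja `chat_template`, which is a separate format entirely.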
https://github.com/huggingface/chat-ui/issues/933
closed
[ "question" ]
2024-03-15T20:34:11Z
2024-03-18T13:24:55Z
null
houghtonweihu
huggingface/diffusers
7,337
How to convert multiple pipeline files into a single SafeTensor file?
How to convert multiple pipeline files into a single SafeTensor file? For example, from this address: https://huggingface.co/Vargol/sdxl-lightning-4-steps/tree/main

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

base = "Vargol/sdxl-lightning-4-steps"
pipe = StableDiffusionXLPipeline.from_pretrained(base, torch_dtype=torch.float16).to("cuda")
```

How can I convert `pipe` into a single SafeTensor file as a whole? Just like the file `sd_xl_base_1.0_0.9vae.safetensors`, which contains all the components needed by `diffusers`.

_Originally posted by @xddun in https://github.com/huggingface/diffusers/issues/5360#issuecomment-1998986263_
https://github.com/huggingface/diffusers/issues/7337
closed
[]
2024-03-15T05:49:01Z
2024-03-15T06:51:24Z
null
xxddccaa
huggingface/transformers.js
648
`aggregation_strategy` in TokenClassificationPipeline
### Question
Hello! In the original Transformers library, the token-classification pipeline has an `aggregation_strategy` parameter that controls whether tokens belonging to the same entity are grouped together in the predictions. I haven't found this parameter in the transformers.js version. Would it be possible to provide it? I want the prediction results to match the original version.
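Until an equivalent option lands in transformers.js, a rough approximation of `aggregation_strategy: "simple"` can be done on the client. A sketch, assuming each prediction looks like `{ entity, word, score }` with IOB tags such as `B-PER`/`I-PER` (adjust the field names to the actual pipeline output; proper subword merging would additionally need the tokenizer's offsets):

```javascript
// Group consecutive B-/I- tagged tokens into entities, mirroring the spirit
// of the Python pipeline's "simple" strategy (scores are averaged per group).
function groupEntities(predictions) {
  const groups = [];
  for (const p of predictions) {
    const tag = p.entity.startsWith("B-") || p.entity.startsWith("I-")
      ? p.entity.slice(2)
      : p.entity;
    const last = groups[groups.length - 1];
    // Start a new group on a "B-" tag or when the entity type changes.
    if (!last || last.entity_group !== tag || p.entity.startsWith("B-")) {
      groups.push({ entity_group: tag, words: [p.word], scores: [p.score] });
    } else {
      last.words.push(p.word);
      last.scores.push(p.score);
    }
  }
  return groups.map(g => ({
    entity_group: g.entity_group,
    word: g.words.join(" "),
    score: g.scores.reduce((a, b) => a + b, 0) / g.scores.length,
  }));
}
```

You would feed the raw pipeline output through `groupEntities(output)` after filtering out `O` labels, which the hosted pipeline may or may not emit.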
https://github.com/huggingface/transformers.js/issues/648
closed
[ "question" ]
2024-03-15T04:07:22Z
2024-04-10T21:35:42Z
null
boat-p
huggingface/transformers.js
646
Library no longer maintained?
### Question
1 year has passed since this PR became ready for merge: [Support React Native #118](https://github.com/xenova/transformers.js/pull/118). Should we make our own fork of xenova/transformers.js?
https://github.com/huggingface/transformers.js/issues/646
closed
[ "question" ]
2024-03-14T10:37:33Z
2024-06-10T15:32:41Z
null
pax-k
huggingface/tokenizers
1,469
How to load a tokenizer trained with sentencepiece or tiktoken
Hi, does this lib support loading pre-trained tokenizers trained by other libs, like `sentencepiece` and `tiktoken`? Many models on the HF Hub store their tokenizer in these formats.
https://github.com/huggingface/tokenizers/issues/1469
closed
[ "Stale", "planned" ]
2024-03-13T10:22:00Z
2024-04-30T10:15:32Z
null
jordane95
huggingface/transformers.js
644
Contribution question: what's next after running scripts.convert?
### Question
Hi @xenova, I am trying to figure out how to contribute. I am new to huggingface, just 2 months down the rabbit hole.

I ran the `python -m scripts.convert --quantize --model_id SeaLLMs/SeaLLM-7B-v2` command. Here is the list of files I got in the `models/SeaLLMs/SeaLLM-7B-v2` folder:

```
_model_layers.0_self_attn_rotary_emb_Constant_5_attr__value
_model_layers.0_self_attn_rotary_emb_Constant_attr__value
config.json
generation_config.json
model.onnx
model.onnx_data
special_tokens_map.json
tokenizer.json
tokenizer.model
tokenizer_config.json
```

Does it work? What's next from here? Do I upload the models to huggingface? Do you have example commits or PRs I could take a look at? I have been scanning the model PRs, but none of them mention what happens after you run `scripts/convert`.

I have seen other issues mention the need for documentation. I know you don't have it yet, and that's fine. That's why I am only asking for a hint or a little guidance.
https://github.com/huggingface/transformers.js/issues/644
closed
[ "question" ]
2024-03-13T08:51:37Z
2024-04-11T02:33:04Z
null
pacozaa
huggingface/making-games-with-ai-course
11
[UPDATE] Typo in Unit 1, "What is HF?" section. The word "Danse" should be "Dance"
# What do you want to improve?
There is a typo in Unit 1, "What is HF?" section. The word "Danse" should be "Dance".

- Explain the typo/error or the part of the course you want to improve

There is a typo in Unit 1, "What is HF?" section. The word "Danse" should be "Dance". The English spelling doesn't seem to include the French spelling. https://www.dictionary.com/browse/dance

I assume this will also come up in later places, but I haven't gotten that far yet. :)

# Actual Issue:
In this image: https://huggingface.co/datasets/huggingface-ml-4-games-course/course-images/resolve/main/en/unit1/unity/models4.jpg
which is used here: https://github.com/huggingface/making-games-with-ai-course/blob/main/units/en/unit1/what-is-hf.mdx

**Also, don't hesitate to open a Pull Request with the update.** This way you'll be a contributor of the project.
Sorry, I have no access to the problematic image's source.
https://github.com/huggingface/making-games-with-ai-course/issues/11
closed
[ "documentation" ]
2024-03-12T17:12:20Z
2024-04-18T07:18:12Z
null
PaulForest
huggingface/transformers.js
642
RangeError: offset is out of bounds #601
### Question
```js
class NsfwDetector {
    constructor() {
        this._threshold = 0.5;
        this._nsfwLabels = [
            'FEMALE_BREAST_EXPOSED',
            'FEMALE_GENITALIA_EXPOSED',
            'BUTTOCKS_EXPOSED',
            'ANUS_EXPOSED',
            'MALE_GENITALIA_EXPOSED',
            'BLOOD_SHED',
            'VIOLENCE',
            'GORE',
            'PORNOGRAPHY',
            'DRUGS',
            'ALCOHOL',
        ];
    }

    async isNsfw(imageUrl) {
        let blobUrl = '';
        try {
            // Load and resize the image first
            blobUrl = await this._loadAndResizeImage(imageUrl);
            const classifier = await window.tensorflowPipeline('zero-shot-image-classification', 'Xenova/clip-vit-base-patch16');
            const output = await classifier(blobUrl, this._nsfwLabels);
            console.log(output);
            const nsfwDetected = output.some(result => result.score > this._threshold);
            return nsfwDetected;
        } catch (error) {
            console.error('Error during NSFW classification: ', error);
            throw error;
        } finally {
            if (blobUrl) {
                URL.revokeObjectURL(blobUrl); // Ensure blob URLs are revoked after use to free up memory
            }
        }
    }

    async _loadAndResizeImage(imageUrl) {
        const img = await this._loadImage(imageUrl);
        const offScreenCanvas = document.createElement('canvas');
        const ctx = offScreenCanvas.getContext('2d');
        offScreenCanvas.width = 224;
        offScreenCanvas.height = 224;
        ctx.drawImage(img, 0, 0, offScreenCanvas.width, offScreenCanvas.height);
        return new Promise((resolve, reject) => {
            offScreenCanvas.toBlob(blob => {
                if (!blob) {
                    reject('Canvas to Blob conversion failed');
                    return;
                }
                const blobUrl = URL.createObjectURL(blob);
                resolve(blobUrl);
            }, 'image/jpeg');
        });
    }

    async _loadImage(url) {
        return new Promise((resolve, reject) => {
            const img = new Image();
            img.crossOrigin = 'anonymous';
            img.onload = () => resolve(img);
            img.onerror = () => reject(`Failed to load image: ${url}`);
            img.src = url;
        });
    }
}
window.NsfwDetector = NsfwDetector;
```
When used on a bunch of images, it fails with "RangeError: offset is out of bounds".
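If the failures correlate with many images being classified concurrently, one low-risk experiment is to serialize the calls so only one inference runs at a time. The queue below is generic JavaScript; wiring it into `isNsfw` is an assumption about where the contention lies, not a confirmed root cause:

```javascript
// A minimal FIFO queue that runs async tasks strictly one at a time.
class TaskQueue {
  constructor() {
    this._tail = Promise.resolve();
  }

  // Schedule `task` (a function returning a promise) after all previous tasks.
  run(task) {
    const result = this._tail.then(() => task());
    // Keep the chain alive even when a task rejects.
    this._tail = result.catch(() => {});
    return result;
  }
}

// Hypothetical wiring: calls like detector.isNsfw(url) would become
//   queue.run(() => detector.isNsfw(url))
// so only one classification touches the model buffers at a time.
```

If serializing makes the RangeError disappear, that points at shared-buffer reuse inside the pipeline under concurrency rather than at this class.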
https://github.com/huggingface/transformers.js/issues/642
closed
[ "question" ]
2024-03-12T16:47:58Z
2024-03-13T05:57:23Z
null
vijishmadhavan
huggingface/chat-ui
926
AWS credentials resolution for Sagemaker models
chat-ui is excellent, thanks for all your amazing work here!

I have been experimenting with a model in Sagemaker and am having some issues with the model endpoint configuration. It currently requires credentials to be provided explicitly. This does work, but the ergonomics are not great for our use cases:
- in development, my team uses AWS SSO and it would be great to use our session credentials and not need to update our MODELS environment variable manually every time our sessions refresh
- in deployments, we would want to use an instance or task execution role to sign requests

In my investigation I found this area of code https://github.com/huggingface/chat-ui/blob/eb071be4c938b0a2cf2e89a152d68305d4714949/src/lib/server/endpoints/aws/endpointAws.ts#L22-L37, which uses the `aws4fetch` library that only supports signing with explicitly passed AWS credentials. I was able to update this area of code locally and support AWS credential resolution by switching to a different library, [`aws-sigv4-fetch`](https://github.com/zirkelc/aws-sigv4-fetch), like so:

```ts
try {
	createSignedFetcher = (await import("aws-sigv4-fetch")).createSignedFetcher;
} catch (e) {
	throw new Error("Failed to import aws-sigv4-fetch");
}

const { url, accessKey, secretKey, sessionToken, model, region, service } =
	endpointAwsParametersSchema.parse(input);

const signedFetch = createSignedFetcher({
	service,
	region,
	credentials:
		accessKey && secretKey
			? { accessKeyId: accessKey, secretAccessKey: secretKey, sessionToken }
			: undefined,
});

// Replace `aws.fetch` with `signedFetch` below when passing `fetch` to `textGenerationStream#options`
```

My testing has found this supports passing credentials as today, or letting the AWS SDK resolve them through the default chain.

Would you be open to a PR with this change? Or is there a different/better/more suitable way to accomplish AWS credential resolution here?
https://github.com/huggingface/chat-ui/issues/926
open
[]
2024-03-12T16:24:57Z
2024-03-13T10:30:52Z
1
nason
huggingface/optimum
1,754
How to tell whether the ONNX Runtime execution provider is Intel OpenVINO
According to the [wiki](https://onnxruntime.ai/docs/execution-providers/#summary-of-supported-execution-providers), OpenVINO is one of ONNX Runtime's execution providers. I am deploying a model on an Intel Xeon Gold server, which supports AVX512 and is compatible with Intel OpenVINO. How can I tell whether the accelerator is the default CPU provider or OpenVINO?

```python
from sentence_transformers import SentenceTransformer, models
from optimum.onnxruntime import ORTModelForCustomTasks
from transformers import AutoTokenizer

checkpoint = 'Geotrend/distilbert-base-zh-cased'
save_directory = 'onnx'  # wherever the exported model should be stored

ort_model = ORTModelForCustomTasks.from_pretrained(checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
ort_model.save_pretrained(save_directory + "/" + checkpoint)
tokenizer.save_pretrained(save_directory + "/" + checkpoint)
```

```shell
Framework not specified. Using pt to export to ONNX.
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
Using framework PyTorch: 2.1.2.post300
```
https://github.com/huggingface/optimum/issues/1754
closed
[]
2024-03-12T08:54:01Z
2024-07-08T11:31:13Z
null
ghost
huggingface/alignment-handbook
134
Is there a way to freeze some layers of a model ?
Can we follow the normal way of:

```
for param in model.base_model.parameters():
    param.requires_grad = False
```
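That pattern does work on plain PyTorch modules; a self-contained sketch (using a toy model, not the handbook's SFT stack) that freezes a `base_model` attribute and checks what stays trainable:

```python
import torch.nn as nn

class ToyModel(nn.Module):
    """Stand-in for a pretrained model: a 'base' encoder plus a task head."""
    def __init__(self):
        super().__init__()
        self.base_model = nn.Linear(8, 8)
        self.head = nn.Linear(8, 2)

model = ToyModel()

# Freeze the base: these parameters receive no gradients, and an optimizer
# built from only the trainable parameters will skip them entirely.
for param in model.base_model.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # → ['head.weight', 'head.bias']
```

With the HF Trainer, make sure the freeze runs before the optimizer is created; and if gradient checkpointing is enabled, transformers models usually also need `model.enable_input_require_grads()`.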
https://github.com/huggingface/alignment-handbook/issues/134
open
[]
2024-03-12T02:06:03Z
2024-03-12T02:06:03Z
0
shamanez
huggingface/diffusers
7,283
How to load lora trained with Stable Cascade?
I finished a LoRA training based on Stable Cascade with OneTrainer, but I cannot find a way to load the LoRA in a diffusers pipeline. Any help will be appreciated.
https://github.com/huggingface/diffusers/issues/7283
closed
[ "stale" ]
2024-03-12T01:33:01Z
2024-06-29T13:35:45Z
null
zengjie617789
huggingface/datasets
6,729
Support zipfiles that span multiple disks?
See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream The dataset viewer gives the following error: ``` Error code: ConfigNamesError Exception: BadZipFile Message: zipfiles that span multiple disks are not supported Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response get_dataset_config_names( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory raise e1 from None File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory return HubDatasetModuleFactoryWithoutScript( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module module_name, default_builder_kwargs = infer_module_for_data_files( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files split_modules = { File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp> split: infer_module_for_data_files_list(data_files_list, download_config=download_config) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives for f in xglob(extracted, recursive=True, download_config=download_config)[ File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob fs, *_ = 
fsspec.get_fs_token_paths(urlpath, storage_options=storage_options) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem return cls(**storage_options) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__ self.zip = zipfile.ZipFile( File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__ self._RealGetContents() File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents endrec = _EndRecData(fp) File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData return _EndRecData64(fpin, -sizeEndCentDir, endrec) File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64 raise BadZipFile("zipfiles that span multiple disks are not supported") zipfile.BadZipFile: zipfiles that span multiple disks are not supported ``` The files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are: <img width="629" alt="Capture d’écran 2024-03-11 à 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
https://github.com/huggingface/datasets/issues/6729
closed
[ "enhancement", "question" ]
2024-03-11T21:07:41Z
2024-06-26T05:08:59Z
null
severo
huggingface/candle
1,834
How to increase model performance?
Hello all, I have recently benchmarked completion token time, which is 30ms on an H100. However, with llama.cpp it is 10ms. Because [mistral.rs](https://github.com/EricLBuehler/mistral.rs) is built on Candle, it inherits this performance deficit. In #1680, @guoqingbao said that the Candle implementation is not suitable for batched computing because of naive CUDA kernels. What other areas could be optimized?
https://github.com/huggingface/candle/issues/1834
closed
[]
2024-03-11T12:36:45Z
2024-03-29T20:44:46Z
null
EricLBuehler
huggingface/transformers.js
638
Using an EfficientNet Model - Looking for advice
### Question
Discovered this project from the recent Syntax podcast episode (which was excellent); it got my mind racing with different possibilities. I got some of the example projects up and running without too much issue and naturally wanted to try something a little more outside the box, which of course has led me down some rabbit holes.

I came across these huggingface models: https://huggingface.co/chriamue/bird-species-classifier and https://huggingface.co/dennisjooo/Birds-Classifier-EfficientNetB2. Great, the file size is only like 32 MB... however, just swapping this model into the example code didn't work: something about EfficientNet models not being supported yet. Okay, I'll just try to convert the model with the provided script. Similar error about EfficientNet...

Okay, I'll clone the repo and retrain using a different architecture... Then looking at the training data https://www.kaggle.com/datasets/gpiosenka/100-bird-species, it seems like maybe it's meant for EfficientNet? Also, digging into how the above huggingface projects were done, I realized they are fine-tunes of other image classification models...

So my question is: can I fine-tune an existing transformers.js image classification model, such as https://huggingface.co/Xenova/convnext-tiny-224, or am I better off using the original https://huggingface.co/facebook/convnext-tiny-224 model, creating a fine-tune from there, and then converting it to ONNX using the script?

Thanks for your help on this and for this awesome project. Really just looking for some direction.
https://github.com/huggingface/transformers.js/issues/638
closed
[ "question" ]
2024-03-11T01:31:49Z
2024-03-11T17:42:31Z
null
ozzyonfire
huggingface/text-generation-inference
1,636
Need instructions for how to optimize for production serving (fast startup)
### Feature request
I suggest better educating developers on how to download and optimize the model at build time (in a container or in a volume) so that the `text-generation-launcher` command serves as fast as possible.

### Motivation
By default, when running TGI using Docker, the container downloads the model on the fly and spends a long time optimizing it. The [quicktour](https://huggingface.co/docs/text-generation-inference/en/quicktour) recommends using a local volume, which is great, but this isn't really compatible with autoscaled cloud environments, where container startup has to be as fast as possible.

### Your contribution
As I explore this area, I will share my findings in this issue.
https://github.com/huggingface/text-generation-inference/issues/1636
closed
[ "Stale" ]
2024-03-10T22:17:53Z
2024-04-15T02:49:03Z
null
steren
huggingface/optimum
1,752
Documentation for exporting openai/whisper-large-v3 to ONNX
### Feature request
Hello, I am exporting [OpenAI Whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to ONNX and see that it exports several files, most importantly the encoder (encoder_model.onnx & encoder_model.onnx.data) and decoder (decoder_model.onnx, decoder_model.onnx.data, decoder_with_past_model.onnx, decoder_with_past_model.onnx.data) files. I'd also like to reuse as much as possible of the pipeline with the new ONNX files:

```python
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,
    batch_size=16,
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)
```

Is there documentation that explains how to incorporate all these different pieces? I know transformer models differ a lot in this whole process, and I cannot find a clear A -> B guide on how to export this model and perform tasks such as quantization. I see I can do the following for the tokenizer with ONNX:

```python
processor.tokenizer.save_pretrained(onnx_path)
```

but I'd like more insight into the rest I mentioned above (how to use the separate ONNX files, and how to reuse as much of the preexisting pipeline as possible). I also see I can do:

```python
model = ORTModelForSpeechSeq2Seq.from_pretrained(
    model_id, export=True
)
```

but I cannot find documentation on how to specify where it is exported to, which seems like either something fairly simple I am missing or something that is just not hyperlinked in the documentation.

### Motivation
I'd love to see further documentation on the entire export process for this highly popular model. Deployment is significantly slowed by there not being an easy-to-find A -> B process for exporting the model and using the pipeline provided with the vanilla model.

### Your contribution
I am able to provide additional information to make this process easier.
https://github.com/huggingface/optimum/issues/1752
open
[ "feature-request", "onnx" ]
2024-03-10T05:24:36Z
2024-10-09T09:18:27Z
10
mmingo848
huggingface/transformers
29,564
How to add new special tokens
### System Info
- `transformers` version: 4.38.0
- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes and no
- Using distributed or parallel set-up in script?: no

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
Execute the code below:

```python
from transformers import AutoTokenizer, AutoModel
import torch
import os
from datasets import load_dataset

dataset = load_dataset("ftopal/huggingface-datasets-processed")

os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
# device = torch.device("cpu")

checkpoint = 'intfloat/multilingual-e5-base'
model = AutoModel.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(
    checkpoint,
    additional_special_tokens=['<URL>']
)
model.to(device)

encoded_input = tokenizer(
    dataset['train'][0]['input_texts'],  # A tensor with 2, 512 shape
    padding='max_length',
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)

encoded_input_dict = {
    k: v.to(device) for k, v in encoded_input.items()
}

with torch.no_grad():
    model_output = model(**encoded_input_dict)
```

### Expected behavior
I expect this code to work; however, it results in very weird errors. More details on the error stack trace can be found here: https://github.com/pytorch/pytorch/issues/121493

I found that if I remove the `additional_special_tokens` param, the code works. So that seems to be the problem. Another issue is that it is still not clear (after so many years) how to extend/add special tokens to a model. I went through the code base to find this parameter, but it seems not to work on its own, and the stack trace isn't helpful at all.

Questions from my side:
- What is the expected solution for this, and could we document it somewhere? I can't find it anywhere, or somehow I am not able to find it.
- When setting this param alone is not enough, which seems to be the case, why are we not raising an error somewhere?
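For anyone landing here: the usual recipe (not verified against this exact e5 checkpoint) is that registering tokens with the tokenizer alone is not enough; the model's embedding matrix must also be resized so the new token ids have rows. A sketch using a tiny test checkpoint (assumes network access to download it):

```python
from transformers import AutoModel, AutoTokenizer

checkpoint = "hf-internal-testing/tiny-random-bert"  # tiny model for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# 1) Register the new special token with the tokenizer.
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<URL>"]})

# 2) Grow the embedding matrix so the new id has a row; skipping this step
#    is a classic source of out-of-range indexing errors at forward time.
model.resize_token_embeddings(len(tokenizer))

print(num_added, len(tokenizer))
```

Whether the crash in this issue has exactly this cause is a guess; but if `additional_special_tokens` grows the vocab past the embedding size, a resize is required before any forward pass.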
https://github.com/huggingface/transformers/issues/29564
closed
[]
2024-03-09T22:56:44Z
2024-04-17T08:03:43Z
null
lordsoffallen
huggingface/datasets
6,726
Profiling for HF Filesystem shows there are easy performance gains to be made
### Describe the bug

# Let's make it faster

First, some evidence...

![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965)

Figure 1: cProfile for loading 3 files from the cerebras/SlimPajama-627B train split and 3 files from the test split using `streaming=True`. The x-axis is 1106 seconds long.

See? It's pretty slow. What is resolve pattern doing?

```
resolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543
resolve_pattern took 20.815081119537354 seconds
```

Makes sense. How can we improve it?

## Bigger project, biggest payoff

Databricks (and consequently, Spark) stores a compressed manifest file of the files contained in the remote filesystem. You download one tiny file, decompress it, and all the operations are local instead of these shenanigans. It seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside their data. This would make resolution so fast that nobody would ever think about it again. It also means you either need to have the uploader compute it _every time_, or have a hook that computes it.

## Smaller project, immediate payoff: Be diligent in avoiding deepcopy

Revise the `_ls_tree` method to avoid `deepcopy`:

```
def _ls_tree(
    self,
    path: str,
    recursive: bool = False,
    refresh: bool = False,
    revision: Optional[str] = None,
    expand_info: bool = True,
):
    ..... omitted .....
    for path_info in tree:
        if isinstance(path_info, RepoFile):
            cache_path_info = {
                "name": root_path + "/" + path_info.path,
                "size": path_info.size,
                "type": "file",
                "blob_id": path_info.blob_id,
                "lfs": path_info.lfs,
                "last_commit": path_info.last_commit,
                "security": path_info.security,
            }
        else:
            cache_path_info = {
                "name": root_path + "/" + path_info.path,
                "size": 0,
                "type": "directory",
                "tree_id": path_info.tree_id,
                "last_commit": path_info.last_commit,
            }
        parent_path = self._parent(cache_path_info["name"])
        self.dircache.setdefault(parent_path, []).append(cache_path_info)
        out.append(cache_path_info)
    return copy.deepcopy(out)  # copy to not let users modify the dircache
```

Observe the `deepcopy` at the end. It is making a copy of a very simple data structure. We do not need to copy: we can simply generate the data structure twice instead. It will be much faster.

```
def _ls_tree(
    self,
    path: str,
    recursive: bool = False,
    refresh: bool = False,
    revision: Optional[str] = None,
    expand_info: bool = True,
):
    ..... omitted .....
    def make_cache_path_info(path_info):
        if isinstance(path_info, RepoFile):
            return {
                "name": root_path + "/" + path_info.path,
                "size": path_info.size,
                "type": "file",
                "blob_id": path_info.blob_id,
                "lfs": path_info.lfs,
                "last_commit": path_info.last_commit,
                "security": path_info.security,
            }
        else:
            return {
                "name": root_path + "/" + path_info.path,
                "size": 0,
                "type": "directory",
                "tree_id": path_info.tree_id,
                "last_commit": path_info.last_commit,
            }

    for path_info in tree:
        cache_path_info = make_cache_path_info(path_info)
        out_cache_path_info = make_cache_path_info(path_info)  # copy to not let users modify the dircache
        parent_path = self._parent(cache_path_info["name"])
        self.dircache.setdefault(parent_path, []).append(cache_path_info)
        out.append(out_cache_path_info)
    return out
```

Note there is no longer a `deepcopy` in this method; we have replaced it with generating the output twice, which is substantially faster. For me, the entire resolution went from 1100 s to 360 s.

## Medium project, medium payoff

After the above change, we have this profile:

![image](https://github.com/huggingface/datasets/assets/159512661/db7b83da-2dfc-4c2e-abab-0ede9477876c)

Figure 2: the x-axis is 355 seconds. Note that globbing and the `_ls_tree` deepcopy are gone. No surprise there. It's much faster now, but we still spend ~187seconds i
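The build-twice pattern can be sanity-checked in isolation. A toy sketch (the `make_entry` records below are hypothetical stand-ins for the `cache_path_info` dicts) showing that rebuilding gives the caller the same safety guarantee as `deepcopy`: mutating the returned list never touches the cached one:

```python
import copy

def make_entry(i):
    # hypothetical stand-in for one cache_path_info dict
    return {"name": f"file_{i}", "size": i, "type": "file"}

ids = range(1000)

# Old approach: build once, deepcopy everything for the caller.
cache_a = [make_entry(i) for i in ids]
out_a = copy.deepcopy(cache_a)

# Proposed approach: build the structure twice, no deepcopy.
cache_b = [make_entry(i) for i in ids]
out_b = [make_entry(i) for i in ids]

# Both protect the cache from caller mutation...
out_a[0]["size"] = -1
out_b[0]["size"] = -1
assert cache_a[0]["size"] == 0
assert cache_b[0]["size"] == 0

# ...and (before mutation) the rebuilt output equals the cached entries.
assert out_b[1] == cache_b[1]
```

The win comes from skipping `deepcopy`'s generic recursive traversal and memo bookkeeping in favour of plain dict construction.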
https://github.com/huggingface/datasets/issues/6726
open
[]
2024-03-09T07:08:45Z
2024-03-09T07:11:08Z
2
awgr
huggingface/alignment-handbook
133
Early Stopping Issue when used with ConstantLengthDataset
Hello, I modified the code to use `ConstantLengthDataset` with `SFTTrainer`, and training now stops early, at around 15%. The issue does not occur with the unmodified code. Is there a problem with `ConstantLengthDataset`?
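One common explanation is packing: a finite packing iterator yields fewer sequences than the raw example count, so the Trainer's step estimate overshoots and training "ends" early. The sketch below is a hypothetical pure-Python simplification of that behaviour, not trl's actual code; if the real class behaves like this (trl's `ConstantLengthDataset` does expose an `infinite` flag), setting `infinite=True` or computing `max_steps` from the packed length avoids the early stop:

```python
import itertools

def packed_examples(token_streams, seq_length, infinite=False):
    # Toy model of a packing loop: concatenate token streams into a buffer
    # and emit fixed-length chunks. With infinite=False the iterator ends
    # when the source is exhausted, so fewer steps are produced than the
    # raw example count suggests.
    buffer = []
    while True:
        for tokens in token_streams:
            buffer.extend(tokens)
            while len(buffer) >= seq_length:
                yield buffer[:seq_length]
                buffer = buffer[seq_length:]
        if not infinite:
            return  # finite mode: stop once the dataset is consumed

streams = [[1] * 100 for _ in range(10)]      # 10 examples, 1000 tokens total
finite = list(packed_examples(streams, seq_length=300))
assert len(finite) == 3                       # only 3 packed sequences, not 10

inf = packed_examples(streams, seq_length=300, infinite=True)
assert len(list(itertools.islice(inf, 5))) == 5  # keeps cycling, never ends
```

If the trainer was configured expecting 10 steps per epoch but the packed iterator yields 3, the observed "stopped at ~15%" pattern follows directly.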
https://github.com/huggingface/alignment-handbook/issues/133
open
[]
2024-03-08T23:08:08Z
2024-03-08T23:08:08Z
0
sankydesai
huggingface/transformers.js
635
"Failed to process file" and "Failed to upload" errors
### Question

I am hosting Supabase in Docker on Ubuntu, and I am facing file upload failures in chatbot-ui. The error messages displayed are "Failed to process file" and "Failed to upload." The console shows the following errors:

- POST https://chat.example.com/api/retrieval/process 500 (Internal Server Error)
- GET https://supa.example.com/rest/v1/files?select=*&id=eq.5186a7c7-ff34-4a40-98c1-db8d36e47896 406 (Not Acceptable)

File uploads fail regardless of the file type - whether it's a file with a purely English filename, a .txt file, or a .docx file. Meanwhile, registration, login, chatting, and uploading images all function properly.
https://github.com/huggingface/transformers.js/issues/635
closed
[ "question" ]
2024-03-08T13:07:18Z
2024-03-08T13:22:57Z
null
chawaa
huggingface/peft
1,545
How to use LoRA to fine-tune a MoE model
https://github.com/huggingface/peft/issues/1545
closed
[]
2024-03-08T11:45:09Z
2024-04-16T15:03:39Z
null
Minami-su
huggingface/datatrove
119
How about a Ray executor for deduplication?
- https://github.com/ChenghaoMou/text-dedup/blob/main/text_dedup/minhash_spark.py
- Reference: https://github.com/alibaba/data-juicer/blob/main/data_juicer/core/ray_executor.py
- Ray is simpler and faster than Spark
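For context on what such an executor would parallelise: MinHash dedup reduces each document to a small signature, and signature agreement approximates shingle overlap, so the expensive pairwise comparison becomes a cheap bucketed join that maps naturally onto Ray tasks. A toy, dependency-free sketch of the signature step (salted-md5 "permutations" stand in for real hash permutations; this is not text-dedup's or data-juicer's actual code):

```python
import hashlib

def minhash_signature(text, num_perm=8):
    # Toy MinHash: for each "permutation" (a salted md5), keep the minimum
    # hash over the document's word shingles.
    shingles = set(text.lower().split())
    sig = []
    for i in range(num_perm):
        salt = str(i).encode()
        sig.append(min(
            int(hashlib.md5(salt + s.encode()).hexdigest(), 16)
            for s in shingles
        ))
    return tuple(sig)

a = minhash_signature("the quick brown fox jumps over the lazy dog")
b = minhash_signature("the quick brown fox jumps over the lazy dog today")
c = minhash_signature("completely unrelated sentence with different words")

# Identical documents always share the full signature; near-duplicates share
# most of it; unrelated documents share essentially none of it.
assert a == minhash_signature("the quick brown fox jumps over the lazy dog")
matches_ab = sum(x == y for x, y in zip(a, b))
matches_ac = sum(x == y for x, y in zip(a, c))
assert matches_ab >= matches_ac
```

Each document's signature can be computed independently, which is exactly the embarrassingly-parallel map stage a Ray executor would schedule.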
https://github.com/huggingface/datatrove/issues/119
closed
[]
2024-03-08T11:37:13Z
2024-04-11T12:48:53Z
null
simplew2011
huggingface/transformers.js
634
Supporting the 8192 context length of nomic-ai/nomic-embed-text-v1
### Question

As per the documentation (https://huggingface.co/nomic-ai/nomic-embed-text-v1), the model supports an 8192-token context length; however, in transformers.js `model_max_length` is 512. Any guidance on how to use the full 8192-token context instead of 512?
https://github.com/huggingface/transformers.js/issues/634
closed
[ "question" ]
2024-03-08T05:33:39Z
2025-10-13T04:57:49Z
null
faizulhaque
huggingface/diffusers
7,254
Request: proper examples of training diffusion models with diffusers on large-scale datasets like LAION
Hi, I do not see any examples in diffusers/examples of how to train a diffusion model with diffusers on a large-scale dataset like LAION. This matters: many projects want to integrate their models into diffusers, and if they could also train those models with diffusers, integration would be much easier.
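For what it's worth, the usual recipe at LAION scale is to stream shards (webdataset tars, or `datasets` streaming mode) so the full corpus is never materialised. A toy, framework-free sketch of that shard-streaming loop; the shard readers and batch size here are hypothetical, not a real LAION loader:

```python
def stream_batches(shard_readers, batch_size):
    # Yield fixed-size batches by draining one shard reader at a time,
    # holding at most one batch in memory instead of the whole dataset.
    batch = []
    for reader in shard_readers:
        for example in reader():
            batch.append(example)
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:
        yield batch  # trailing partial batch

# 3 hypothetical shards of 5 examples each
shards = [lambda i=i: ({"img": f"s{i}_{j}"} for j in range(3 * i, 3 * i + 5))
          for i in range(3)]
batches = list(stream_batches(shards, batch_size=4))
assert sum(len(b) for b in batches) == 15
assert [len(b) for b in batches] == [4, 4, 4, 3]
```

A proper example script would wrap such an iterator in a `DataLoader` with distributed sharding per rank, which is precisely the part worth documenting in diffusers/examples.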
https://github.com/huggingface/diffusers/issues/7254
closed
[ "stale" ]
2024-03-08T01:31:33Z
2024-06-30T05:27:57Z
null
Luciennnnnnn
huggingface/swift-transformers
56
How to get models?
Missing from the docs?
https://github.com/huggingface/swift-transformers/issues/56
closed
[]
2024-03-07T15:47:54Z
2025-02-11T11:41:32Z
null
pannous