| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 30,827 | Running `optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/` produces an ONNX model whose tensor type becomes int64. How can this be solved? | ### System Info
transformers version : 4.38.1
platform: ubuntu 22.04
python version : 3.10.14
optimum version : 1.19.2
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Reference for the conversion command: https://huggingface.co/docs/transformers/v4.40.1/zh/serialization
2. Download the model files offline (https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/tree/main)
3. Run the conversion command: optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/
The conversion results are as follows:
(mypy3.10_qnn) zhengjr@ubuntu-ThinkStation-P3-Tower:~$ optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/
2024-05-15 19:42:07.726433: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 19:42:07.916257: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-15 19:42:07.997974: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-05-15 19:42:08.545959: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2024-05-15 19:42:08.546100: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2024-05-15 19:42:08.546104: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Framework not specified. Using pt to export the model.
The task `text-generation` was manually specified, and past key values will not be reused in the decoding. if needed, please pass `--task text-generation-with-past` to export using the past key values.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
***** Exporting submodel 1/1: Qwen2ForCausalLM *****
Using framework PyTorch: 1.13.1
Overriding 1 configuration item(s)
- use_cache -> False
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py:114: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if (input_shape[-1] > 1 or self.sliding_window is not None) and self.is_causal:
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/optimum/exporters/onnx/model_patcher.py:300: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if past_key_values_length > 0:
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:126: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if seq_len > self.max_seq_len_cached:
/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:290: TracerWarning: Converting a tensor to a Python boole | https://github.com/huggingface/transformers/issues/30827 | closed | [] | 2024-05-15T12:45:50Z | 2024-06-26T08:04:10Z | null | JameslaoA |
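A hedged note that may answer the question in this row: int64 *input* tensors (e.g. `input_ids`, `attention_mask`) are expected for transformer ONNX exports — the weights themselves stay floating point. The stdlib-only sketch below illustrates the ONNX `TensorProto` elem-type codes you would see when inspecting `model.graph.input`; the codes come from the ONNX spec, while the helper function itself is hypothetical.

```python
# Subset of ONNX TensorProto.DataType codes, as defined in the ONNX spec.
ONNX_ELEM_TYPES = {1: "FLOAT", 6: "INT32", 7: "INT64", 10: "FLOAT16"}

def describe_input(name: str, elem_type: int) -> str:
    """Render one graph input the way a viewer such as Netron shows it."""
    return f"{name}: {ONNX_ELEM_TYPES.get(elem_type, 'UNKNOWN')}"

# Token-id inputs being INT64 (code 7) is normal, not a conversion bug:
print(describe_input("input_ids", 7))
print(describe_input("attention_mask", 7))
```

If a downstream runtime truly requires int32 inputs, that is a post-export conversion step, not something the export itself got wrong.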
huggingface/chat-ui | 1,142 | Feature request: local assistants | I experimented with a few assistants on HF.
The problem I am facing is that I don't know how to get the same behaviour I get on HF from a local model (which is the same model).
I tried everything I could think of.
I think HF does some filtering or rephrasing, or has an additional prompt before the assistant description.
Please help.
I am available for chat on discord https://discordapp.com/users/Zibri/ | https://github.com/huggingface/chat-ui/issues/1142 | open | [
"support"
] | 2024-05-15T11:11:29Z | 2024-05-27T06:53:21Z | 2 | Zibri |
huggingface/optimum | 1,855 | How to change Optimum's temporary path? | ### Feature request
My C drive has little space.
### Motivation
It would help solve many issues.
### Your contribution
Don't know. | https://github.com/huggingface/optimum/issues/1855 | closed | [] | 2024-05-14T11:17:14Z | 2024-10-14T12:22:35Z | null | neonarc4 |
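One possible answer, hedged (based on the documented Hugging Face environment variables, not on anything stated in this issue): the cache root used by the Hugging Face libraries is controlled by `HF_HOME`, so pointing it at a roomier drive *before* importing transformers/optimum moves downloads and intermediate files off the C: drive. The `D:/hf_cache` path below is a hypothetical example.

```python
import os

# Hypothetical directory on a drive with more free space:
os.environ["HF_HOME"] = "D:/hf_cache"

# Hugging Face libraries imported *after* this point resolve their cache
# (hub downloads, datasets, etc.) under HF_HOME.
print(os.environ["HF_HOME"])
```

For genuinely temporary files, the standard `TMPDIR`/`TEMP` variables apply as well.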
huggingface/optimum | 1,854 | ai21labs/Jamba-tiny-random support | ### Feature request
The ai21labs/Jamba-tiny-random model is not supported by the Optimum ONNX export.
ValueError: Trying to export a jamba model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type jamba to be supported natively in the ONNX export.
### Motivation
Jamba is potentially very significant as it has a large context but a small size. This could be used in lots of scenarios if it has good performance.
### Your contribution
Unlikely I could do a PR as ONNX work is not my forte. | https://github.com/huggingface/optimum/issues/1854 | open | [
"feature-request",
"onnx"
] | 2024-05-14T10:22:05Z | 2024-10-09T09:10:58Z | 0 | frankia312 |
huggingface/transformers.js | 763 | Have you considered using WASM technology to implement this library? | ### Question
Hello, have you ever considered using WASM technology to implement this library? For example, Rust's wgpu-rs and C++'s Dawn are both implementations of WebGPU. They can be compiled to WASM and can also be accelerated with SIMD.
"question"
] | 2024-05-14T09:22:57Z | 2024-05-14T09:28:38Z | null | ghost |
huggingface/trl | 1,643 | How to save and resume a checkpoint from PPOTrainer | https://github.com/huggingface/trl/blob/5aeb752053876cce64f2164a178635db08d96158/trl/trainer/ppo_trainer.py#L203
It seems that every time the PPOTrainer is initialized, the accelerator is initialized as well. There's no API provided by PPOTrainer to resume checkpoints. How can we save and resume checkpoints? | https://github.com/huggingface/trl/issues/1643 | closed | [] | 2024-05-14T09:10:40Z | 2024-08-08T12:44:25Z | null | paraGONG |
huggingface/tokenizers | 1,531 | How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification | Hello.
I'm using the tokenizer to encode sentence pairs with TemplateProcessing via encode_batch.
There's a confusing part where the method requires two lists, one for sentence A and one for sentence B.
According to the [guide documentation](https://huggingface.co/docs/tokenizers/quicktour): "To process a batch of sentences pairs, pass two lists to the Tokenizer.encode_batch method: the list of sentences A and the list of sentences B."
Since it instructs to input two lists, it seems like [[A1, A2], [B1, B2]] --(encode)-> {A1, B1}, {A2, B2}.
However, the actual input expects the individual pairs batched together, not the sentence pairs split into separate lists for A and B.
So, it should be [[A1, B1], [A2, B2]] to encode as {A1, B1}, {A2, B2}.
I've also confirmed that the length of the input list for encode_batch keeps increasing with the number of batches.
Since the guide instructs to input sentence A and sentence B, this is where the confusion arises.
If I've misunderstood anything, could you help clarify this point so I can understand it better? | https://github.com/huggingface/tokenizers/issues/1531 | closed | [
"Stale"
] | 2024-05-14T08:03:52Z | 2024-06-21T08:20:05Z | null | insookim43 |
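The pairing the issue arrives at — `[[A1, B1], [A2, B2]]` rather than `[[A1, A2], [B1, B2]]` — can be built from two parallel lists with `zip`. A pure-Python sketch of the two shapes (no `tokenizers` dependency):

```python
sentences_a = ["A1", "A2"]
sentences_b = ["B1", "B2"]

# Misreading of the quick-tour wording: two parallel lists in one batch
misread = [sentences_a, sentences_b]          # [["A1", "A2"], ["B1", "B2"]]

# Shape encode_batch actually expects: one (A, B) tuple per sentence pair
pairs = list(zip(sentences_a, sentences_b))   # one entry per pair

print(pairs)
```

With a real tokenizer this would then be `tokenizer.encode_batch(pairs)`, which also explains why the input list length grows with the number of pairs.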
huggingface/transformers.js | 762 | Options for the "translation" pipeline when using Xenova/t5-small | ### Question
The translation pipeline is [documented](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline) to use {src_lang and tgt_lang} options to translate from the src language to the tgt language. However, when using Xenova/t5-small none of the options seem to be used. Instead looking at the demo code it appears that you have to change the pipeline.task field to "translation_{fromLanguage}_to_{targetLanguage}" but I can't find a way to normalize the usage of the translation pipeline with different models.
Is this task pattern documented somewhere or am I missing some other option settings when calling the translation pipeline?
| https://github.com/huggingface/transformers.js/issues/762 | open | [
"question"
] | 2024-05-13T21:09:15Z | 2024-05-13T21:09:15Z | null | lucapivato |
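As the issue observes for T5-style checkpoints, the demo encodes the language pair in the task name itself instead of using `src_lang`/`tgt_lang`. A sketch of that naming pattern (an observation from the demo code, not a documented API guarantee):

```python
def t5_translation_task(src: str, tgt: str) -> str:
    # Pattern the demo uses: "translation_{from}_to_{to}"
    return f"translation_{src}_to_{tgt}"

task = t5_translation_task("en", "fr")
print(task)  # translation_en_to_fr
```

In transformers.js this would presumably be passed as the pipeline task, e.g. `pipeline('translation_en_to_fr', 'Xenova/t5-small')`; multilingual models such as the NLLB ports are the ones that honour `src_lang`/`tgt_lang`.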
huggingface/datasets | 6,894 | Better document defaults of to_json | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | https://github.com/huggingface/datasets/issues/6894 | closed | [
"documentation"
] | 2024-05-13T13:30:54Z | 2024-05-16T14:31:27Z | 0 | albertvillanova |
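For readers landing here: the default `to_json` output is JSON Lines — one JSON object per line. A stdlib-only sketch contrasting the two formats (the `lines` flag name follows the pandas-style signature `to_json` exposes; treat it as an assumption if your version differs):

```python
import json

records = [{"id": 1, "text": "a"}, {"id": 2, "text": "b"}]

# JSON Lines (the default, lines=True): one object per line
json_lines = "\n".join(json.dumps(r) for r in records)

# A single JSON array (lines=False)
json_array = json.dumps(records)

print(json_lines)
print(json_array)
```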
huggingface/chat-ui | 1,134 | Websearch failed on retrieving from pdf files | On chat ui I am getting the error as shown in screenshot, on pdf files it always says "Failed to parse webpage". I set USE_LOCAL_WEBSEARCH=True in .env.local. can anyone help me.

| https://github.com/huggingface/chat-ui/issues/1134 | open | [
"support",
"websearch"
] | 2024-05-13T06:41:08Z | 2024-06-01T09:25:59Z | 2 | prateekvyas1996 |
huggingface/parler-tts | 47 | Custom pronunciation for words - any thoughts / recommendations about how best to handle them? | Hello! This is a really interesting looking project.
Currently there doesn't seem to be any way for users to help the model correctly pronounce custom words - for instance **JPEG** is something that speakers just need to know is broken down as "**Jay-Peg**" rather than **Jay-Pea-Ee-Gee**.
I appreciate this project is at an early stage but for practical uses, especially with brands and product names often having quirky ways of saying words or inventing completely new words, it's essential to be able to handle their correct pronunciation on some sort of override basis. It's not just brands - plenty of people's names need custom handling and quite a few novel computer words are non-obvious too.
Examples that cause problems in the current models: **Cillian, Joaquin, Deirdre, Versace, Tag Heuer, Givenchy, gigabytes, RAM, MPEG** etc.
Are there any suggestions on how best to tackle this?
I saw there was #33 which uses a normaliser specifically for numbers. Is there something similar for custom words? I suppose perhaps one could drop in a list of custom words and some sort of mapping to the desired pronunciation, applying that as a stage similar to how it handles abbreviations.
In espeak backed tools, it's sometimes possible to replace words with custom IPA that replaces the default IPA generated but I believe this model doesn't use IPA for controlling pronunciation.
Given the frequently varying pronunciations, I doubt that simply finetuning to include the words would be a viable approach.
Anyway, would be great to hear what others have to recommend.
_Incidentally certain mainstream terms also get completely garbled, it seems impossible to get Instagram, Linux or Wikipedia to be spoken properly, but that's more a training data issue and those are mainstream enough that you wouldn't need to cover them via custom overrides._ | https://github.com/huggingface/parler-tts/issues/47 | open | [] | 2024-05-12T15:51:05Z | 2025-01-03T08:39:58Z | null | nmstoker |
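One concrete way to prototype the override idea discussed above is a normalization pass — in the spirit of the number normalizer from #33 — that rewrites known words to phonetic respellings before the text reaches the model. A hedged sketch; the respellings below are illustrative guesses, not anything shipped by parler-tts:

```python
import re

# Illustrative override table; each respelling is an assumption, not a
# canonical pronunciation shipped by any library.
PRONUNCIATIONS = {
    "JPEG": "jay peg",
    "Joaquin": "wah keen",
    "Versace": "ver sah chay",
}

def normalize(text: str) -> str:
    # \b word boundaries avoid rewriting substrings of longer words.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, PRONUNCIATIONS)) + r")\b")
    return pattern.sub(lambda m: PRONUNCIATIONS[m.group(1)], text)

print(normalize("Save it as a JPEG for Joaquin."))
# Save it as a jay peg for wah keen.
```

An IPA-based override would be more precise, but since this model conditions on plain text, respellings are the lever available.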
huggingface/text-generation-inference | 1,875 | How to share memory among 2 GPUS for distributed inference? | # Environment Setup
Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: https://github.com/huggingface/text-generation-inference/commit/c38a7d7ddd9c612e368adec1ef94583be602fc7e
Docker label: sha-6c4496a
Kubernetes Cluster deployment
2 A100 GPU with 80GB RAM
12 CPU with 32 GB RAM
TGI version: 2.0.0
TGI Parameters:
MAX_INPUT_LENGTH: "8000"
MAX_TOTAL_TOKENS: "8512"
MAX_CONCURRENT_REQUESTS: "128"
LOG_LEVEL: "INFO"
MAX_BATCH_TOTAL_TOKENS: "4294967295"
WAITING_SERVED_RATIO: "0.3"
MAX_WAITING_TOKENS: "0"
MAX_BATCH_PREFILL_TOKENS: "32768"
# Question
I am curious about how to optimize distributed inference for LLMs. I see that in the docs you mention this:
```
### A note on Shared Memory (shm)
[`NCCL`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html) is a communication framework used by `PyTorch` to do distributed training/inference. `text-generation-inference` make use of `NCCL` to enable Tensor Parallelism to dramatically speed up inference for large language models.
In order to share data between the different devices of a `NCCL` group, `NCCL` might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible.
To allow the container to use 1G of Shared Memory and support SHM sharing, we add `--shm-size 1g` on the above command.
If you are running `text-generation-inference` inside `Kubernetes`. You can also add Shared Memory to the container by creating a volume with:
- name: shm
  emptyDir:
    medium: Memory
    sizeLimit: 1Gi
and mounting it to `/dev/shm`.
Finally, you can also disable SHM sharing by using the `NCCL_SHM_DISABLE=1` environment variable. However, note that this will impact performance.
```
We currently have this setup with K8s:
```
- name: m
  emptyDir:
    sizeLimit: 1Gi
    medium: Memory
```
However, I feel like I am missing something.
Say GPU memory size is G, model weight in megabytes is M and free available memory for processing requests is F.
Then, when I deploy a model of size M (where M < G) with SHARDED=True across 2 full GPUs (G_1 and G_2), what I expect is the model weights taking M megabytes from GPU1 (G_1), and then the available/free memory F for processing tokens/requests should be (G_1 - M) + G_2 = F. Right?
Instead what I am seeing is that the model is replicated on both GPUs, so F = (G_1 - M) + (G_2 - M) . I believe this is not what we want. For example with Mistral7b:
| Sharded | GPU 1 | GPU 2 |
| -------- | ----- | ------ |
| False | 66553MiB / 81920MiB 81% used | Does not exist |
| True | 66553MiB / 81920MiB 81% used | 66553MiB / 81920MiB 81% used |
We would like to have the model only on 1 GPU (if it fits) and then use the extra available GPUs just for inference, i.e, increasing our memory budget at processing time by sharing the memory between the left over memory from the GPU where the model weights live and the memory from the GPU without model weights.
This is what makes me think we are not using NCCL correctly, or maybe my assumptions are wrong, and what I am saying is not possible to do?
# Visual description

| https://github.com/huggingface/text-generation-inference/issues/1875 | closed | [
"Stale"
] | 2024-05-10T08:49:05Z | 2024-06-21T01:48:05Z | null | martinigoyanes |
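A hedged note on the arithmetic above: with tensor parallelism, NCCL does not keep the whole model on one GPU and borrow the other GPU's memory — each GPU holds roughly M/N of the weights plus its own share of activations and KV cache. So seeing the *full* M on both GPUs does suggest replication rather than sharding (though nvidia-smi also counts KV-cache pre-allocation). A back-of-the-envelope sketch, where M is an illustrative fp16 7B-model footprint in MiB, not a measured value:

```python
# All values in MiB; an 80 GB A100 is 81920 MiB, as in the nvidia-smi output.
G = 81920          # per-GPU memory
M = 14000          # illustrative fp16 7B-model weight footprint (assumption)

def free_replicated(n_gpus: int) -> int:
    # Every GPU holds a full copy of the weights.
    return n_gpus * (G - M)

def free_tensor_parallel(n_gpus: int) -> int:
    # Weights sharded across GPUs: each holds roughly M / n_gpus.
    return n_gpus * G - M

print(free_replicated(2))
print(free_tensor_parallel(2))
```

Under these numbers sharding frees M extra MiB versus replication; it never yields the "model on GPU1, GPU2 entirely free" layout the issue expected, because TP splits every layer across both devices.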
huggingface/accelerate | 2,759 | How to specify the backend of Trainer | ### System Info
```Shell
accelerate 0.28.0
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
I am running multi-node, multi-GPU training code on two nodes, each with one A100-40GB. I don't have `NCCL` installed on this cluster, so I am trying to use the `gloo` backend to start training. But I didn't find any documentation on how to specify the backend when running `accelerate launch`. Any help would be much appreciated!
Here is my launching script.
```
srun -N 2 -n 2 -w xgpg2,xgpg3 accelerate launch --config_file /tmp/my_dist_config.yaml --gradient_accumulation_steps 8 --gradient_clipping 1.0 --mixed_precision bf16 train.py ...my training arguments..
```
Here is my accelerate config on each node.
```
# `/tmp/my_dist_config.yaml` on xgpg2
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_process_ip: xgpg2
main_process_port: 9999
main_training_function: main
mixed_precision: bf16
num_machines: 2
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
# `/tmp/my_dist_config.yaml` on xgpg3
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 1
main_process_ip: xgpg2
main_process_port: 9999
main_training_function: main
mixed_precision: bf16
num_machines: 2
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Here is the main body of my training code
```
...
tokenizer = load_tokenizer(model_args.tokenizer_dir, train_mode=model_args.do_train)
model = load_model(model_args, quant_config, peft_config)
logger.info(f"Model Architecture:\n{model}")
print_trainable_parameters(model)
trainer = Trainer(
model=model,
train_dataset=train_data,
eval_dataset=eval_data,
args=trainer_config,
data_collator=PaddToMaxLenCollator(tokenizer, model_args.max_length),
)
# Training
if model_args.do_train:
train_result = trainer.train(resume_from_checkpoint=model_args.resume_from_checkpoint)
trainer.log_metrics("train", train_result.metrics)
trainer.save_metrics("train", train_result.metrics)
...
```
I tried to run this directly, but it went into some NCCL error like this:
```
torch.distributed.DistBackendError: NCCL error in: /opt/conda/conda-bld/pytorch_1704987394225/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
```
I think NCCL wasn't installed at the system level by the system administrator, but there is a `nccl` library in my conda environment, which was probably installed as some other library's dependency. I am not familiar with NCCL, but my understanding is that this won't work because NCCL should be installed at the system level. Am I right?
```
# Name Version Build Channel
nccl 2.21.5.1 h3a97aeb_0 conda-forge
```
### Expected behavior
I'd like to know how to use the 'gloo' backend with Trainer, and also whether I can use Trainer's DeepSpeed integration with the gloo backend. | https://github.com/huggingface/accelerate/issues/2759 | closed | [] | 2024-05-10T03:18:08Z | 2025-01-16T10:29:19Z | null | Orion-Zheng |
huggingface/lerobot | 167 | How to install rerun-sdk with Python 3.10 | ### System Info
```Shell
ubuntu18.04
python3.10
ERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)
ERROR: No matching distribution found for rerun-sdk>=0.15.1
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
pip install .
ERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)
ERROR: No matching distribution found for rerun-sdk>=0.15.1
### Expected behavior
I want to know how to solve this problem | https://github.com/huggingface/lerobot/issues/167 | closed | [
"dependencies"
] | 2024-05-10T03:07:30Z | 2024-05-13T01:25:09Z | null | MountainIntelligent |
huggingface/safetensors | 478 | Can't seem to skip parameter initialization while using the `safetensors.torch.load_model` API! | ### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): 2.16.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.2 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.21`
### Reproduction
In order to load a serialized model, I use the `safetensors.torch.load_model` API which requires a `torch.nn.Module` type as the first argument.
I create this model while ensuring that the parameters are **not** initialized since they will get overridden anyway. I do this by using the `init_empty_weights` context manager from the `accelerate` package.
```python
import safetensors.torch
from transformers import LlamaConfig, LlamaForCausalLM
from accelerate import init_empty_weights

config = LlamaConfig()
with init_empty_weights():
    model = LlamaForCausalLM(config)
safetensors.torch.load_model(model, <path-to-file>)  # throws an error
```
The last line throws the error
```
warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
UserWarning: for model.norm.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
```
It turns out that loading the state_dict is a no-op, which could be resolved by using the `assign=True` argument; however, the current API doesn't provide a way to set that. Any ideas on how to overcome this issue?
### Expected behavior
`load_model` API returns a model object where the state_dict is initialized from the stored checkpoint. | https://github.com/huggingface/safetensors/issues/478 | closed | [
"Stale"
] | 2024-05-09T19:12:05Z | 2024-06-15T01:49:24Z | 1 | goelayu |
huggingface/tokenizers | 1,525 | How to write custom Wordpiece class? | My aim is get the rwkv5 model‘s "tokenizer.json",but it implemented through slow tokenizer(class Pretrainedtokenizer).
I want to convert "slow tokenizer" to "fast tokenizer",it needs to use "tokenizer = Tokenizer(Wordpiece())",but rwkv5 has it‘s own Wordpiece file.
So I want to create a custom Wordpiece
the code is here
```python
from tokenizers.models import Model

class MyWordpiece(Model):
    def __init__(self, vocab, unk_token):
        self.vocab = vocab
        self.unk_token = unk_token

test = MyWordpiece('./vocab.txt', "<s>")
```
```
Traceback (most recent call last):
File "test.py", line 78, in <module>
test = MyWordpiece('./vocab.txt',"<s>")
TypeError: Model.__new__() takes 0 positional arguments but 2 were given
``` | https://github.com/huggingface/tokenizers/issues/1525 | closed | [
"Stale"
] | 2024-05-09T03:48:27Z | 2024-07-18T01:53:23Z | null | xinyinan9527 |
huggingface/trl | 1,635 | How to use trl\trainer\kto_trainer.py | If I want to use the KTO loss, I can set the parameter loss_type="kto_pair" in dpo_trainer.py. Then what is kto_trainer.py used for, and how do I use it? | https://github.com/huggingface/trl/issues/1635 | closed | [] | 2024-05-09T02:40:14Z | 2024-06-11T10:17:51Z | null | mazhengyufreedom |
huggingface/datasets | 6,882 | Connection Error When Using By-pass Proxies | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))"
I have already read the documentation provided on the Hugging Face site, but I don't think I saw detailed instructions on how to set up proxies for this library.
### Steps to reproduce the bug
1. Turn on any proxy software like Clash / ShadowsocksR etc.
2. Export the system variables in WSL to the port provided by your proxy software (other applications can use the proxy fine, except the datasets library)
3. Load any dataset from Hugging Face online
### Expected behavior
---------------------------------------------------------------------------
ConnectionError                           Traceback (most recent call last)
Cell In[33], line 3
      1 from datasets import load_metric
----> 3 metric = load_metric("seqeval")
File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
     44 warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)
     45 _emitted_deprecation_warnings.add(func_hash)
---> 46 return deprecated_function(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs)
   2101 warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning)
   2103 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
-> 2104 metric_module = metric_module_factory(
   2105     path,
   2106     revision=revision,
   2107     download_config=download_config,
   2108     download_mode=download_mode,
   2109     trust_remote_code=trust_remote_code,
   2110 ).module_path
   2111 (traceback truncated) | https://github.com/huggingface/datasets/issues/6882 | open | [] | 2024-05-08T06:40:14Z | 2024-05-17T06:38:30Z | 1 | MRNOBODY-ZST |
huggingface/datatrove | 180 | how to turn log/traceback color off? | Trying datatrove for the first time and the program spews a bunch of logs and tracebacks in yellow and cyan which are completely unreadable on the b&w console.
Does the program assume that the user is using a white-on-black (dark) console?
I tried to grep for `color` to see how it controls the colors but found nothing relevant, so it's probably some 3rd party component that does that.
If the coloring logic doesn't bother to check what the console colors are to keep the output readable, any idea how to turn it off completely? I RTFM'ed - didn't find any docs that address that aspect.
Thanks a lot! | https://github.com/huggingface/datatrove/issues/180 | closed | [] | 2024-05-08T03:51:11Z | 2024-05-17T17:53:20Z | null | stas00 |
huggingface/candle | 2,171 | How to run LLama-3 or Phi with more then 4096 prompt tokens? | Could you please show me an example where LLama-3 model used (better GGUF quantized) and initial prompt is more then 4096 tokens long? Or better 16-64K long (for RAG). Currently everything I do ends with error:
In this code:
let logits = model.forward(&input, 0); // input is > 4096 tokens
Error:
narrow invalid args start + len > dim_len: [4096, 64], dim: 0, start: 0, len:4240
Model used:
https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF
Thank you a lot in advance! | https://github.com/huggingface/candle/issues/2171 | open | [] | 2024-05-07T20:15:28Z | 2024-05-07T20:16:13Z | null | baleksey |
huggingface/chat-ui | 1,115 | [v0.8.4] IMPORTANT: Talking to PDFs and general Roadmap? | Hi @nsarrazin
I have a couple of questions that I could not get answers to in the repo and on the web.
1. Is there a plan to enable file uploads (PDFs, etc) so that users can talk to those files? Similar to ChatGPT, Gemini etc?
2. Is there a feature roadmap available somewhere?
Thanks! | https://github.com/huggingface/chat-ui/issues/1115 | open | [] | 2024-05-07T06:10:20Z | 2024-09-10T15:44:16Z | 4 | adhishthite |
huggingface/candle | 2,167 | How to write an Axum SSE function for Candle? | fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> {
use std::io::Write;
self.tokenizer.clear();
let mut tokens = self
.tokenizer
.tokenizer()
.encode(prompt, true)
.map_err(E::msg)?
.get_ids()
.to_vec();
for &t in tokens.iter() {
if let Some(t) = self.tokenizer.next_token(t)? {
print!("{t}")
}
}
std::io::stdout().flush()?;
let mut generated_tokens = 0usize;
let eos_token = match self.tokenizer.get_token("<|endoftext|>") {
Some(token) => token,
None => anyhow::bail!("cannot find the <|endoftext|> token"),
};
let start_gen = std::time::Instant::now();
for index in 0..sample_len {
let context_size = if index > 0 { 1 } else { tokens.len() };
let start_pos = tokens.len().saturating_sub(context_size);
let ctxt = &tokens[start_pos..];
let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?;
let logits = self.model.forward(&input, start_pos)?;
let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?;
let logits = if self.repeat_penalty == 1. {
logits
} else {
let start_at = tokens.len().saturating_sub(self.repeat_last_n);
candle_transformers::utils::apply_repeat_penalty(
&logits,
self.repeat_penalty,
&tokens[start_at..],
)?
};
let next_token = self.logits_processor.sample(&logits)?;
tokens.push(next_token);
generated_tokens += 1;
if next_token == eos_token {
break;
}
if let Some(t) = self.tokenizer.next_token(next_token)? {
print!("{t}");
std::io::stdout().flush()?;
}
}
let dt = start_gen.elapsed();
if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? {
print!("{rest}");
}
std::io::stdout().flush()?;
println!(
"\n{generated_tokens} tokens generated ({:.2} token/s)",
generated_tokens as f64 / dt.as_secs_f64(),
);
Ok(())
}
How can I rewrite the above function to use SSE (server-sent events)? | https://github.com/huggingface/candle/issues/2167 | closed | [] | 2024-05-07T02:38:50Z | 2024-05-08T04:27:14Z | null | sunnyregion
huggingface/optimum | 1,847 | Static Quantization for Seq2Seq models like T5 | I'm currently trying to statically quantize T5, but the Optimum docs (last committed 10 months ago) say that only dynamic quantization is supported, not static. Has anyone tried this before, or has Optimum added anything related recently? Could someone take a look? | https://github.com/huggingface/optimum/issues/1847 | open | [
"question",
"quantization"
] | 2024-05-06T19:34:30Z | 2024-10-14T12:24:28Z | null | NQTri00 |
huggingface/optimum | 1,846 | Low performance of THUDM/chatglm3-6b onnx model | I ran the chatglm3-6b model by exporting it to the ONNX format using a custom ONNX configuration. Although the functionality is correct, the latency of the model is very high, much higher than that of the PyTorch model.
I have attached minimal reproducible code which exports and runs the model. Could someone take a look and suggest how to rectify the performance degradation?
```
from optimum.exporters.onnx import main_export
from transformers import AutoConfig
from optimum.exporters.onnx.config import TextDecoderOnnxConfig,TextDecoderWithPositionIdsOnnxConfig
from optimum.exporters.onnx.base import ConfigBehavior
from optimum.utils import NormalizedTextConfig, DummyPastKeyValuesGenerator
from typing import Dict
import os
import shutil
import time
class ChatGLM2DummyPastKeyValuesGenerator(DummyPastKeyValuesGenerator):
def generate(self, input_name: str, framework: str = "pt"):
past_key_shape = (
self.batch_size,
self.num_attention_heads,
self.hidden_size // self.num_attention_heads,
self.sequence_length,
)
past_value_shape = (
self.batch_size,
self.num_attention_heads,
self.sequence_length,
self.hidden_size // self.num_attention_heads,
)
return [
(
self.random_float_tensor(past_key_shape, framework=framework),
self.random_float_tensor(past_value_shape, framework=framework),
)
for _ in range(self.num_layers)
]
class CustomChatGLM2OnnxConfig(TextDecoderOnnxConfig):
DUMMY_INPUT_GENERATOR_CLASSES = (
ChatGLM2DummyPastKeyValuesGenerator,
) + TextDecoderOnnxConfig.DUMMY_INPUT_GENERATOR_CLASSES
DUMMY_PKV_GENERATOR_CLASS = ChatGLM2DummyPastKeyValuesGenerator
DEFAULT_ONNX_OPSET = 15 # aten::tril operator requires opset>=14
NORMALIZED_CONFIG_CLASS = NormalizedTextConfig.with_args(
hidden_size="hidden_size",
num_layers="num_layers",
num_attention_heads="num_attention_heads",
)
def add_past_key_values(
self, inputs_or_outputs: Dict[str, Dict[int, str]], direction: str
):
if direction not in ["inputs", "outputs"]:
raise ValueError(
f'direction must either be "inputs" or "outputs", but {direction} was given'
)
if direction == "inputs":
decoder_sequence_name = "past_sequence_length"
name = "past_key_values"
else:
decoder_sequence_name = "past_sequence_length + 1"
name = "present"
for i in range(self._normalized_config.num_layers):
inputs_or_outputs[f"{name}.{i}.key"] = {
0: "batch_size",
3: decoder_sequence_name,
}
inputs_or_outputs[f"{name}.{i}.value"] = {
0: "batch_size",
2: decoder_sequence_name,
}
model_id = "THUDM/chatglm3-6b"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
onnx_config = CustomChatGLM2OnnxConfig(
config=config,
task="text-generation",
use_past_in_inputs=False,
)
onnx_config_with_past = CustomChatGLM2OnnxConfig(
config, task="text-generation", use_past=True
)
custom_onnx_configs = {
"model": onnx_config,
}
main_export(
model_id,
output="chatglm",
task="text-generation-with-past",
trust_remote_code=True,
custom_onnx_configs=custom_onnx_configs,
no_post_process=True,
opset=15
)
### Running
from transformers import AutoTokenizer, AutoModelForCausalLM
from optimum.utils import NormalizedTextConfig, NormalizedConfigManager
NormalizedConfigManager._conf["chatglm"] = NormalizedTextConfig
import torch
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
start = time.perf_counter()
inputs = tokenizer("What is the meaning of life?", return_tensors="pt", padding=True)
input_ids = inputs.input_ids
# Generate
generate_ids = model.generate(
input_ids,
max_length=64,
pad_token_id=tokenizer.eos_token_id,
)
# Stop timer
end = time.perf_counter()
generate_time = end - start
# Num of tokens
prompt_tokens = input_ids.shape[1]
num_tokens_out = generate_ids.shape[1]
new_tokens_generated = num_tokens_out - prompt_tokens
time_per_token = (generate_time / new_tokens_generated) * 1e3
print(time_per_token)
``` | https://github.com/huggingface/optimum/issues/1846 | open | [
"inference",
"onnxruntime",
"onnx"
] | 2024-05-06T17:18:58Z | 2024-10-14T12:25:29Z | 0 | tuhinp-amd |
huggingface/dataset-viewer | 2,775 | Support LeRobot datasets? | Currently:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image']
```
eg on https://huggingface.co/datasets/lerobot/aloha_static_towel
Requires datasets to support `VideoFrame` | https://github.com/huggingface/dataset-viewer/issues/2775 | open | [
"question",
"feature request",
"dependencies",
"P2"
] | 2024-05-06T09:16:40Z | 2025-07-24T03:36:41Z | null | severo |
huggingface/peft | 1,712 | how to finetune whisper model with 'initial_prompt' | When using 'initial_prompt', the decoding result of the Whisper v2 model fine-tuned on my data is bad; without it, the result is good.
However, when using 'initial_prompt' with the base Whisper v2 model, the decoding result is also good. Does this mean that if I want to use 'initial_prompt' during decoding, I must also add it during training? | https://github.com/huggingface/peft/issues/1712 | closed | [] | 2024-05-06T06:28:20Z | 2024-06-13T15:03:43Z | null | zyb8543d
huggingface/dataspeech | 17 | UnboundLocalError: cannot access local variable 't' where it is not associated with a value """ | ### What I did
Hello. I tried to annotate my own dataset and got an error that I don't understand.
I'm a newbie, and I'm generally unable to understand what happened or why it happened.
I am attaching all the materials that I have.
I have CSV-Scheme
| audio | text | speeker_id |
| ------------- | ------------- | ------------- |
| ./audio/audio_427.wav | Текст на кириллице | 1111 |
I upload the CSV and cast it as described in the documentation,
then upload it to the Hugging Face Hub and start dataspeech with the arguments below.
It loaded the data, started doing something, and then it just stopped.
### How I grouped the dataset
```sh
python group_dataset.py from_audio to_csv
```
Output: it saves `datasets.csv`:
```csv
./audio/audio_427.wav, а затем базальта!. ,1111
./audio/audio_231.wav, razus!. ,1111
```
#### Cast and upload dataset to HG
```sh
python group_dataset.py from_csv cast_audio push_to_hub
```
```py
# In short it does this >
df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(32000))
df.push_to_hub(repo_id="", token="")
```
### Start dataspeech
```sh
python main.py "Anioji/testra" \
--configuration "default" \
--output_dir /root/dataspeech/tmp_stone_base/ \
--text_column_name "text_original" \
--audio_column_name "audio" \
--cpu_num_workers 4 \
--num_workers_per_gpu 4 \
--rename_column \
```
### Tracelog
```python
/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend("soundfile")
WARNING - torchvision is not available - cannot save figures
Compute speaking rate
Compute snr and reverb
Map (num_proc=4): 0%| | 0/534 [00:00<?, ? examples/s]/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend("soundfile")
/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
torchaudio.set_audio_backend("soundfile")
WARNING - torchvision is not available - cannot save figures
WARNING - torchvision is not available - cannot save figures
INFO - Lightning automatically upgraded your loaded checkpoint from v1.6.5 to v2.2.2. To apply the upgrade to your files permanently, run `python -m pytorch_lightning.utilities.upgrade_checkpoint ../.cache/huggingface/hub/models--ylacombe--brouhaha-best/snapshots/99bf97b13fd4dda2434a6f7c50855933076f2937/best.ckpt`
Model was trained with pyannote.audio 0.0.1, yours is 3.1.1. Bad things might happen unless you revert pyannote.audio to 0.x.
Model was trained with torch 1.12.1+cu102, yours is 2.2.2+cu121. Bad things might happen unless you revert torch to 1.x.
Using default parameters optimized on Brouhaha
Map (num_proc=4): 3%|█▏ | 16/534 [00:08<04:39, 1.85 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 6%|██▍ | 32/534 [00:09<02:00, 4.16 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 9%|███▋ | 48/534 [00:09<01:10, 6.91 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 12%|████▉ | 64/534 [00:10<00:46, 10.02 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 15%|██████▏ | 80/534 [00:10<00:35, 12.97 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 18%|███████▎ | 96/534 [00:11<00:28, 15.57 examples/s]Using default parameters optimized on Brouhaha
Map (num_proc=4): 18%|███████▎ | 96/534 [00:12<00:57, 7.58 examples/s]
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/root/dataspeech/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/root/dataspeech/venv/lib/python3.11/site-packages/datasets/utils/py_utils.py", line 675, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/root/dataspeech/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3547, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/dataspeech/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3416, in apply_function_on_filtered_inputs | https://github.com/huggingface/dataspeech/issues/17 | closed | [] | 2024-05-05T20:49:26Z | 2024-05-28T11:31:37Z | null | anioji |
huggingface/parler-tts | 38 | How to use the Mozilla Common Voice dataset to train Parler-TTS | How can I use the Mozilla Common Voice dataset to train Parler-TTS? Can you help me? | https://github.com/huggingface/parler-tts/issues/38 | open | [] | 2024-05-04T12:36:30Z | 2024-05-04T12:36:30Z | null | herbiel
huggingface/setfit | 519 | how to optimize setfit inference | Hi,
I'm currently investigating what options we have to optimize SetFit inference, and have a few questions about it:
- gpu:
- torch compile: https://huggingface.co/docs/transformers/en/perf_torch_compile
is the following the only way to use setfit with torch.compile?
```
model.model_body[0].auto_model = torch.compile(model.model_body[0].auto_model)
```
info above was provided by Tom Aarsen.
does torch.compile also work for cpu? edit: looks like it should work for cpu too...
https://pytorch.org/docs/stable/generated/torch.compile.html
does torch compile change anything about the accuracy of the model inference?
i see different modes here:
Can be either “default”, “reduce-overhead”, “max-autotune” or “max-autotune-no-cudagraphs” ... so far reduce-overhead gives best results....
- cpu:
what are the options to optimize cpu inference?
- BetterTransformer: https://huggingface.co/docs/transformers/en/perf_infer_cpu
Is BetterTransformer really not available for SetFit? I don't see SetFit in this list: https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models
Are there any other resources to speed up SetFit model inference? Where can you run a SetFit model other than TorchServe?
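On the torch.compile modes question, a minimal sketch of passing a `mode` (using a stand-in module here; wiring it into `model.model_body[0].auto_model` follows the snippet above, and the choice of stand-in is my assumption). Note that `torch.compile` returns immediately; the actual compilation is deferred until the first forward call, so it works the same on CPU and GPU at wrap time:

```python
import torch
import torch.nn as nn

# Stand-in for model.model_body[0].auto_model -- any nn.Module is wrapped the same way.
body = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Returns an OptimizedModule wrapper right away; compilation happens lazily
# on the first forward call with real inputs.
compiled = torch.compile(body, mode="reduce-overhead")

print(type(compiled).__name__)  # OptimizedModule
```

The `mode` strings ("default", "reduce-overhead", "max-autotune", "max-autotune-no-cudagraphs") trade compile time for runtime speed; numerics can differ slightly from eager due to fused kernels, but accuracy is generally expected to be preserved.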
Thanks,
Gerald
| https://github.com/huggingface/setfit/issues/519 | closed | [] | 2024-05-03T19:19:21Z | 2024-06-02T20:30:34Z | null | geraldstanje |
huggingface/chat-ui | 1,097 | Katex fails to render math expressions from ChatGPT4. | I am using Chat UI version 0.8.3 and ChatGPT version gpt-4-turbo-2024-04-09.
ChatGPT outputs formula delimiters as `\[`, `\]`, `\(`, `\)`, and KaTeX in the current version of Chat UI does not render them correctly. Based on my experiments, KaTeX only renders formulas with `$` delimiters correctly.
I did a quick test with the following prompts:
```echo following text as is: \[ D_i \]``` <- Fail to render
```echo following text as is: $ D_i $``` <- Successful
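As a stopgap illustration (in Python; the actual Chat UI fix would live in its TypeScript markdown pipeline, and the function name here is made up), the `\[ \]` / `\( \)` delimiters can be normalized to `$`-style ones before rendering with a small regex pass:

```python
import re

def normalize_math_delimiters(text: str) -> str:
    """Convert \\[ ... \\] to $$ ... $$ and \\( ... \\) to $ ... $."""
    text = re.sub(r"\\\[(.*?)\\\]", r"$$\1$$", text, flags=re.DOTALL)
    text = re.sub(r"\\\((.*?)\\\)", r"$\1$", text, flags=re.DOTALL)
    return text

print(normalize_math_delimiters(r"echo following text as is: \[ D_i \]"))
# echo following text as is: $$ D_i $$
```

This is only a sketch; a production fix would need to skip code blocks and escaped delimiters.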
Thank you in advance. | https://github.com/huggingface/chat-ui/issues/1097 | closed | [
"bug",
"help wanted",
"front"
] | 2024-05-03T08:19:40Z | 2024-11-22T12:18:44Z | 5 | haje01 |
huggingface/chat-ui | 1,096 | error in login redirect | I am running chat-ui on an online VPS (Ubuntu 22).
I am stuck at the login redirect.
I went through the Google authorization page, confirmed my Gmail account, and was then redirected back to my main domain.
The problem is that it simply comes back with no action, not logged in, and the URL looks like this:
mydomain.com/login/callback?state=xxxxxxxxx
When I try again, it redirects me to my main domain with a 500 internal error.
Is there something that I missed in the .env file?
Here is part of my .env file:
COOKIE_NAME=SP-chat
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxx
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENID_CONFIG=`{
"PROVIDER_URL": "https://accounts.google.com",
"CLIENT_ID": "xxxxxxxxxxx.apps.googleusercontent.com",
"CLIENT_SECRET": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"SCOPES": "",
"NAME_CLAIM": ""
}`
USE_CLIENT_CERTIFICATE=false
CERT_PATH=/etc/letsencrypt/live/xxxxxxxxxx/fullchain.pem
KEY_PATH=/etc/letsencrypt/live/xxxxxxxxxx/privkey.pem
CA_PATH=#
CLIENT_KEY_PASSWORD=#
REJECT_UNAUTHORIZED=true
PUBLIC_ORIGIN=https://xxxxxxxxxx.com
PUBLIC_SHARE_PREFIX=https://xxxxxxxxx.com/
PUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable
PUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable | https://github.com/huggingface/chat-ui/issues/1096 | open | [
"support"
] | 2024-05-02T22:19:13Z | 2024-05-07T20:50:28Z | 0 | abdalladorrah |
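Regarding the empty `SCOPES` value in the `OPENID_CONFIG` above: a guess worth checking (the exact defaults are an assumption on my part) is that Google's OIDC endpoint needs at least the `openid` scope, so leaving `SCOPES` empty may break the callback. Something like:

```
OPENID_CONFIG=`{
  "PROVIDER_URL": "https://accounts.google.com",
  "CLIENT_ID": "xxxxxxxxxxx.apps.googleusercontent.com",
  "CLIENT_SECRET": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "SCOPES": "openid profile email"
}`
```

It is also worth confirming that `PUBLIC_ORIGIN` exactly matches the redirect URI registered in the Google console.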
huggingface/trl | 1,614 | How to do fp16 training with PPOTrainer? | I modified the example from the official website to do PPO training with Llama 3 using LoRA. When I use fp16, the weights go to NaN after the first update, which does not occur when using fp32.
Here is the code:
```python
# 0. imports
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer
from copy import deepcopy
from peft import LoraConfig, TaskType, get_peft_model
from accelerate import Accelerator  # needed below for local_process_index
# 1. load a pretrained model
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
current_device = Accelerator().local_process_index
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.float16,
trust_remote_code=True,
attn_implementation="flash_attention_2",
)
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
inference_mode=False,
r=8,
target_modules=["q_proj", "v_proj"],
lora_alpha=16,
lora_dropout=0,
)
model = get_peft_model(model, lora_config)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model)
model_ref = deepcopy(model).eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# 2. initialize trainer
ppo_config = {"mini_batch_size": 1, "batch_size": 1}
config = PPOConfig(**ppo_config)
ppo_trainer = PPOTrainer(config, model, model_ref, tokenizer)
# 3. encode a query
query_txt = "This morning I went to the "
query_tensor = tokenizer.encode(query_txt, return_tensors="pt").to(
model.pretrained_model.device
)
# 4. generate model response
generation_kwargs = {
"min_length": -1,
"top_k": 0.0,
"top_p": 1.0,
"do_sample": True,
"pad_token_id": tokenizer.eos_token_id,
"max_new_tokens": 20,
}
response_tensor = ppo_trainer.generate(
[item for item in query_tensor], return_prompt=False, **generation_kwargs
)
response_txt = tokenizer.decode(response_tensor[0])
# 5. define a reward for response
# (this could be any reward such as human feedback or output from another model)
reward = [torch.tensor(1.0, device=model.pretrained_model.device)]
# 6. train model with ppo
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)
```
What is the correct way to do fp16 ppo training? | https://github.com/huggingface/trl/issues/1614 | closed | [] | 2024-05-02T17:52:16Z | 2024-11-18T08:28:08Z | null | KwanWaiChung |
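One stabilization that is often suggested for half-precision RL fine-tuning (an assumption here, not a verified fix for this exact setup) is to keep numerically sensitive parts, such as the value head and layer norms, in fp32 while the rest stays fp16. A minimal sketch of the cast, using stand-in modules:

```python
import torch
import torch.nn as nn

# Stand-ins: in the real setup these would be the fp16 policy backbone and its
# value head (e.g. model.v_head on AutoModelForCausalLMWithValueHead -- an
# assumption about where the instability originates).
backbone = nn.Linear(8, 8).half()
value_head = nn.Linear(8, 1).half()

# Upcast only the value head so its forward, gradients, and updates run in fp32;
# activations entering it would be upcast with .float() at the boundary.
value_head = value_head.float()

print(backbone.weight.dtype, value_head.weight.dtype)
# torch.float16 torch.float32
```

Alternatives worth trying under the same reasoning: bf16 instead of fp16 (wider dynamic range), or keeping master weights in fp32 via a grad scaler.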
huggingface/optimum | 1,843 | Support for speech to text models. | ### Feature request
Hi, it would be really useful if text-to-speech models could be supported by Optimum, specifically export to ONNX. I saw a repo that managed to do it, and they claimed they used Optimum to do it.
https://huggingface.co/Xenova/speecht5_tts
Is there a way to do this?
### Motivation
I am finding it very difficult to convert any text-to-speech models to the ONNX format, and this would be very useful both for optimising how they are served and possibly for running them with Transformers.js.
### Your contribution
I don't think I would be able to do this myself unfortunately. | https://github.com/huggingface/optimum/issues/1843 | open | [
"feature-request",
"onnx"
] | 2024-05-02T11:43:49Z | 2024-10-14T12:25:52Z | 0 | JamesBowerXanda |
huggingface/datasets | 6,854 | Wrong example of usage when config name is missing for community script-datasets | As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs". | https://github.com/huggingface/datasets/issues/6854 | closed | [
"bug"
] | 2024-05-02T06:59:39Z | 2024-05-03T15:51:59Z | 0 | albertvillanova |
huggingface/distil-whisper | 130 | How to set the target language for examples in README? | The code examples in the README do not make it obvious how to set the language of the audio to transcribe.
The default settings produce garbled English text if the audio language is different.
huggingface/transformers | 30,596 | AutoModel: how to enable TP for extremely large models? | Hi, I have 8 V100s, but a single one cannot fit the InternVL 1.5 model, which has 28B parameters.
So I wonder: can I fit the model across the 8 V100s with TP?
I found that DeepSpeed can be used to do tensor parallelism like this:
```
# create the model
if args.pre_load_checkpoint:
model = model_class.from_pretrained(args.model_name_or_path)
else:
model = model_class()
...
import deepspeed
# Initialize the DeepSpeed-Inference engine
ds_engine = deepspeed.init_inference(model,
tensor_parallel={"tp_size": 2},
dtype=torch.half,
checkpoint=None if args.pre_load_checkpoint else args.checkpoint_json,
replace_with_kernel_inject=True)
model = ds_engine.module
output = model('Input String')
```
I didn't succeed, because it only supports built-in models that can be imported; for custom models that have to be loaded with `from_pretrained`, it does not work.
But as I mentioned at the start, a single V100 will OOM when loading the model.
Is there any convenient way to load a customized HF model with TP enabled? | https://github.com/huggingface/transformers/issues/30596 | closed | [] | 2024-05-01T10:06:45Z | 2024-06-09T08:03:23Z | null | MonolithFoundation
huggingface/transformers | 30,595 | I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^ | ### System Info
I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Who can help?
I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Expected behavior
I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^ | https://github.com/huggingface/transformers/issues/30595 | closed | [] | 2024-05-01T09:17:58Z | 2024-05-01T09:31:39Z | null | ldh127
huggingface/transformers.js | 732 | What does "Error: failed to call OrtRun(). error code = 6." mean? I know it is ONNX related, but how to fix? | ### Question
I keep running into the same issue when using transformers.js Automatic Speech Recognition pipeline. I've tried solving it multiple ways. But pretty much hit a wall every time. I've done lots of googling, LLMs, and used my prior knowledge of how this stuff functions in python. But I can't seem to get it to work.
I've tried setting up my environment with and without vite. I've tried with react javascript. I've tried with with react typescript. Nothing.
Am i missing a dependency or something? is there a place I can find what the error code means? because I couldn't find it anywhere.
I've fed it an array. I've fed it a .wav file. Nothing works. No matter what I do. No matter if it's an array or a wav file. I always get the same error:
```
An error occurred during model execution: "Error: failed to call OrtRun(). error code = 6.".
Inputs given to model: {input_features: Proxy(Tensor)}
Error transcribing audio: Error: failed to call OrtRun(). error code = 6.
at e.run (wasm-core-impl.ts:392:1)
at e.run (proxy-wrapper.ts:212:1)
at e.OnnxruntimeWebAssemblySessionHandler.run (session-handler.ts:99:1)
at InferenceSession.run (inference-session-impl.ts:108:1)
at sessionRun (models.js:207:1)
at encoderForward (models.js:520:1)
at Function.seq2seqForward [as _forward] (models.js:361:1)
at Function.forward (models.js:820:1)
at Function.seq2seqRunBeam [as _runBeam] (models.js:480:1)
at Function.runBeam (models.js:1373:1)
```
It seems to be an ONNX Runtime issue, but I don't know how to fix it. Any guidance will be appreciated.
Note: I'm currently testing with English. Nothing fancy. | https://github.com/huggingface/transformers.js/issues/732 | closed | [
"question"
] | 2024-05-01T07:01:06Z | 2024-05-11T09:18:35Z | null | jquintanilla4 |
huggingface/transformers | 30,591 | I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^ | ### Feature request
I cannot find the code where the Transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(DeepSpeed(transformers model)), but I only find the code where the transformers model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Motivation
x
### Your contribution
x | https://github.com/huggingface/transformers/issues/30591 | closed | [] | 2024-05-01T04:27:47Z | 2024-06-08T08:03:17Z | null | ldh127 |
huggingface/chat-ui | 1,093 | I want to get the html of a website https://bit.ly/4bgmLb9 in huggingchat web search | I want to get the HTML of a website (https://bit.ly/4bgmLb9) with HuggingChat web search. In Chrome, I can put https://bit.ly/4bgmLb9 in the address bar and get the result, but I do not know how to do that with HuggingChat web search.
I tried it in HuggingChat; here is a screenshot:

How should I write the prompt so that HuggingChat can fulfill this requirement? | https://github.com/huggingface/chat-ui/issues/1093 | closed | [] | 2024-05-01T03:00:29Z | 2024-05-02T14:26:16Z | 1 | ghost
huggingface/dataset-viewer | 2,756 | Upgrade pyarrow to 16? | Release notes here: https://arrow.apache.org/blog/2024/04/20/16.0.0-release/
Are we affected by any change? Does it enable something for us? | https://github.com/huggingface/dataset-viewer/issues/2756 | open | [
"question",
"dependencies",
"P2"
] | 2024-04-30T10:20:45Z | 2024-04-30T16:19:31Z | null | severo |
huggingface/peft | 1,693 | How to convert a loha safetensor trained from diffusers to webui format | Hello, when I fine-tune SDXL (actually InstantID) with PEFT methods, I use LoRA, LoHa and LoKr in [diffusers](https://github.com/huggingface/diffusers).
I have a question: how do I convert a LoHa safetensors file trained with diffusers to the webui format?
In the training process, the loading code is:
```py
peft_config = LoHaConfig(
    r=args.rank,
    alpha=args.rank // 2,
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet = get_peft_model(unet, peft_config)
```
When the training process finished, I saved it with:
`unet.save_pretrained(args.output_dir)`
and I get the safetensor as

But [webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui/) can't recognize it, so I can't use it in webui.
How can I fix this problem?
| https://github.com/huggingface/peft/issues/1693 | closed | [] | 2024-04-30T07:17:48Z | 2024-06-08T15:03:44Z | null | JIAOJIAYUASD |
huggingface/safetensors | 474 | How to fully load checkpointed weights in memory? | ### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): 2.16.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.2 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.21
### Reproduction
1. Load a checkpointed `.safetensor` file using `safetensors.torch.load_file` API in the CPU memory.
2. Negligible increase in the CPU memory usage
### Expected behavior
The CPU memory should increase by exactly the size of the file being read.
I think the negligible increase in the CPU memory might be the expected behavior, due to safetensors' lazy loading feature? However if I want to load the entire model in host memory, is there another way to do that? I am running some benchmarks with safetensor APIs, and need to ensure that the model is fully loaded in the CPU memory. | https://github.com/huggingface/safetensors/issues/474 | closed | [] | 2024-04-29T21:30:37Z | 2024-04-30T22:12:29Z | null | goelayu |
huggingface/dataset-viewer | 2,754 | Return partial dataset-hub-cache instead of error? | `dataset-hub-cache` depends on multiple previous steps, and any error in one of them makes it fail. It provokes things like https://github.com/huggingface/moon-landing/issues/9799 (internal): in the datasets list, a dataset is not marked as "supporting the dataset viewer", whereas the only issue is that we didn't manage to list the compatible libraries, to create the tags.
https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/dataset/hub_cache.py
In this case, we could return a partial response, or maybe return an empty list of libraries or modalities if we have an error.
What do you think @lhoestq?
| https://github.com/huggingface/dataset-viewer/issues/2754 | closed | [
"question",
"P2"
] | 2024-04-29T17:10:09Z | 2024-06-13T13:57:20Z | null | severo |
huggingface/datasets | 6,848 | Can't Download Common Voice 17.0 hy-AM | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
/usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Reading metadata...: 6180it [00:00, 133224.37it/s]les/s]
Generating train split: 0 examples [00:00, ? examples/s]
HuggingFace datasets failed due to some reason (stack trace below).
For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`).
Once logged in, you need to set `use_auth_token=True` when calling this script.
Traceback error for reference :
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single
example = self.info.features.encode_example(record) if self.info.features is not None else record
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example
return encode_nested_example(self, example)
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example
{
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp>
{
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'sentence_id'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main
dataset = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM")
```
### Expected behavior
It works fine with common_voice_16_1
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | https://github.com/huggingface/datasets/issues/6848 | open | [] | 2024-04-29T10:06:02Z | 2025-04-01T20:48:09Z | 3 | mheryerznkanyan |
huggingface/optimum | 1,839 | Why does ORTModelForCausalLM assume the new input length is 1 when past_key_values is passed? | https://github.com/huggingface/optimum/blob/c55f8824f58db1a2f1cfc7879451b4743b8f206b/optimum/onnxruntime/modeling_decoder.py#L649
``` python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
if past_key_values is not None:
past_length = past_key_values[0][0].shape[2]
# Some generation methods already pass only the last input ID
if input_ids.shape[1] > past_length:
remove_prefix_length = past_length
else:
# Default to old behavior: keep only final ID
remove_prefix_length = input_ids.shape[1] - 1
input_ids = input_ids[:, remove_prefix_length:]
```
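To restate the ORT rule above in isolation, here is a plain-Python sketch I wrote while reading it (a list stands in for the `input_ids` tensor; this is my paraphrase, not library code):

```python
def prepare_ids(input_ids, past_length=None):
    # Mirrors the ORT branch above: when the cache already covers the whole
    # input, fall back to keeping only the final ID.
    if past_length is None:
        return input_ids
    if len(input_ids) > past_length:
        return input_ids[past_length:]
    return input_ids[-1:]  # "keep only final ID"

print(prepare_ids([10, 11, 12, 13], past_length=4))  # [13]
print(prepare_ids([10, 11, 12, 13], past_length=2))  # [12, 13]
```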
while in the non-ONNX modeling code, it is not.
https://github.com/huggingface/transformers/blob/a98c41798cf6ed99e1ff17e3792d6e06a2ff2ff3/src/transformers/models/mistral/modeling_mistral.py#L1217
```python
# Keep only the unprocessed tokens:
# 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
# some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
# input)
if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
# 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
# input_ids based on the past_length.
elif past_length < input_ids.shape[1]:
input_ids = input_ids[:, past_length:]
``` | https://github.com/huggingface/optimum/issues/1839 | open | [
"question",
"onnxruntime"
] | 2024-04-29T07:06:04Z | 2024-10-14T12:28:51Z | null | cyh-ustc |
huggingface/diffusers | 7,813 | I am confused about this TODO. How can timesteps be passed as tensors? | https://github.com/huggingface/diffusers/blob/235d34cf567e78bf958344d3132bb018a8580295/src/diffusers/models/unets/unet_2d_condition.py#L918
| https://github.com/huggingface/diffusers/issues/7813 | closed | [
"stale"
] | 2024-04-29T03:46:21Z | 2024-11-23T00:19:17Z | null | ghost |
huggingface/datasets | 6,846 | Unimaginable super slow iteration | ### Describe the bug
Assuming there is a dataset with 52,000 sentences, each with a length of 500, it takes ~20 seconds to extract a single sentence from the dataset. Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
num_cols = 500
random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
s=time.time()
d={'random_input':random_input,'random_output':random_output}
dataset=datasets.Dataset.from_dict(d)
print('from dict',time.time()-s)
print(dataset)
for i in range(len(dataset)):
aa=time.time()
a,b=dataset['random_input'][i],dataset['random_output'][i]
print(time.time()-aa)
```
corresponding output
```bash
from dict 9.215498685836792
Dataset({
features: ['random_input', 'random_output'],
num_rows: 52000
})
19.129778146743774
19.329464197158813
19.27668261528015
19.28557538986206
19.247620582580566
19.624247074127197
19.28673791885376
19.301053047180176
19.290496110916138
19.291821718215942
19.357765197753906
```
### Expected behavior
Under normal circumstances, iteration should be very fast, since nothing is being done beyond retrieving items.
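For comparison, here is a pure-Python sketch of the access-pattern difference I suspect (plain dicts stand in for the Arrow-backed dataset; this is a guess about the cause, not the library's actual internals):

```python
import time

# Plain dicts stand in for dataset rows; each "sentence" is a 500-int list
# (shared, to keep memory small; sharing doesn't change the access cost).
sentence = [0] * 500
rows = [{"random_input": sentence, "random_output": sentence} for _ in range(52_000)]

def column(name):
    # Mimics dataset[name]: the entire column is built before indexing into it.
    return [r[name] for r in rows]

t0 = time.perf_counter()
for i in range(3):
    a, b = column("random_input")[i], column("random_output")[i]  # O(num_rows) per lookup
slow = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(3):
    a, b = rows[i]["random_input"], rows[i]["random_output"]  # O(1) per lookup
fast = time.perf_counter() - t0

print(slow > fast)  # True: row-first access avoids rebuilding the column
```

With the real library, the analogous change would be row-first indexing such as `dataset[i]['random_input']`, if the cause is what I suspect.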
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | https://github.com/huggingface/datasets/issues/6846 | closed | [] | 2024-04-28T05:24:14Z | 2024-05-06T08:30:03Z | 1 | rangehow |
huggingface/lerobot | 112 | Do we want to use `transformers`? | I'd really go against establishing transformers as a dependency of lerobot and importing their whole library just to use the `PretrainedConfig` (or even other components). I think in this case it's very overkill and wouldn't necessarily fit our needs right now. The class is ~1000 lines of code - which we can copy into our lib anyway - and looks way more mature and feature-rich than what — IMO — we need and have with the rest of our code base.
Copying code is even part of [Transformers' philosophy](https://huggingface.co/blog/transformers-design-philosophy) — which we *do* copy.
_Originally posted by @aliberts in https://github.com/huggingface/lerobot/pull/101#discussion_r1581860998_
| https://github.com/huggingface/lerobot/issues/112 | closed | [
"question"
] | 2024-04-27T17:24:20Z | 2024-04-30T11:59:25Z | null | qgallouedec |
huggingface/evaluate | 582 | How to pass generation_kwargs to the TextGeneration evaluator ? | How can I pass the generation_kwargs to TextGeneration evaluator ? | https://github.com/huggingface/evaluate/issues/582 | open | [] | 2024-04-25T16:09:46Z | 2024-04-25T16:09:46Z | null | swarnava112 |
huggingface/chat-ui | 1,074 | 503 error | Hello, I was trying to install the chat-ui
I searched for documentation on how to handle this on my VPS.
After building I get a 500 error, and it is not working with HTTPS although `allow_insecure=false` | https://github.com/huggingface/chat-ui/issues/1074 | closed | [
"support"
] | 2024-04-25T15:34:07Z | 2024-04-27T14:58:45Z | 1 | abdalladorrah |
huggingface/chat-ui | 1,073 | Support for Llama-3-8B-Instruct model | hi,
The model meta-llama/Meta-Llama-3-8B-Instruct is unlisted; do you know when it will be supported?
https://github.com/huggingface/chat-ui/blob/3d83131e5d03e8942f9978bf595a7caca5e2b3cd/.env.template#L229
thanks. | https://github.com/huggingface/chat-ui/issues/1073 | open | [
"question",
"models",
"huggingchat"
] | 2024-04-25T14:03:35Z | 2024-04-30T05:47:05Z | null | cszhz |
huggingface/chat-ui | 1,072 | [v0.8.3] serper, serpstack API, local web search not working | ## Context
I have a serper.dev API key and a serpstack API key, and I have put them correctly in my `.env.local` file.
<img width="478" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/5082893a-7ecd-4ab5-9cb9-059875118dcd">
## Issue
However, even if I enable Web Search, it still does not reach out to those APIs, and shows me "an error occured" on the Web Search part.
<img width="931" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/da96c121-89e0-402b-8e93-33c9e6709c71">
I don't see calls reaching Serper and SerpStack as well.
<img width="1365" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/7230b1a0-2567-424f-8884-8fc53417fa41">
<img width="1302" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/b35c1a7f-1c2c-4c8a-9c46-5c2171f73f9b">
It was working for a bit on `v0.8.2`, but then it stopped working there as well. Now, on `v0.8.3`, it's not working at all. Am I missing something? I have also tried using either of those APIs on its own, but it still does not work.
Please help. | https://github.com/huggingface/chat-ui/issues/1072 | closed | [
"support"
] | 2024-04-25T13:24:40Z | 2024-05-09T16:28:15Z | 14 | adhishthite |
huggingface/diffusers | 7,775 | How to input gradio settings in Python | Hi.
I use **realisticStockPhoto_v20** on Fooocus with **sdxl_film_photography_style** lora and I really like the results.
Fooocus and other gradio implementations come with settings inputs that I want to utilize in Python as well. In particular, if this is my code:
```
device = "cuda"
model_path = "weights/realisticStockPhoto_v20.safetensors"
pipe = StableDiffusionXLInpaintPipeline.from_single_file(
model_path,
torch_dtype=torch.float16,
num_in_channels=4).to(device)
pipe.load_lora_weights(".", weight_name="weights/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors", adapter_name="film")
```
how can I set the following settings/parameters in code?
- Negative Prompt
- Preset (initial, lcm, default, lighting, realistic, sai, anime)
- Performance (quality, speed, extreme speed, lightning)
- width-height
- image number
- output format
- Style (Fooocus v2, fooocus photography, fooocus negative, foocus enhance, etc.)
- Base Model
- Refiner
- Lora 1,2,3,4,5,...
- Guidance scale
- Image sharpness | https://github.com/huggingface/diffusers/issues/7775 | closed | [] | 2024-04-25T08:43:20Z | 2024-11-20T00:07:26Z | null | levoz92 |
huggingface/chat-ui | 1,069 | CohereForAI ChatTemplate | Now that there is official TGI support in CohereForAI/c4ai-command-r-v01, how do I use the chat template found in the tokenizer config with the UI? Alternatively, is it possible to add the correct template for Cohere to PROMPTS.md? | https://github.com/huggingface/chat-ui/issues/1069 | open | [] | 2024-04-25T05:45:35Z | 2024-04-25T05:45:35Z | 0 | yanivshimoni89 |
huggingface/transformers.js | 727 | Preferred citation of Transformers.js | ### Question
Love the package, and am using it in research - I am wondering, does there exist a preferred citation format for the package to cite it in papers? | https://github.com/huggingface/transformers.js/issues/727 | open | [
"question"
] | 2024-04-24T23:07:20Z | 2024-04-24T23:21:13Z | null | ludgerpaehler |
huggingface/diarizers | 4 | How to save the finetuned model as a .bin file? | Hi,
I fine-tuned the pyannote-segmentation model for my use case, but it is saved as a model.safetensors file. Can I convert it to a pytorch_model.bin file? I am using whisperx to create speaker-aware transcripts, and .safetensors isn't working with that library. Thanks! | https://github.com/huggingface/diarizers/issues/4 | closed | [] | 2024-04-24T20:50:19Z | 2024-04-30T21:02:32Z | null | anuragrawal2024 |
huggingface/transformers.js | 725 | How to choose a language's dialect when using `automatic-speech-recognition` pipeline? | ### Question
Hi, so I was originally using the transformers library (Python version) in my backend, but when refactoring my application for scale, it made more sense to move my implementation of Whisper from the backend to the frontend (for my specific use case). So I was thrilled when I saw that transformers.js supported Whisper via the `automatic-speech-recognition` pipeline. However, I'm a little confused by the implementation, and the documentation left me with the question in the title.
How to choose a language's dialect when using `automatic-speech-recognition` pipeline?
In the python implementation of whisper, you don't have to specify the language being spoken as long as you're using the correct model size for multilingual support. But from your examples on transformers.js, it seems like you do in the js implementation.
```
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/french-audio.mp3';
const output = await transcriber(url, { language: 'french', task: 'transcribe' });
// { text: " J'adore, j'aime, je n'aime pas, je déteste." }
```
However, there's no list of supported languages beyond what you can find in the Whisper GitHub repo. That's usually not a problem, but how do you deal with a language like Chinese, which has two main dialects: Mandarin and Cantonese? In Python, I didn't have to worry about it, but in JS, it seems to be a potential issue.
Please help. Any guidance will be appreciated. | https://github.com/huggingface/transformers.js/issues/725 | closed | [
"question"
] | 2024-04-24T09:44:38Z | 2025-11-06T20:36:01Z | null | jquintanilla4 |
huggingface/text-embeddings-inference | 248 | How to support GPU (CUDA) version 10.1 rather than 12.2 | ### Feature request
How to support GPU (CUDA) version 10.1 rather than 12.2?
### Motivation
How to support GPU (CUDA) version 10.1 rather than 12.2?
### Your contribution
How to support GPU (CUDA) version 10.1 rather than 12.2?
huggingface/diffusers | 7,766 | IP-Adapter FaceID Plus usage questions | https://github.com/huggingface/diffusers/blob/9ef43f38d43217f690e222a4ce0239c6a24af981/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L492
## error msg:
pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
AttributeError: 'list' object has no attribute 'to'
hi!
I'm having some problems using the ip adapter FaceID PLus. Can you help me answer these questions? Thank you very much
1. First question: what should I pass as the `ip_adapter_image` parameter of the `prepare_ip_adapter_image_embeds` function?
2. Second question: what problems are caused by the mismatch between the following code in the merge link below and the example in the ip_adapter.md file?
this is merge link:
https://github.com/huggingface/diffusers/pull/7186#issuecomment-1986961595
Differential code:
```
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda")
```
@yiyixuxu @fabiorigano
## os:
diffusers==diffusers-0.28.0.dev0
## this is my code:
```
# @FileName:StableDiffusionIpAdapterFaceIDTest.py
# @Description:
# @Author:dyh
# @Time:2024/4/24 11:45
# @Website:www.xxx.com
# @Version:V1.0
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from insightface.app import FaceAnalysis
from transformers import CLIPVisionModelWithProjection
model_path = '../../../aidazuo/models/Stable-diffusion/stable-diffusion-v1-5'
clip_path = '../../../aidazuo/models/CLIP-ViT-H-14-laion2B-s32B-b79K'
ip_adapter_path = '../../../aidazuo/models/IP-Adapter-FaceID'
ip_img_path = '../../../aidazuo/jupyter-script/test-img/vermeer.png'
def extract_face_features(image_lst: list, input_size: tuple):
# Extract Face features using insightface
ref_images = []
app = FaceAnalysis(name="buffalo_l",
root=ip_adapter_path,
providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=input_size)
for img in image_lst:
image = cv2.cvtColor(np.asarray(img), cv2.COLOR_BGR2RGB)
faces = app.get(image)
image = torch.from_numpy(faces[0].normed_embedding)
ref_images.append(image.unsqueeze(0))
ref_images = torch.cat(ref_images, dim=0)
return ref_images
ip_adapter_img = Image.open(ip_img_path)
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
clip_path,
torch_dtype=torch.float16,
use_safetensors=True
)
pipe = StableDiffusionPipeline.from_pretrained(
model_path,
variant="fp16",
safety_checker=None,
image_encoder=image_encoder,
torch_dtype=torch.float16).to("cuda")
adapter_file_lst = ["ip-adapter-faceid-plus_sd15.bin"]
adapter_weight_lst = [0.5]
pipe.load_ip_adapter(ip_adapter_path, subfolder=None, weight_name=adapter_file_lst)
pipe.set_ip_adapter_scale(adapter_weight_lst)
face_id_embeds = extract_face_features([ip_adapter_img], ip_adapter_img.size)
clip_embeds = pipe.prepare_ip_adapter_image_embeds(ip_adapter_image=[ip_adapter_img],
ip_adapter_image_embeds=None,
device='cuda',
num_images_per_prompt=1,
do_classifier_free_guidance=True)
pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
pipe.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False # True if Plus v2
generator = torch.manual_seed(33)
images = pipe(
prompt='a beautiful girl',
ip_adapter_image_embeds=clip_embeds,
negative_prompt="",
num_inference_steps=30,
num_images_per_prompt=1,
generator=generator,
width=512,
height=512).images
print(images)
```
| https://github.com/huggingface/diffusers/issues/7766 | closed | [] | 2024-04-24T07:56:38Z | 2024-11-20T00:02:30Z | null | Honey-666 |
huggingface/peft | 1,673 | How to set Lora_dropout=0 when loading trained peft model for inference? | ### System Info
peft==0.10.0
transformers==4.39.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class Linear(nn.Module, LoraLayer):
def forward(self, x: torch.Tensor, *args: Any, **kwargs: Any) -> torch.Tensor:
self._check_forward_args(x, *args, **kwargs)
adapter_names = kwargs.pop("adapter_names", None)
if self.disable_adapters:
if self.merged:
self.unmerge()
result = self.base_layer(x, *args, **kwargs)
elif adapter_names is not None:
result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs)
elif self.merged:
result = self.base_layer(x, *args, **kwargs)
else:
result = self.base_layer(x, *args, **kwargs)
torch_result_dtype = result.dtype
for active_adapter in self.active_adapters:
if active_adapter not in self.lora_A.keys():
continue
lora_A = self.lora_A[active_adapter]
lora_B = self.lora_B[active_adapter]
dropout = self.lora_dropout[active_adapter]
scaling = self.scaling[active_adapter]
x = x.to(lora_A.weight.dtype)
if not self.use_dora[active_adapter]:
result = result + lora_B(lora_A(dropout(x))) * scaling
else:
x = dropout(x)
result = result + self._apply_dora(x, lora_A, lora_B, scaling, active_adapter)
result = result.to(torch_result_dtype)
return result
```
### Expected behavior
We can see that `lora_dropout` in the forward function is applied the same way whether in training or inference mode. | https://github.com/huggingface/peft/issues/1673 | closed | [] | 2024-04-24T07:47:19Z | 2024-05-10T02:22:17Z | null | flyliu2017 |
huggingface/optimum | 1,826 | Phi3 support | ### Feature request
Microsoft's new Phi-3 model, in particular the 128K-context mini model, is not supported by the Optimum export.
Error is:
"ValueError: Trying to export a phi3 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type phi3 to be supported natively in the ONNX export."
### Motivation
Phi3-mini is potentially very significant as it has a large context but a small size. This could be used in lots of scenarios if it has good performance.
### Your contribution
Unlikely I could do a PR as ONNX work is not my forte. | https://github.com/huggingface/optimum/issues/1826 | closed | [] | 2024-04-23T15:54:21Z | 2024-05-24T13:53:08Z | 4 | martinlyons |
huggingface/datasets | 6,830 | Add a doc page for the convert_to_parquet CLI | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | https://github.com/huggingface/datasets/issues/6830 | closed | [
"documentation"
] | 2024-04-23T09:49:04Z | 2024-04-25T10:44:11Z | 0 | severo |
huggingface/transformers.js | 723 | 404 when trying Qwen in V3 | ### Question
This is probably just because V3 is a work in progress, but I wanted to make sure.
When trying to run Qwen 1.5 - 0.5B it works with the V2 script, but when swapping to V3 I get a 404 not found.
```
type not specified for model. Using the default dtype: q8.
GET https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/resolve/main/onnx/model_quantized.onnx 404 (Not Found)
```
It seems V3 is looking for a file that was renamed 3 months ago.
[Rename onnx/model_quantized.onnx to onnx/decoder_model_merged_quantized.onnx](https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/commit/09e055ac27002bb954137751b31376de79ae17a5)
I've tried setting `dtype` to 16 and 32, which does change the URL it tries to get, but those URLs also do not exist :-D
e.g. `https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/resolve/main/onnx/model_fp16.onnx` when using `dtype: 'fp16'`.
Is there something I can do to make V3 find the correct files?
(I'm still trying to find that elusive small model with a large context size to do document summarization with) | https://github.com/huggingface/transformers.js/issues/723 | open | [
"question"
] | 2024-04-22T19:14:17Z | 2024-05-28T08:26:09Z | null | flatsiedatsie |
huggingface/diffusers | 7,740 | How to get config of single_file | Hi,
Is there any way to get the equivalent of model_index.json from a single_file? | https://github.com/huggingface/diffusers/issues/7740 | closed | [] | 2024-04-22T14:00:21Z | 2024-04-22T23:26:50Z | null | suzukimain |
huggingface/diffusers | 7,724 | RuntimeError: Error(s) in loading state_dict for AutoencoderKL: Missing Keys! How to solve? | ### Describe the bug
I am trying to get a LoRA running locally on my computer by using this code: https://github.com/hollowstrawberry/kohya-colab and changing it to a local format. When I get to the loading of the models, it gives an error. It seems that the autoencoder model layout has changed, but I do not know how to adjust or solve this in any of the files. I am a very amateur coder; could someone still help me out?
### Reproduction
Here is the code: https://github.com/hollowstrawberry/kohya-colab
### Logs
```shell
Traceback (most recent call last):
File "/Users/veravanderburg/Loras/kohya-trainer/train_network_wrapper.py", line 9, in <module>
train(args)
File "/Users/veravanderburg/Loras/kohya-trainer/train_network.py", line 168, in train
text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype, accelerator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/veravanderburg/Loras/kohya-trainer/library/train_util.py", line 3149, in load_target_model
text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(
^^^^^^^^^^^^^^^^^^^
File "/Users/veravanderburg/Loras/kohya-trainer/library/train_util.py", line 3115, in _load_target_model
text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(args.v2, name_or_path, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/veravanderburg/Loras/kohya-trainer/library/model_util.py", line 873, in load_models_from_stable_diffusion_checkpoint
info = vae.load_state_dict(converted_vae_checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKL:
Missing key(s) in state_dict: "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", "decoder.mid_block.attentions.0.to_out.0.bias".
Unexpected key(s) in state_dict: "encoder.mid_block.attentions.0.key.bias", "encoder.mid_block.attentions.0.key.weight", "encoder.mid_block.attentions.0.proj_attn.bias", "encoder.mid_block.attentions.0.proj_attn.weight", "encoder.mid_block.attentions.0.query.bias", "encoder.mid_block.attentions.0.query.weight", "encoder.mid_block.attentions.0.value.bias", "encoder.mid_block.attentions.0.value.weight", "decoder.mid_block.attentions.0.key.bias", "decoder.mid_block.attentions.0.key.weight", "decoder.mid_block.attentions.0.proj_attn.bias", "decoder.mid_block.attentions.0.proj_attn.weight", "decoder.mid_block.attentions.0.query.bias", "decoder.mid_block.attentions.0.query.weight", "decoder.mid_block.attentions.0.value.bias", "decoder.mid_block.attentions.0.value.weight".
```
### System Info
that command does not work for me
### Who can help?
@saya | https://github.com/huggingface/diffusers/issues/7724 | closed | [
"bug"
] | 2024-04-19T13:27:17Z | 2024-04-22T08:45:24Z | null | veraburg |
huggingface/optimum | 1,821 | Idefics2 Support in Optimum for ONNX export | ### Feature request
With reference to the new Idefics2 model: https://huggingface.co/HuggingFaceM4/idefics2-8b
I would like to export it to ONNX, which is currently not possible.
Please enable conversion support. Current error, with transformers installed from Git via pip:
```
Traceback (most recent call last):
File "/usr/local/bin/optimum-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/optimum/commands/optimum_cli.py", line 163, in main
service.run()
File "/usr/local/lib/python3.10/dist-packages/optimum/commands/export/onnx.py", line 265, in run
main_export(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 352, in main_export
onnx_export_from_model(
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1048, in onnx_export_from_model
raise ValueError(
ValueError: Trying to export a idefics2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type idefics2 to be supported natively in the ONNX export.
```
### Motivation
The model is good and I would like to export it to ONNX ASAP.
### Your contribution
- | https://github.com/huggingface/optimum/issues/1821 | open | [
"feature-request",
"onnx"
] | 2024-04-19T07:12:41Z | 2025-02-18T19:25:11Z | 8 | gtx-cyber |
huggingface/alignment-handbook | 158 | How to work with local data | I downloaded a dataset from the HF Hub. I want to load it locally, but the script still tries to download it from the Hub and place it into the cache.
How can I use the local one I already downloaded?
Thank you. | https://github.com/huggingface/alignment-handbook/issues/158 | open | [] | 2024-04-18T10:26:14Z | 2024-05-14T11:20:55Z | null | pretidav |
huggingface/optimum-quanto | 182 | Can I use quanto on AMD GPU? | Does quanto work with AMD GPUs ? | https://github.com/huggingface/optimum-quanto/issues/182 | closed | [
"question",
"Stale"
] | 2024-04-18T03:06:54Z | 2024-05-25T01:49:56Z | null | catsled |
huggingface/accelerate | 2,680 | How to get pytorch_model.bin from checkpoint files without zero_to_fp32.py | https://github.com/huggingface/accelerate/issues/2680 | closed | [] | 2024-04-17T11:30:32Z | 2024-04-18T22:40:14Z | null | lipiji |
huggingface/datasets | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (supported) data files found in cis-lmu/Glot500",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n",
"datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n"
]
}
```
because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4
Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would tell that configuration `aze_Ethi` has no supported data files, instead of telling that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).
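A rough sketch of the kind of per-config check I have in mind (pure Python, with hypothetical helper names, not the actual `datasets` internals):

```python
def first_broken_config(data_files_per_config):
    """Return the name of the first config with no supported data files, if any."""
    for config_name, files in data_files_per_config.items():
        if not files:
            return config_name
    return None

def build_error_message(dataset_name, data_files_per_config):
    broken = first_broken_config(data_files_per_config)
    if broken is not None:
        # Blame the specific config instead of the whole dataset
        return (
            f"No (supported) data files found for config '{broken}' "
            f"of dataset {dataset_name}"
        )
    return f"No (supported) data files found in {dataset_name}"
```

With this, the error for the case above would mention `aze_Ethi` explicitly instead of blaming `cis-lmu/Glot500` as a whole.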
### Motivation
Giving more detail in the error would help the Datasets Hub users to debug why the dataset viewer does not work.
### Your contribution
Not sure how to best fix this, as there are a lot of loops on the dataset configs in the traceback methods. "maybe" it would be easier to handle if the code was completely isolating each config. | https://github.com/huggingface/datasets/issues/6819 | open | [
"enhancement"
] | 2024-04-17T11:19:47Z | 2024-04-17T11:19:47Z | 0 | severo |
huggingface/optimum | 1,818 | Request for ONNX Export Support for Blip Model in Optimum | Hi Team,
I hope this message finds you well.
I've encountered an issue while attempting to export Blip model into the ONNX format using Optimum. I have used below command.
`! optimum-cli export onnx -m Salesforce/blip-itm-base-coco --task feature-extraction blip_onnx`
It appears that Optimum currently lacks support for this functionality, leading to errors during the export process.
`ValueError: Trying to export a blip model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type blip to be supported natively in the ONNX export`
Could you kindly provide insights into when we might expect support for exporting Blip models to ONNX to be implemented in Optimum?
Thank you for considering this request. I look forward to any updates or information you can provide on this matter.
| https://github.com/huggingface/optimum/issues/1818 | open | [
"feature-request",
"question",
"onnx"
] | 2024-04-17T08:55:45Z | 2024-10-14T12:26:36Z | null | n9s8a |
huggingface/transformers.js | 715 | How to unload/destroy a pipeline? | ### Question
I tried to find how to unload a pipeline to free up memory in the documentation, but couldn't find a mention of how to do that properly.
Is there a proper way to "unload" a pipeline?
I'd be happy to add the answer to the documentation. | https://github.com/huggingface/transformers.js/issues/715 | closed | [
"question"
] | 2024-04-16T09:02:05Z | 2024-05-29T09:32:23Z | null | flatsiedatsie |
huggingface/transformers.js | 714 | Reproducing model conversions | ### Question
I'm trying to reproduce the conversion of `phi-1_5_dev` to better understand the process. I'm running into a few bugs / issues along the way that I thought it'd be helpful to document.
The model [`@Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev) states:
> https://huggingface.co/susnato/phi-1_5_dev with ONNX weights to be compatible with Transformers.js.
I'm doing the following:
```
git clone https://github.com/xenova/transformers.js.git && cd transformers.js/scripts
git clone https://huggingface.co/susnato/phi-1_5_dev
python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
python3 convert.py --quantize --model_id phi-1_5_dev --task "text-generation"
```
Here, I hit my first issue - it looks like `transformers` on `pypi` does not support Phi:
```
raise KeyError(key)
KeyError: 'phi'
```
So I install from Github:
```
pip install git+https://github.com/huggingface/transformers.git
```
That produces:
```
RuntimeError: Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
cannot import name 'is_torch_less_than_1_11' from 'transformers.pytorch_utils' (/Users/thekevinscott/code/codegen/research/model-conversion/throwaway/transformers.js/scripts/.venv/lib/python3.10/site-packages/transformers/pytorch_utils.py)
```
I believe `optimum` is also out of date:
```
pip install git+https://github.com/huggingface/optimum.git
```
With those two dependencies updated, this command now works:
```
python3 convert.py --quantize --model_id phi-1_5_dev --task "text-generation"
```
Though there are a few warnings I'm assuming I can ignore:
```
Ignore MatMul due to non constant B: /[/model/layers.22/self_attn/MatMul]
Ignore MatMul due to non constant B: /[/model/layers.22/self_attn/MatMul_1]
Ignore MatMul due to non constant B: /[/model/layers.23/self_attn/MatMul]
Ignore MatMul due to non constant B: /[/model/layers.23/self_attn/MatMul_1]
```
However, out of the box it can't find the right `onnx` file:
```
Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "transformers.js/scripts/models/phi-1_5_dev/onnx/decoder_model_merged_quantized.onnx".
```
I see in the [`@Xenova` repo history](https://huggingface.co/Xenova/phi-1_5_dev/commit/ae1a980babe16f9d136c22eb119d171dec7c6a09) that the files were manually renamed; I'll try that too:
```
mv model.onnx decoder_model_merged.onnx
mv model_quantized.onnx decoder_model_merged_quantized.onnx
mv model.onnx_data decoder_model_merged.onnx_data
```
I then try to run the model with:
```
const model = await loadModel('transformers.js/scripts/models/phi-1_5_dev', {
});
const result = await model('Write me a list of numbers:\n', {
});
console.log('result', result);
```
The model loads, but upon generating I see:
```
WARNING: Too many inputs were provided (51 > 3). The following inputs will be ignored: "past_key_values.0.key, past_key_values.0.value, past_key_values.1.key, past_key_values.1.value, past_key_values.2.key, past_key_values.2.value, past_key_values.3.key, past_key_values.3.value, past_key_values.4.key, past_key_values.4.value, past_key_values.5.key, past_key_values.5.value, past_key_values.6.key, past_key_values.6.value, past_key_values.7.key, past_key_values.7.value, past_key_values.8.key, past_key_values.8.value, past_key_values.9.key, past_key_values.9.value, past_key_values.10.key, past_key_values.10.value, past_key_values.11.key, past_key_values.11.value, past_key_values.12.key, past_key_values.12.value, past_key_values.13.key, past_key_values.13.value, past_key_values.14.key, past_key_values.14.value, past_key_values.15.key, past_key_values.15.value, past_key_values.16.key, past_key_values.16.value, past_key_values.17.key, past_key_values.17.value, past_key_values.18.key, past_key_values.18.value, past_key_values.19.key, past_key_values.19.value, past_key_values.20.key, past_key_values.20.value, past_key_values.21.key, past_key_values.21.value, past_key_values.22.key, past_key_values.22.value, past_key_values.23.key, past_key_values.23.value".
2024-04-15 11:00:50.956 node[91488:12372370] 2024-04-15 11:00:50.956090 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Gather node. Name:'/model/layers.0/self_attn/Gather_4' Status Message: indices element out of data bounds, idx=8 must be within the inclusive range [-1,0]
An error occurred during model execution: "Error: Non-zero status code returned while running Gather node. Name:'/model/layers.0/self_attn/Gather_4' Status Message: indices element out of data bounds, idx=8 must be within the inclusive range [-1,0]".
Inputs given to model: [Object: null prototype] {
input_ids: Tensor {
dims: [ 1, 1 ],
type: 'int64',
data: BigInt64Array(1) [ 13n ],
size: 1
},
attention_mask: T | https://github.com/huggingface/transformers.js/issues/714 | open | [
"question"
] | 2024-04-15T15:02:33Z | 2024-05-10T14:26:00Z | null | thekevinscott |
huggingface/sentence-transformers | 2,594 | What is the maximum number of sentences that a fast cluster can cluster? | What is the maximum number of sentences that a fast cluster can cluster? When I cluster 2 million sentences, the cluster gets killed. | https://github.com/huggingface/sentence-transformers/issues/2594 | open | [] | 2024-04-15T09:55:06Z | 2024-04-15T09:55:06Z | null | BinhMinhs10 |
huggingface/dataset-viewer | 2,721 | Help dataset owner to choose between configs and splits? | See https://huggingface.slack.com/archives/C039P47V1L5/p1713172703779839
> Am I correct in assuming that if you specify a "config" in a dataset, only the given config is downloaded, but if you specify a split, all splits for that config are downloaded? I came across it when using facebook's belebele (https://huggingface.co/datasets/facebook/belebele). Instead of a config for each language, they use a split for each language, but that seems to mean that the full dataset is downloaded, even if you select just one language split.
For languages, we recommend using different configs, not splits.
Maybe we should also show a warning / open a PR/discussion? when a dataset contains more than 5 splits, hinting that it might be better to use configs? | https://github.com/huggingface/dataset-viewer/issues/2721 | open | [
"question",
"P2"
] | 2024-04-15T09:51:43Z | 2024-05-24T15:17:51Z | null | severo |
huggingface/diffusers | 7,676 | How to determine the type of file, such as checkpoint, etc. | Hello.
Is there some kind of script that determines the type of file "checkpoint", "LORA", "textual_inversion", etc.? | https://github.com/huggingface/diffusers/issues/7676 | closed | [] | 2024-04-14T23:58:08Z | 2024-04-15T02:50:43Z | null | suzukimain |
huggingface/diffusers | 7,670 | How to use IDDPM in diffusers ? | The code base is here:
https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py | https://github.com/huggingface/diffusers/issues/7670 | closed | [
"should-move-to-discussion"
] | 2024-04-14T12:30:34Z | 2024-11-20T00:17:18Z | null | jiarenyf |
huggingface/transformers.js | 713 | Help understanding logits and model vocabs | ### Question
I'm trying to write a custom `LogitsProcessor` and have some questions. For reference, I'm using [`Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev). I'm trying to implement a custom logic for white or blacklisting tokens, but running into difficulties understanding how to interpret token ids, tokens, and their decoded counterparts.
Here's what I think I understand:
- [The vocab file is defined at `vocab.json`](https://huggingface.co/Xenova/phi-1_5_dev/blob/main/vocab.json), and has 50,257 entries.
- This file is exposed on `pipeline.tokenizer.vocab`, translated from the object representation of `vocab.json` (`{ token: tokenID }`), to an array of `token`s whose indices correspond to `tokenID`.
- **Question:** `vocab.json` has 50,257 entries, but `pipeline.tokenizer.vocab` has 50,295 entries. Is this because `pipeline.tokenizer.vocab` _also_ includes `added_tokens.json`?
- And [`special_tokens_map.json`](https://huggingface.co/Xenova/phi-1_5_dev/blob/main/special_tokens_map.json) is already included in `vocab.json` it appears
- The tokens in the vocab file must be decoded before being displayed
- for example, the token in `vocab.json` at `50255` is `"Ġgazed"`, but if I decode this character by character (`pipeline.tokenizer.decoder.byte_decoder('Ġ')` becomes `32` which corresponds to a space `" "`) I get `" gazed"`. I _think_ these correspond to code points.
- The `logits` argument contains scores where the index of each score is the `tokenID`. So setting the score at position `50255` to `-Infinity` should ensure that the token `"Ġgazed"` (or, decoded, `" gazed"`) should never appear.
- The `logits` argument I'm getting back for this model in my `LogitsProcessor` has dimensions of `[51200,]`, while `pipeline.tokenizer.vocab` has a size of 50,295. That would seem to indicate 905 unused tokens at the end of the tensor; can these be safely ignored, or do they correspond to something important that I'm missing?
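To make the masking assumption concrete, here is a dependency-free sketch of what I'm attempting (plain Python lists stand in for the logits tensor; the ids and scores are made up):

```python
NEG_INF = float("-inf")

def blacklist_logits(scores, banned_token_ids):
    """Set the score of every banned token id to -inf, in place."""
    for token_id in banned_token_ids:
        if 0 <= token_id < len(scores):
            scores[token_id] = NEG_INF
    return scores

def argmax(scores):
    """Index of the highest score, i.e. the token greedy decoding would pick."""
    return max(range(len(scores)), key=lambda i: scores[i])
```

If my reading is right, masking any of the 905 trailing ids past the vocab size should also be harmless, since the tokenizer never produces them.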
I'd appreciate any insight or feedback on whether my assumptions above are correct or not. Thank you! | https://github.com/huggingface/transformers.js/issues/713 | closed | [
"question"
] | 2024-04-13T21:06:14Z | 2024-04-14T15:17:43Z | null | thekevinscott |
huggingface/lighteval | 155 | How to run a 30B-plus model with lighteval when accelerate launch fails? OOM | CUDA memory OOM when I launch an evaluation for a 30B model using lighteval.
What's the correct config for it? | https://github.com/huggingface/lighteval/issues/155 | closed | [] | 2024-04-13T03:49:20Z | 2024-05-04T11:18:38Z | null | xiechengmude |
huggingface/transformers | 30,213 | Mamba: which tokenizer has been saved and how to use it? | ### System Info
Hardware independent.
### Who can help?
@ArthurZucker
I described the doubts in the link below around 1 month ago, but maybe model-hub discussions are not so active. Then I post it here as repo issue. Please, let me know where to discuss it :)
https://huggingface.co/state-spaces/mamba-2.8b-hf/discussions/1
Thanks!
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
. | https://github.com/huggingface/transformers/issues/30213 | closed | [] | 2024-04-12T11:28:17Z | 2024-05-17T13:13:12Z | null | javiermcebrian |
huggingface/sentence-transformers | 2,587 | Implementing Embedding Quantization for Dynamic Serving Contexts | I'm currently exploring embedding quantization strategies to enhance storage and computation efficiency while maintaining high accuracy. Specifically, I'm looking at integrating these strategies with Infinity (https://github.com/michaelfeil/infinity/discussions/198), a high-throughput, low-latency REST API for serving vector embeddings.
Here is the quantization method I want to use from sentence-transformers (specifically scalar int8: binary quantization also reduces the vector dimensions, which I want to avoid in order to keep accuracy high): https://sbert.net/examples/applications/embedding-quantization/README.html
So this is what I want to apply:
```
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings
from datasets import load_dataset
# 1. Load an embedding model
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
# 2. Prepare an example calibration dataset
corpus = load_dataset("nq_open", split="train[:1000]")["question"]
calibration_embeddings = model.encode(corpus)
# 3. Encode some text without quantization & apply quantization afterwards
embeddings = model.encode(["I am driving to the lake.", "It is a beautiful day."])
int8_embeddings = quantize_embeddings(
    embeddings,
    precision="int8",
    calibration_embeddings=calibration_embeddings,
)
```
The main challenge with scalar quantization is that it requires a calibration dataset to compute min and max values, which makes the embedding process stateful. This conflicts with the need for flexible, dynamic serving via the Infinity API, which typically handles embeddings on the fly. The embedding API I created is used by various other services with different types of datasets, so I am looking for a way to avoid needing such a calibration dataset.
I am seeking advice on:
- Managing the statefulness introduced by scalar quantization.
- Alternative strategies that might be more suitable for dynamic environments where embeddings are generated on demand.
- Any guidance or suggestions on how to tackle these issues would be greatly appreciated.
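For context, here is my mental model of per-value scalar int8 quantization, and why fixed ranges (instead of calibrated ones) would make the process stateless. A pure-Python sketch where `lo`/`hi` stand in for the calibrated min/max:

```python
def quantize_int8(values, lo, hi):
    """Map floats clamped to [lo, hi] onto the int8 range [-128, 127]."""
    scale = (hi - lo) / 255.0
    return [round((min(max(v, lo), hi) - lo) / scale) - 128 for v in values]

def dequantize_int8(qvalues, lo, hi):
    """Approximate inverse of quantize_int8."""
    scale = (hi - lo) / 255.0
    return [(q + 128) * scale + lo for q in qvalues]
```

If I fix `lo`/`hi` per dimension up front (e.g. for normalized embeddings the values are bounded), no calibration dataset is needed; I believe `quantize_embeddings` also accepts precomputed `ranges` instead of `calibration_embeddings`, which would achieve the same thing, but please correct me if I misread the API.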
Thank you! | https://github.com/huggingface/sentence-transformers/issues/2587 | open | [
"question"
] | 2024-04-11T11:03:23Z | 2024-04-12T07:28:48Z | null | Nookbe |
huggingface/diffusers | 7,636 | how to use the controlnet sdxl tile model in diffusers | ### Describe the bug
I want to use [this model](https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1) to make my slightly blurry photos clear, so I chose this model.
I followed the code [here](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile), but since the model mentioned above is XL rather than 1.5, I changed the code accordingly, and it now errors.
### Reproduction
```python
import torch
from PIL import Image
from diffusers import ControlNetModel, DiffusionPipeline, StableDiffusionXLControlNetPipeline

def resize_for_condition_image(input_image: Image, resolution: int):
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img

controlnet = ControlNetModel.from_pretrained(
    '/mnt/asian-t2i/pretrained_models/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1',
    torch_dtype=torch.float16, use_safetensors=True)
pipe = DiffusionPipeline.from_pretrained(
    "/mnt/asian-t2i/pretrained_models/RealVisXL_V3.0",
    custom_pipeline="stable_diffusion_controlnet_img2img",
    controlnet=controlnet,
    torch_dtype=torch.float16).to('cuda')
pipe.enable_xformers_memory_efficient_attention()

source_image = Image.open("/mnt/asian-t2i/data/luchuan/1024/0410-redbook-luchuan-6.jpg")
condition_image = resize_for_condition_image(source_image, 1024)
image = pipe(
    prompt="best quality",
    negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
    image=condition_image,
    controlnet_conditioning_image=condition_image,
    width=condition_image.size[0],
    height=condition_image.size[1],
    strength=1.0,
    generator=torch.manual_seed(0),
    num_inference_steps=32,
).images[0]
image.save('output.png')
```
### Logs
```shell
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:678: FutureWarning: 'cached_download' is the legacy way to download files from the HF hub, please consider upgrading to 'hf_hub_download'
warnings.warn(
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:02<00:00, 2.00it/s]
You have disabled the safety checker for <class 'diffusers_modules.git.stable_diffusion_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
0%| | 0/32 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/mnt/asian-t2i/demo.py", line 31, in <module>
image = pipe(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/.cache/huggingface/modules/diffusers_modules/git/stable_diffusion_controlnet_img2img.py", line 839, in __call__
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/asian-t2i/diffusers/src/diffusers/models/controlnet.py", line 775, in forward
if "text_embeds" not in added_cond_kwargs:
TypeError: argument of type 'NoneType' is not iterable
```
### System Info
Name: diffusers
Version: 0.27.0.dev0
### Who can help?
@sayakpaul @yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/7636 | closed | [
"bug",
"stale"
] | 2024-04-11T03:20:42Z | 2024-06-29T13:26:58Z | null | xinli2008 |
huggingface/optimum-quanto | 161 | Question: any plan to formally support smooth quantization and make it more general | Awesome work!
I noticed there are smooth quant implemented under [external](https://github.com/huggingface/quanto/tree/main/external/smoothquant). Currently, its implementation seems to be model-specific, we can only apply smooth on special `Linear`.
However, in general, the smooth can be applied on any `Linear` by inserting a `mul`. Are there any plans to officially support smooth quantization in-tree? My initial thought was, is it possible to define a `SmoothTensor` and use `__torch_dispatch__` to override the `bmm` behavior? | https://github.com/huggingface/optimum-quanto/issues/161 | closed | [
"question",
"Stale"
] | 2024-04-11T02:45:31Z | 2024-05-18T01:49:52Z | null | yiliu30 |
huggingface/accelerate | 2,647 | How to use deepspeed with dynamic batch? | ### System Info
```Shell
- `Accelerate` version: 0.29.1
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/yuchao/miniconda3/envs/TorchTTS/bin/accelerate
- Python version: 3.10.13
- Numpy version: 1.23.5
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- System RAM: 125.48 GB
- GPU type: NVIDIA GeForce RTX 4090
- `Accelerate` default config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: none
offload_param_device: none
zero3_init_flag: false
zero_stage: 2
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
For sequence tasks, we use dynamic batching to group long sequences into small batches and short sequences into large batches. But DeepSpeed here requires specifying either `batch_size` or `train_micro_batch_size_per_gpu`, which doesn't work with a variable batch size. Any idea how to fix that?
```
When using DeepSpeed, `accelerate.prepare()` requires you to pass at least one of training or evaluation dataloaders with `batch_size` attribute returning an integer value or alternatively set an integer value in `train_micro_batch_size_per_gpu` in the deepspeed config file or assign integer value to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`.
```
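For concreteness, this is the kind of dynamic batching I mean: a token-budget batcher where the batch size varies with sequence length (a pure-Python sketch, not our actual sampler):

```python
def dynamic_batches(lengths, max_tokens):
    """Group sample indices so each batch's padded size stays under max_tokens.

    Sorting by length keeps short sequences together (large batches) and
    long sequences together (small batches). A single over-long sample still
    gets its own batch.
    """
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches, current = [], []
    for idx in order:
        longest = max([lengths[i] for i in current] + [lengths[idx]])
        if current and longest * (len(current) + 1) > max_tokens:
            batches.append(current)
            current = []
        current.append(idx)
    if current:
        batches.append(current)
    return batches
```

Separately, the error message itself suggests assigning an integer to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`, but I'm unsure whether a nominal value there is correct when the real batch size varies per step.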
### Expected behavior
Be able to train deepspeed with dynamic batch | https://github.com/huggingface/accelerate/issues/2647 | closed | [] | 2024-04-10T09:09:53Z | 2025-05-11T15:07:27Z | null | npuichigo |
huggingface/transformers.js | 690 | Is top-level await necessary in the v3 branch? | ### Question
I saw the excellent performance of WebGPU, so I tried to install xenova/transformers.js#v3 as a dependency in my project.
I found that v3 uses the top-level await syntax. If I can't restrict users to using the latest browser version, I have to make it compatible (using `vite-plugin-top-level-await` or `rollup-plugin-tla`).
Is it possible to use other methods instead of top-level await? Or is this project not intended to support users who do not have support for top-level await?
Thanks. | https://github.com/huggingface/transformers.js/issues/690 | closed | [
"question"
] | 2024-04-10T08:49:32Z | 2024-04-11T17:18:42Z | null | ceynri |
huggingface/optimum-quanto | 158 | How does quanto support int8 conv2d and linear? | Hi, I looked into the code and didn't find any CUDA kernel related to conv2d and linear. How did you implement the CUDA backend for conv2d/linear? Thanks | https://github.com/huggingface/optimum-quanto/issues/158 | closed | [
"question"
] | 2024-04-10T05:41:43Z | 2024-04-11T09:26:35Z | null | zhexinli |
huggingface/transformers.js | 689 | Abort the audio recognition process | ### Question
Hello! How can I stop the audio file recognition process while keeping the model loaded? If I terminate the worker, I have to reload the model to start recognizing a new audio file. I need either the ability to send the pipeline a command that stops the recognition process, or the ability to load the model first and then pass it as an object to the pipeline. Thank you.
"question"
] | 2024-04-10T02:51:37Z | 2024-04-20T06:09:11Z | null | innoware11 |
huggingface/transformers | 30,154 | Question about how to write code for trainer and dataset for multi-gpu | ### System Info
- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, I have a quick question about how to write dataset and trainer code in a multi-GPU setting.
Here is my workflow.
I have a dataset where I called
```
dataset = dataset.load_dataset(...)
```
I need to do some preprocessing for it and the dataset becomes an Iterable dataset.
and then I pass the dataset into the trainer like
```
trainer = Trainer(train_data=dataset)
trainer.train()
```
My question is: since I am running on multiple GPUs and use the command
```
torchrun --standalone --nnodes=1 --nproc_per_node=2 train_lora.py
```
two processes execute the same code above, which causes the dataset and trainer to be created twice. Should the dataset and trainer be created once or twice? If only once, should I wrap all the code like this?
```
if accelerator.is_main_process:
    dataset = dataset.load_dataset(...)
    trainer = Trainer(train_data=dataset)
    trainer.train()
```
I do observe that only one dataset is used for generating the samples even if we create two dataset objects and do not wrap the code in accelerator.is_main_process. That is because the dataset is already converted by the trainer for distributed training. So I think there is no point in creating the dataset twice, since only the first one is used. How do I write the code so that there is no error on the second process? If I set the second process's dataset to None, the trainer raises an error because the dataset is empty.
Do we need to create two trainers, each corresponding to one GPU, or should we have only one trainer in charge of both GPUs? What is the best way to write the code to achieve this?
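My current mental model is that every process creates the same dataset and trainer, and the trainer shards the data per rank, roughly like this round-robin sketch (how I understand `DistributedSampler` to behave; purely illustrative):

```python
def shard_for_rank(samples, rank, world_size):
    """Each process keeps every world_size-th sample, offset by its rank."""
    return samples[rank::world_size]

samples = list(range(10))
# No sample is seen twice, and together the shards cover the dataset.
shards = [shard_for_rank(samples, r, 2) for r in range(2)]
```

If that is right, only expensive one-off preprocessing should be gated on the main process (e.g. with `accelerator.main_process_first()`), while the dataset and trainer objects are created in every process.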
### Expected behavior
The correct way to implement this scenario. | https://github.com/huggingface/transformers/issues/30154 | closed | [] | 2024-04-10T00:08:00Z | 2024-04-10T22:57:53Z | null | zch-cc |
huggingface/accelerate | 2,643 | How to use gather_for_metrics for object detection models? | ### Reproduction
I used the `gather_for_metrics` function as follows:
```python
predictions, ground_truths = accelerator.gather_for_metrics((predictions, ground_truths))
```
And i've got the error:
```
accelerate.utils.operations.DistributedOperationException: Impossible to apply the desired operation due to inadequate shapes. All shapes on the devices must be valid.
```
* ground_truths are dictionaries of torch.tensor with keys: `boxes`, `labels`, `image_id`, `area`, `iscrowd` following pytorch conventions: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html.
* predictions are dictionaries of torch.tensor with `boxes`, `labels` and `scores` keys.
I use 3 GPUs, and on each I have 120 dictionaries of predictions and ground truths, but as expected the tensor sizes inside each dictionary vary from 0 to n bbox predictions/ground truths.
But when gathering predictions, the `verify_operation` decorator raises an error because the tensor shapes inside the different dictionaries vary.
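For reference, the workaround I'm experimenting with: pad every per-image tensor to a fixed maximum before gathering, and carry the true count alongside. A pure-Python sketch with an arbitrary `PAD` sentinel:

```python
PAD = -1.0

def pad_boxes(boxes, max_boxes):
    """Pad a list of [x1, y1, x2, y2] boxes to max_boxes rows; return (padded, count)."""
    count = len(boxes)
    padded = boxes + [[PAD] * 4 for _ in range(max_boxes - count)]
    return padded, count

def unpad_boxes(padded, count):
    """Drop the padding rows again after gathering."""
    return padded[:count]
```

With real tensors the same idea applies per key (`boxes`, `labels`, `scores`). Alternatively, if I understand the API correctly, `accelerate.utils.gather_object` can gather arbitrary picklable objects without this shape check.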
### Expected behavior
Have the possibility to gather complex objects like dictionaries of torch.tensor with different shapes!
Thank you for your help and for this amazing framework 🙏 | https://github.com/huggingface/accelerate/issues/2643 | closed | [] | 2024-04-09T23:15:20Z | 2024-04-30T07:48:36Z | null | yann-rdgz |
huggingface/candle | 2,033 | How to use CUDA as the backend in `candle-wasm-examples/llama2-c`? | How to use CUDA as the backend in `candle-wasm-examples/llama2-c`?
In `candle-wasm-examples/llama2-c`, I made some changes, shown below.
```diff
--- a/candle-wasm-examples/llama2-c/Cargo.toml
+++ b/candle-wasm-examples/llama2-c/Cargo.toml
@@ -9,7 +9,7 @@ categories.workspace = true
license.workspace = true
[dependencies]
-candle = { workspace = true }
+candle = { workspace = true, features = ["cuda"] }
candle-nn = { workspace = true }
candle-transformers = { workspace = true }
num-traits = { workspace = true }
```
```diff
--- a/candle-wasm-examples/llama2-c/src/bin/m.rs
+++ b/candle-wasm-examples/llama2-c/src/bin/m.rs
@@ -14,7 +14,7 @@ pub struct Model {
impl Model {
fn process(&mut self, tokens: &[u32]) -> candle::Result<String> {
const REPEAT_LAST_N: usize = 64;
- let dev = Device::Cpu;
+ let dev = Device::new_cuda(0)?;
let input = Tensor::new(tokens, &dev)?.unsqueeze(0)?;
let logits = self.inner.llama.forward(&input, tokens.len())?;
let logits = logits.squeeze(0)?;
```
```diff
--- a/candle-wasm-examples/llama2-c/src/worker.rs
+++ b/candle-wasm-examples/llama2-c/src/worker.rs
@@ -65,7 +65,7 @@ impl Model {
top_p: f64,
prompt: String,
) -> Result<()> {
- let dev = Device::Cpu;
+ let dev = Device::new_cuda(0)?;
let temp = if temp <= 0. { None } else { Some(temp) };
let top_p = if top_p <= 0. || top_p >= 1.0 {
None
@@ -248,7 +248,7 @@ impl TransformerWeights {
impl Model {
pub fn load(md: ModelData) -> Result<Self> {
- let dev = Device::Cpu;
+ let dev = Device::new_cuda(0)?;
let mut model = std::io::Cursor::new(md.model);
let config = Config::from_reader(&mut model)?;
let weights = TransformerWeights::from_reader(&mut model, &config, &dev)?;
```
But when I execute `trunk serve --release --public-url / --port 8080`, some errors occur.
```shell
= note: rust-lld: error: unable to find library -lcuda
rust-lld: error: unable to find library -lnvrtc
rust-lld: error: unable to find library -lcurand
rust-lld: error: unable to find library -lcublas
rust-lld: error: unable to find library -lcublasLt
error: could not compile `candle-wasm-example-llama2` (bin "worker") due to 1 previous error
2024-04-09T16:12:09.062364Z ERROR error
error from build pipeline
Caused by:
0: HTML build pipeline failed (2 errors), showing first
1: error from asset pipeline
2: running cargo build
3: error during cargo build execution
4: cargo call to executable 'cargo' with args: '["build", "--target=wasm32-unknown-unknown", "--manifest-path", "/work/training/candle/candle-wasm-examples/llama2-c/Cargo.toml", "--bin", "worker"]' returned a bad status: exit status: 101
```
How should I solve the above problem?
I can confirm that CUDA is installed correctly, and I'm able to execute the following commands.
```shell
cargo new myapp
cd myapp
cargo add --git https://github.com/huggingface/candle.git candle-core --features "cuda"
cargo build
```
| https://github.com/huggingface/candle/issues/2033 | closed | [] | 2024-04-09T16:16:55Z | 2024-04-12T08:26:24Z | null | wzzju |
huggingface/optimum | 1,804 | advice for simple onnxruntime script for ORTModelForVision2Seq (or separate encoder/decoder) | I am trying to implement this [class ](https://github.com/huggingface/optimum/blob/69af5dbab133f2e0ae892721759825d06f6cb3b7/optimum/onnxruntime/modeling_seq2seq.py#L1832) in C++ because, unfortunately, I didn't find any existing C++ implementation of it.
Therefore, my current approach is to reduce this class and its auxiliary classes to a simple onnxruntime prediction script, to make things easier to port to C++.
Does anyone have any advice in this matter? Thank you
| https://github.com/huggingface/optimum/issues/1804 | open | [
"question",
"onnxruntime"
] | 2024-04-09T15:14:40Z | 2024-10-14T12:41:15Z | null | eduardatmadenn |
huggingface/chat-ui | 997 | Community Assistants | Hi, I've looked through all the possible issues but I didn't find what I was looking for.
On a self-hosted instance, is the option to have community assistants, such as the ones on https://huggingface.co/chat/, unavailable? I've also noticed that when I create assistants on my side they do not show up in the community tab either; they are purely user-restricted. Am I missing something? I've configured the HF token and the API base; any hints are appreciated.

| https://github.com/huggingface/chat-ui/issues/997 | closed | [
"help wanted",
"assistants"
] | 2024-04-09T12:44:49Z | 2024-04-23T06:09:47Z | 2 | Coinficient |
huggingface/evaluate | 570 | [Question] How to have no preset values sent into `.compute()` | We have a use case, https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/llm_harness_mistral_arc.py,
where the default feature input types for `evaluate.Metric` are empty, and we have something like this in our `llm_harness_mistral_arc/llm_harness_mistral_arc.py`:
```python
import evaluate
import datasets
import lm_eval


@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class llm_harness_mistral_arc(evaluate.Metric):
    def _info(self):
        # TODO: Specifies the evaluate.EvaluationModuleInfo object
        return evaluate.MetricInfo(
            # This is the description that will appear on the modules page.
            module_type="metric",
            description="",
            citation="",
            inputs_description="",
            # This defines the format of each prediction and reference
            features={},
        )

    def _compute(self, pretrained=None, tasks=[]):
        outputs = lm_eval.simple_evaluate(
            model="hf",
            model_args={"pretrained": pretrained},
            tasks=tasks,
            num_fewshot=0,
        )
        results = {}
        for task in outputs['results']:
            results[task] = {'acc': outputs['results'][task]['acc,none'],
                             'acc_norm': outputs['results'][task]['acc_norm,none']}
        return results
```
And the expected user behavior is something like this, [in]:
```python
import evaluate
module = evaluate.load("alvations/llm_harness_mistral_arc")
module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2", tasks=["arc_easy"])
```
And the expected output as per our `tests.py`, https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/tests.py [out]:
```
{'arc_easy': {'acc': 0.8131313131313131, 'acc_norm': 0.7680976430976431}}
```
But `evaluate.Metric.compute()` somehow still expects a batch of inputs, and `module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2", tasks=["arc_easy"])` throws an error:
```python
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-20-bd94e5882ca5> in <cell line: 1>()
----> 1 module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2",
      2                tasks=["arc_easy"])

2 frames
/usr/local/lib/python3.10/dist-packages/evaluate/module.py in _get_all_cache_files(self)
    309         if self.num_process == 1:
    310             if self.cache_file_name is None:
--> 311                 raise ValueError(
    312                     "Evaluation module cache file doesn't exist. Please make sure that you call `add` or `add_batch` "
    313                     "at least once before calling `compute`."

ValueError: Evaluation module cache file doesn't exist. Please make sure that you call `add` or `add_batch` at least once before calling `compute`.
```
#### Q: Is it possible for the `.compute()` to expect no features?
I've also tried this but somehow the `evaluate.Metric.compute` is still looking for some sort of `predictions` variable.
```python
@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class llm_harness_mistral_arc(evaluate.Metric):
    def _info(self):
        # TODO: Specifies the evaluate.EvaluationModuleInfo object
        return evaluate.MetricInfo(
            # This is the description that will appear on the modules page.
            module_type="metric",
            description="",
            citation="",
            inputs_description="",
            # This defines the format of each prediction and reference
            features=[
                datasets.Features(
                    {
                        "pretrained": datasets.Value("string", id="sequence"),
                        "tasks": datasets.Sequence(datasets.Value("string", id="sequence"), id="tasks"),
                    }
                )]
        )

    def _compute(self, pretrained, tasks):
        outputs = lm_eval.simple_evaluate(
            model="hf",
            model_args={"pretrained": pretrained},
            tasks=tasks,
            num_fewshot=0,
        )
        results = {}
        for task in outputs['results']:
            results[task] = {'acc': outputs['results'][task]['acc,none'],
                             'acc_norm': outputs['results'][task]['acc_norm,none']}
        return results
```
then:
```python
import evaluate
module = evaluate.load("alvations/llm_harness_mistral_arc")
module.compute(pretrained="mistralai/Mistral-7B-Instruct-v0.2", tasks=["arc_easy"])
```
[out]:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-36-bd94e5882c | https://github.com/huggingface/evaluate/issues/570 | open | [] | 2024-04-08T22:58:41Z | 2024-04-08T23:54:42Z | null | alvations |
huggingface/transformers | 30,122 | What is the default multi-GPU training type? | ### System Info
NA
### Who can help?
@ArthurZucker , @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When running training using the transformers trainer, and setting device_map to auto, what is the default distributed training type that is used when the model is too large to fit on one GPU?
(assume that I have not yet run `accelerate config`).
Does the model just run with naive model parallel with layers split between different GPUs and with DP (not DDP) on the data side? Are the full gradients and also the optimizer state copied onto each GPU?
It would be helpful if this could be described in the Trainer section of the docs and also in the Multi-GPU docs.
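As far as I understand, when the model doesn't fit on one GPU, `device_map="auto"` gives naive model parallelism: whole layers are assigned to successive GPUs (via accelerate's `infer_auto_device_map`), only one GPU is active at a time, and each parameter's gradients and optimizer state live on the same GPU as the parameter, with no data-parallel replication. A toy, hypothetical sketch of that placement logic (the layer names and sizes are made up):

```python
def naive_device_map(layer_sizes, gpu_capacities):
    """Toy stand-in for accelerate's infer_auto_device_map: walk the layers
    in order and move to the next GPU once the current one is full."""
    device_map = {}
    gpu, used = 0, 0
    for name, size in layer_sizes.items():
        # Spill over to the next GPU when this layer no longer fits.
        if used + size > gpu_capacities[gpu] and gpu + 1 < len(gpu_capacities):
            gpu += 1
            used = 0
        device_map[name] = gpu
        used += size
    return device_map

# Four blocks of weights spread over two 6 GB GPUs:
print(naive_device_map({"embed": 2, "layer.0": 3, "layer.1": 3, "head": 2}, [6, 6]))
# → {'embed': 0, 'layer.0': 0, 'layer.1': 1, 'head': 1}
```

Tensor parallelism or ZeRO-style sharding would only come into play if DeepSpeed/FSDP were configured explicitly.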
### Expected behavior
NA | https://github.com/huggingface/transformers/issues/30122 | closed | [] | 2024-04-08T11:45:59Z | 2024-05-10T10:35:41Z | null | RonanKMcGovern |
huggingface/optimum | 1,798 | Issue Report: Unable to Export Qwen Model to ONNX Format in Optimum | ### System Info
```shell
Optimum Version: 1.18.0
Python Version: 3.8
Platform: Windows, x86_64
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
I am writing to report an issue I encountered while attempting to export a Qwen model to ONNX format using Optimum.
Error message:
" ValueError: Trying to export a qwen model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen to be supported natively in the ONNX export. "
Attached screenshot for reference.
<img width="957" alt="qwen_error_export" src="https://github.com/huggingface/optimum/assets/166393333/5b9e75fd-1839-434c-809e-5dd6832b0e05">
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code
### Expected behavior
I would expect Optimum to successfully export the Qwen model to ONNX format without encountering any errors or issues. | https://github.com/huggingface/optimum/issues/1798 | open | [
"bug"
] | 2024-04-08T11:36:09Z | 2024-04-08T11:36:09Z | 0 | Harini-Vemula-2382 |
huggingface/chat-ui | 986 | Github actions won't push built docker images on releases | We currently have a [github actions workflow](https://github.com/huggingface/chat-ui/blob/main/.github/workflows/build-image.yml) that builds an image on every push to `main` and tags it with `latest` and the commit id. [(see here)](https://github.com/huggingface/chat-ui/pkgs/container/chat-ui/versions)
The workflow should also push images tagged for each release, for example `v0.8`, but the workflow [fails](https://github.com/huggingface/chat-ui/actions/runs/8536772524) with a `buildx failed with: ERROR: tag is needed when pushing to registry` error.
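My hedged guess at a fix (untested; the action versions and image name below are assumptions): buildx errors out when the metadata step computes an empty tag list for release pushes, so the tag rules need an entry that fires on tag events, something like:

```yaml
- name: Extract Docker metadata
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/huggingface/chat-ui
    tags: |
      type=ref,event=tag
      type=sha
      type=raw,value=latest,enable={{is_default_branch}}

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
```

Here `type=ref,event=tag` would produce the `v0.8`-style tags on release pushes, while `type=sha` and the `latest` rule keep the per-commit tags the workflow already publishes on `main`.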
I think it would be really nice to have support for tagged images for each release, but I'm not the best with GitHub Actions, so if someone has some time and would like to look at it, that would be super appreciated 🤗 | https://github.com/huggingface/chat-ui/issues/986 | closed | [
"help wanted",
"CI/CD"
] | 2024-04-08T07:51:13Z | 2024-04-08T11:27:42Z | 2 | nsarrazin |
huggingface/candle | 2,025 | How to specify which graphics card to run a task on in a server with multiple graphics cards? | https://github.com/huggingface/candle/issues/2025 | closed | [] | 2024-04-07T10:48:35Z | 2024-04-07T11:05:52Z | null | lijingrs | |
huggingface/text-embeddings-inference | 229 | Question: How to add a prefix to the underlying server | I've managed to run Text Embeddings Inference perfectly using the pre-built Docker images, and I'm now trying to expose it to our internal components.
Right now they all share the following URL convention:
Myhost.com/modelname/v1/embeddings
I was wondering if it is possible to add this "model name" as a path prefix inside the application through some configuration.
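The closest workaround I can think of is putting a reverse proxy in front of the container and stripping the prefix there; a minimal nginx sketch of the idea (the upstream name and port are assumptions on my side):

```nginx
server {
    listen 80;

    # Clients call myhost.com/modelname/v1/embeddings; the trailing slash on
    # proxy_pass makes nginx drop the /modelname/ prefix before forwarding.
    location /modelname/ {
        proxy_pass http://text-embeddings-inference:80/;
        proxy_set_header Host $host;
    }
}
```

But if the prefix can be configured inside the application itself, that would be simpler.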
How could I do that? | https://github.com/huggingface/text-embeddings-inference/issues/229 | closed | [] | 2024-04-06T17:29:59Z | 2024-04-08T09:14:40Z | null | Ryojikn |
huggingface/transformers.js | 685 | Transformers.js seems to need an internet connection when it shouldn't? (Error: no available backend found.) | ### Question
What is the recommended way to get Transformers.js to work even when, later on, there is no internet connection?
Is it using a service worker? Or are there other (perhaps hidden) settings for managing caching of files?
I'm assuming here that the `Error: no available backend found` error message is related to Transformers.js not being able to find files once Wi-Fi has been turned off. I was a bit surprised by that, since I do see a cache called `transformers-cache` being created. Is that not caching all the required files?
| https://github.com/huggingface/transformers.js/issues/685 | open | [
"question"
] | 2024-04-06T12:40:15Z | 2024-09-03T01:22:15Z | null | flatsiedatsie |