| repo (stringclasses, 147 values) | number (int64, 1–172k) | title (stringlengths, 2–476) | body (stringlengths, 0–5k) | url (stringlengths, 39–70) | state (stringclasses, 2 values) | labels (listlengths, 0–9) | created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 – 2026-01-06 07:33:18) | updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 – 2026-01-06 08:03:39) | comments (int64, 0–58 ⌀) | user (stringlengths, 2–28) |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 30,827 | Using this command (optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/) to perform the ONNX conversion, the tensor type of the model becomes int64. How to solve this problem? | ### System Info
transformers version : 4.38.1
platform: ubuntu 22.04
python version : 3.10.14
optimum version : 1.19.2
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `exampl... | https://github.com/huggingface/transformers/issues/30827 | closed | [] | 2024-05-15T12:45:50Z | 2024-06-26T08:04:10Z | null | JameslaoA |
huggingface/chat-ui | 1,142 | Feature request, local assistants | I experimented with a few assistants on HF.
The problem I am facing is that I don't know how to get the same behaviour I get on HF from local model (which is the same model).
I tried everything I could think of.
I think HF does some filtering or rephrasing or has an additional prompt before the assistant description... | https://github.com/huggingface/chat-ui/issues/1142 | open | [
"support"
] | 2024-05-15T11:11:29Z | 2024-05-27T06:53:21Z | 2 | Zibri |
huggingface/optimum | 1,855 | How to change the Optimum temporary path? | ### Feature request
c drive less space
### Motivation
help to solve many issue
### Your contribution
dont know | https://github.com/huggingface/optimum/issues/1855 | closed | [] | 2024-05-14T11:17:14Z | 2024-10-14T12:22:35Z | null | neonarc4 |
huggingface/optimum | 1,854 | ai21labs/Jamba-tiny-random support | ### Feature request
The ai21labs/Jamba-tiny-random model is not supported by Optimum export.
ValueError: Trying to export a jamba model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporte... | https://github.com/huggingface/optimum/issues/1854 | open | [
"feature-request",
"onnx"
] | 2024-05-14T10:22:05Z | 2024-10-09T09:10:58Z | 0 | frankia312 |
huggingface/transformers.js | 763 | Have you considered using WASM technology to implement this library? | ### Question
Hello, have you ever considered using WASM technology to implement this library? For example, Rust's wgpu-rs and C++'s Dawn are both implementations of WebGPU. They can be converted to WASM and can also be accelerated with SIMD. | https://github.com/huggingface/transformers.js/issues/763 | open | [
"question"
] | 2024-05-14T09:22:57Z | 2024-05-14T09:28:38Z | null | ghost |
huggingface/trl | 1,643 | How to save and resume a checkpoint from PPOTrainer | https://github.com/huggingface/trl/blob/5aeb752053876cce64f2164a178635db08d96158/trl/trainer/ppo_trainer.py#L203
It seems that every time the PPOTrainer is initialized, the accelerator is initialized as well. There's no API provided by PPOTrainer to resume checkpoints. How can we save and resume checkpoints? | https://github.com/huggingface/trl/issues/1643 | closed | [] | 2024-05-14T09:10:40Z | 2024-08-08T12:44:25Z | null | paraGONG |
huggingface/tokenizers | 1,531 | How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification | Hello.
I'm using the tokenizer to encode paired sentences with TemplateProcessing via batch_encode.
There's a confusing part where the method requires two lists for sentence A and sentence B.
According to the [guide documentation](https://huggingface.co/docs/tokenizers/quicktour): "To process a batch of sentences p... | https://github.com/huggingface/tokenizers/issues/1531 | closed | [
"Stale"
] | 2024-05-14T08:03:52Z | 2024-06-21T08:20:05Z | null | insookim43 |
huggingface/transformers.js | 762 | Options for the "translation" pipeline when using Xenova/t5-small | ### Question
The translation pipeline is [documented](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline) to use the src_lang and tgt_lang options to translate from the source language to the target language. However, when using Xenova/t5-small none of the options seem to be used. I... | https://github.com/huggingface/transformers.js/issues/762 | open | [
"question"
] | 2024-05-13T21:09:15Z | 2024-05-13T21:09:15Z | null | lucapivato |
huggingface/datasets | 6,894 | Better document defaults of to_json | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | https://github.com/huggingface/datasets/issues/6894 | closed | [
"documentation"
] | 2024-05-13T13:30:54Z | 2024-05-16T14:31:27Z | 0 | albertvillanova |
huggingface/chat-ui | 1,134 | Websearch fails when retrieving from PDF files | In chat-ui I am getting the error shown in the screenshot: on PDF files it always says "Failed to parse webpage". I set USE_LOCAL_WEBSEARCH=True in .env.local. Can anyone help me?

| https://github.com/huggingface/chat-ui/issues/1134 | open | [
"support",
"websearch"
] | 2024-05-13T06:41:08Z | 2024-06-01T09:25:59Z | 2 | prateekvyas1996 |
huggingface/parler-tts | 47 | Custom pronunciation for words - any thoughts / recommendations about how best to handle them? | Hello! This is a really interesting looking project.
Currently there doesn't seem to be any way that users can help the model correctly pronounce custom words - for instance, **JPEG** is something that speakers just need to know is broken down as "**Jay-Peg**" rather than **Jay-Pea-Ee-Gee**.
I appreciate this project is... | https://github.com/huggingface/parler-tts/issues/47 | open | [] | 2024-05-12T15:51:05Z | 2025-01-03T08:39:58Z | null | nmstoker |
huggingface/text-generation-inference | 1,875 | How to share memory among 2 GPUS for distributed inference? | # Environment Setup
Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: https://github.com/huggingface/text-generation-inference/commit/c38a7d7ddd9c612e368adec1ef94583be602fc7e
Docker label: sha-6c4496a
Kubernetes Cluster deployment
2 A100 GPU with 80GB RAM
12 CPU wit... | https://github.com/huggingface/text-generation-inference/issues/1875 | closed | [
"Stale"
] | 2024-05-10T08:49:05Z | 2024-06-21T01:48:05Z | null | martinigoyanes |
huggingface/accelerate | 2,759 | How to specify the backend of Trainer | ### System Info
```Shell
accelerate 0.28.0
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_... | https://github.com/huggingface/accelerate/issues/2759 | closed | [] | 2024-05-10T03:18:08Z | 2025-01-16T10:29:19Z | null | Orion-Zheng |
huggingface/lerobot | 167 | python3.10 how to install rerun-sdk | ### System Info
```Shell
ubuntu18.04
python3.10
ERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)
ERROR: No matching distribution found for rerun-sdk>=0.15.1
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [... | https://github.com/huggingface/lerobot/issues/167 | closed | [
"dependencies"
] | 2024-05-10T03:07:30Z | 2024-05-13T01:25:09Z | null | MountainIntelligent |
huggingface/safetensors | 478 | Can't seem to skip parameter initialization while using the `safetensors.torch.load_model` API! | ### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Te... | https://github.com/huggingface/safetensors/issues/478 | closed | [
"Stale"
] | 2024-05-09T19:12:05Z | 2024-06-15T01:49:24Z | 1 | goelayu |
huggingface/tokenizers | 1,525 | How to write a custom Wordpiece class? | My aim is to get the rwkv5 model's "tokenizer.json", but it is implemented through a slow tokenizer (class PreTrainedTokenizer).
I want to convert the "slow tokenizer" to a "fast tokenizer", which needs "tokenizer = Tokenizer(Wordpiece())", but rwkv5 has its own Wordpiece file.
So I want to create a custom Wordpiece.
the code i... | https://github.com/huggingface/tokenizers/issues/1525 | closed | [
"Stale"
] | 2024-05-09T03:48:27Z | 2024-07-18T01:53:23Z | null | xinyinan9527 |
huggingface/trl | 1,635 | How to use trl\trainer\kto_trainer.py | If I want to use KTO trainer, I could set the parameter [loss_type == "kto_pair"] in dpo_trainer.py. Then what is kto_trainer.py used for? And how to use it? | https://github.com/huggingface/trl/issues/1635 | closed | [] | 2024-05-09T02:40:14Z | 2024-06-11T10:17:51Z | null | mazhengyufreedom |
huggingface/datasets | 6,882 | Connection Error When Using By-pass Proxies | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(M... | https://github.com/huggingface/datasets/issues/6882 | open | [] | 2024-05-08T06:40:14Z | 2024-05-17T06:38:30Z | 1 | MRNOBODY-ZST |
huggingface/datatrove | 180 | how to turn log/traceback color off? | Trying datatrove for the first time and the program spews a bunch of logs and tracebacks in yellow and cyan which are completely unreadable on the b&w console.
Does the program assume that the user is using a w&b (dark) console?
I tried to grep for `color` to see how it controls the colors but found no... | https://github.com/huggingface/datatrove/issues/180 | closed | [] | 2024-05-08T03:51:11Z | 2024-05-17T17:53:20Z | null | stas00 |
huggingface/candle | 2,171 | How to run LLama-3 or Phi with more than 4096 prompt tokens? | Could you please show me an example where a LLama-3 model is used (ideally GGUF quantized) and the initial prompt is more than 4096 tokens long? Or better, 16-64K long (for RAG). Currently everything I do ends with an error:
In this code:
let logits = model.forward(&input, 0); // input is > 4096 tokens
Error:
narrow invalid a... | https://github.com/huggingface/candle/issues/2171 | open | [] | 2024-05-07T20:15:28Z | 2024-05-07T20:16:13Z | null | baleksey |
huggingface/chat-ui | 1,115 | [v0.8.4] IMPORTANT: Talking to PDFs and general Roadmap? | Hi @nsarrazin
I have a couple of questions that I could not get answers to in the repo and on the web.
1. Is there a plan to enable file uploads (PDFs, etc) so that users can talk to those files? Similar to ChatGPT, Gemini etc?
2. Is there a feature roadmap available somewhere?
Thanks! | https://github.com/huggingface/chat-ui/issues/1115 | open | [] | 2024-05-07T06:10:20Z | 2024-09-10T15:44:16Z | 4 | adhishthite |
huggingface/candle | 2,167 | How to do a Axum's sse function for Candle? | fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> {
use std::io::Write;
self.tokenizer.clear();
let mut tokens = self
.tokenizer
.tokenizer()
.encode(prompt, true)
.map_err(E::msg)?
.get_ids()
.to_vec... | https://github.com/huggingface/candle/issues/2167 | closed | [] | 2024-05-07T02:38:50Z | 2024-05-08T04:27:14Z | null | sunnyregion |
huggingface/optimum | 1,847 | Static Quantization for Seq2Seq models like T5 | I'm currently trying to statically quantize T5, but the Optimum docs (last committed 10 months ago) say only dynamic quantization is supported, not static. Has anyone tried this before, or has Optimum updated anything related recently? Could someone help me take a look? | https://github.com/huggingface/optimum/issues/1847 | open | [
"question",
"quantization"
] | 2024-05-06T19:34:30Z | 2024-10-14T12:24:28Z | null | NQTri00 |
huggingface/optimum | 1,846 | Low performance of THUDM/chatglm3-6b onnx model | I ran the chatglm3-6b model by exporting it to ONNX framework using custom onnx configuration. Although the functionality is correct, the latency of the model is very high, much higher than the pytorch model.
I have attached minimal reproducible code which exports and runs the model. Can someone take a look into it... | https://github.com/huggingface/optimum/issues/1846 | open | [
"inference",
"onnxruntime",
"onnx"
] | 2024-05-06T17:18:58Z | 2024-10-14T12:25:29Z | 0 | tuhinp-amd |
huggingface/dataset-viewer | 2,775 | Support LeRobot datasets? | Currently:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image']
```
eg on https://... | https://github.com/huggingface/dataset-viewer/issues/2775 | open | [
"question",
"feature request",
"dependencies",
"P2"
] | 2024-05-06T09:16:40Z | 2025-07-24T03:36:41Z | null | severo |
huggingface/peft | 1,712 | How to fine-tune the Whisper model with 'initial_prompt' | When using 'initial_prompt', the decoding result of Whisper v2 fine-tuned on my data is bad; without it, the result is good.
However, when using 'initial_prompt', the decoding result of the base Whisper v2 model is also good, so it seems that if you want to use 'initial_prompt' during decoding, you must add it when t... | https://github.com/huggingface/peft/issues/1712 | closed | [] | 2024-05-06T06:28:20Z | 2024-06-13T15:03:43Z | null | zyb8543d |
huggingface/dataspeech | 17 | UnboundLocalError: cannot access local variable 't' where it is not associated with a value """ | ### What i do
Hello. I tried to annotate my own dataset. And I got an error that I don't understand.
I'm a newbie and am generally unable to understand what happened and why it happened.
I am attaching all the materials that I have
I have CSV-Scheme
| audio | text | speeker_id |
| ------------- | ------... | https://github.com/huggingface/dataspeech/issues/17 | closed | [] | 2024-05-05T20:49:26Z | 2024-05-28T11:31:37Z | null | anioji |
huggingface/parler-tts | 38 | How to use the Mozilla Common Voice dataset to train Parler-TTS | How to use the Mozilla Common Voice dataset to train Parler-TTS? Can you help me? | https://github.com/huggingface/parler-tts/issues/38 | open | [] | 2024-05-04T12:36:30Z | 2024-05-04T12:36:30Z | null | herbiel |
huggingface/setfit | 519 | how to optimize setfit inference | hi,
I'm currently investigating what options we have to optimize SetFit inference and have a few questions about it:
- gpu:
- torch compile: https://huggingface.co/docs/transformers/en/perf_torch_compile
is the following the only way to use setfit with torch.compile?
```
model.model_body[0].auto_model =... | https://github.com/huggingface/setfit/issues/519 | closed | [] | 2024-05-03T19:19:21Z | 2024-06-02T20:30:34Z | null | geraldstanje |
huggingface/chat-ui | 1,097 | Katex fails to render math expressions from ChatGPT4. | I am using Chat UI version 0.8.3 and ChatGPT version gpt-4-turbo-2024-04-09.
ChatGPT is outputting formula delimiters as `\[`, `\]`, `\(`, `\)` and katex in the current version of ChatUI is not rendering them correctly. Based on my experiments, katex renders only formulas with `$` delimiters correctly.
I did a qu... | https://github.com/huggingface/chat-ui/issues/1097 | closed | [
"bug",
"help wanted",
"front"
] | 2024-05-03T08:19:40Z | 2024-11-22T12:18:44Z | 5 | haje01 |
huggingface/chat-ui | 1,096 | Error in login redirect | I am running chat-ui on an online VPS (Ubuntu 22).
I am stuck at the login redirection.
I went through the Google authorization page and confirmed my Gmail, then was redirected back to my main domain.
The problem is that it simply comes back with no action, not logged in, and the URL looks like this:
mydomain.com/login/callback?state=xxxxxxxxx
when... | https://github.com/huggingface/chat-ui/issues/1096 | open | [
"support"
] | 2024-05-02T22:19:13Z | 2024-05-07T20:50:28Z | 0 | abdalladorrah |
huggingface/trl | 1,614 | How to do fp16 training with PPOTrainer? | I modified the example from the official website to do PPO training with llama3 using lora. When I use fp16, the weights go to nan after the first update, which does not occur when using fp32.
Here is the code
```python
# 0. imports
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
fro... | https://github.com/huggingface/trl/issues/1614 | closed | [] | 2024-05-02T17:52:16Z | 2024-11-18T08:28:08Z | null | KwanWaiChung |
huggingface/optimum | 1,843 | Support for speech to text models. | ### Feature request
Hi, it would be really useful if speech to text models could be supported by optimum, specifically to ONNX. I saw a repo that managed to do it and they claimed they used optimum to do it.
https://huggingface.co/Xenova/speecht5_tts
Is there a way to do this?
### Motivation
I am finding it ve... | https://github.com/huggingface/optimum/issues/1843 | open | [
"feature-request",
"onnx"
] | 2024-05-02T11:43:49Z | 2024-10-14T12:25:52Z | 0 | JamesBowerXanda |
huggingface/datasets | 6,854 | Wrong example of usage when config name is missing for community script-datasets | As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name i... | https://github.com/huggingface/datasets/issues/6854 | closed | [
"bug"
] | 2024-05-02T06:59:39Z | 2024-05-03T15:51:59Z | 0 | albertvillanova |
huggingface/distil-whisper | 130 | How to set the target language for examples in README? | The code examples in the README do not make it obvious how to set the language of the audio to transcribe.
The default settings create garbled English text if the audio language is different. | https://github.com/huggingface/distil-whisper/issues/130 | open | [] | 2024-05-01T11:52:00Z | 2024-05-22T11:59:09Z | null | clstaudt |
huggingface/transformers | 30,596 | AutoModel: how to enable TP for extremely large models? | Hi, I have 8 V100s, but a single one cannot fit the InternVL 1.5 model, which has 28B parameters.
So I just wonder if I can fit it across the 8 V100s with TP?
I found that Deepspeed can be used to do tensor parallel like this:
```
# create the model
if args.pre_load_checkpoint:
model = model_class.fro... | https://github.com/huggingface/transformers/issues/30596 | closed | [] | 2024-05-01T10:06:45Z | 2024-06-09T08:03:23Z | null | MonolithFoundation |
huggingface/transformers | 30,595 | I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed; I can find the theory that model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code wrapping the transformers model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^ | ### System Info
I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed; I can find the theory that model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code wrapping the transformers model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Who can help?
i cannot... | https://github.com/huggingface/transformers/issues/30595 | closed | [] | 2024-05-01T09:17:58Z | 2024-05-01T09:31:39Z | null | ldh127 |
huggingface/transformers.js | 732 | What does "Error: failed to call OrtRun(). error code = 6." mean? I know it is ONNX related, but how to fix? | ### Question
I keep running into the same issue when using transformers.js Automatic Speech Recognition pipeline. I've tried solving it multiple ways. But pretty much hit a wall every time. I've done lots of googling, LLMs, and used my prior knowledge of how this stuff functions in python. But I can't seem to get it t... | https://github.com/huggingface/transformers.js/issues/732 | closed | [
"question"
] | 2024-05-01T07:01:06Z | 2024-05-11T09:18:35Z | null | jquintanilla4 |
huggingface/transformers | 30,591 | I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed; I can find the theory that model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code wrapping the transformers model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^ | ### Feature request
I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed; I can find the theory that model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code wrapping the transformers model in DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^
### Motivation
... | https://github.com/huggingface/transformers/issues/30591 | closed | [] | 2024-05-01T04:27:47Z | 2024-06-08T08:03:17Z | null | ldh127 |
huggingface/chat-ui | 1,093 | I want to get the HTML of a website https://bit.ly/4bgmLb9 in HuggingChat web search | I want to get the HTML of the website https://bit.ly/4bgmLb9 in HuggingChat web search. In Chrome, I can put https://bit.ly/4bgmLb9 in the address bar and get the result, but I do not know how to do that in HuggingChat web search.
I try in hugging-chat and the screenshot... | https://github.com/huggingface/chat-ui/issues/1093 | ... |
huggingface/peft | 1,693 | ... | With PEFT method, I use lora, loha and lokr for PEFT in [diffusers](https://github.com/huggingface/diffusers).
I have a question, how to convert a loha safetensor trained from diffusers to webui format?
In the training process:
the loading way:
`peft_config =... | https://github.com/huggingface/peft/issues/1693 | closed | [] | 2024-04-30T07:17:48Z | 2024-06-08T15:03:44Z | null | JIAOJIAYUASD |
huggingface/safetensors | 474 | How to fully load checkpointed weights in memory? | ### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- ... | https://github.com/huggingface/safetensors/issues/474 | closed | [] | 2024-04-29T21:30:37Z | 2024-04-30T22:12:29Z | null | goelayu |
huggingface/dataset-viewer | 2,754 | Return partial dataset-hub-cache instead of error? | `dataset-hub-cache` depends on multiple previous steps, and any error in one of them makes it fail. It provokes things like https://github.com/huggingface/moon-landing/issues/9799 (internal): in the datasets list, a dataset is not marked as "supporting the dataset viewer", whereas the only issue is that we didn't manag... | https://github.com/huggingface/dataset-viewer/issues/2754 | closed | [
"question",
"P2"
] | 2024-04-29T17:10:09Z | 2024-06-13T13:57:20Z | null | severo |
huggingface/datasets | 6,848 | Can't Download Common Voice 17.0 hy-AM | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/pyth... | https://github.com/huggingface/datasets/issues/6848 | open | [] | 2024-04-29T10:06:02Z | 2025-04-01T20:48:09Z | 3 | mheryerznkanyan |
huggingface/optimum | 1,839 | why does ORTModelForCausalLM assume new input length is 1 when past_key_values is passed | https://github.com/huggingface/optimum/blob/c55f8824f58db1a2f1cfc7879451b4743b8f206b/optimum/onnxruntime/modeling_decoder.py#L649
``` python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
if past_key_values is not None:
past_length = past_key_values[0][... | https://github.com/huggingface/optimum/issues/1839 | open | [
"question",
"onnxruntime"
] | 2024-04-29T07:06:04Z | 2024-10-14T12:28:51Z | null | cyh-ustc |
huggingface/diffusers | 7,813 | I feel confused about this TODO. How to pass timesteps as tensors? | https://github.com/huggingface/diffusers/blob/235d34cf567e78bf958344d3132bb018a8580295/src/diffusers/models/unets/unet_2d_condition.py#L918
| https://github.com/huggingface/diffusers/issues/7813 | closed | [
"stale"
] | 2024-04-29T03:46:21Z | 2024-11-23T00:19:17Z | null | ghost |
huggingface/datasets | 6,846 | Unimaginable super slow iteration | ### Describe the bug
Assuming there is a dataset with 52,000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset…? Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
n... | https://github.com/huggingface/datasets/issues/6846 | closed | [] | 2024-04-28T05:24:14Z | 2024-05-06T08:30:03Z | 1 | rangehow |
huggingface/lerobot | 112 | Do we want to use `transformers`? | I'd really go against establishing transformers as a dependency of lerobot and importing their whole library just to use the `PretrainedConfig` (or even other components). I think in this case it's very overkill and wouldn't necessarily fit our needs right now. The class is ~1000 lines of code - which we can copy into ... | https://github.com/huggingface/lerobot/issues/112 | closed | [
"question"
] | 2024-04-27T17:24:20Z | 2024-04-30T11:59:25Z | null | qgallouedec |
huggingface/evaluate | 582 | How to pass generation_kwargs to the TextGeneration evaluator? | How can I pass the generation_kwargs to the TextGeneration evaluator? | https://github.com/huggingface/evaluate/issues/582 | open | [] | 2024-04-25T16:09:46Z | 2024-04-25T16:09:46Z | null | swarnava112 |
huggingface/chat-ui | 1,074 | 503 error | Hello, I was trying to install the chat-ui
I searched for any documentation on how to handle that on my VPS.
I get error 500 after build, and it is not working with HTTPS although allow_insecure=false. | https://github.com/huggingface/chat-ui/issues/1074 | closed | [
"support"
] | 2024-04-25T15:34:07Z | 2024-04-27T14:58:45Z | 1 | abdalladorrah |
huggingface/chat-ui | 1,073 | Support for Llama-3-8B-Instruct model | hi,
The model meta-llama/Meta-Llama-3-8B-Instruct is unlisted; I'm not sure when it will be supported.
https://github.com/huggingface/chat-ui/blob/3d83131e5d03e8942f9978bf595a7caca5e2b3cd/.env.template#L229
thanks. | https://github.com/huggingface/chat-ui/issues/1073 | open | [
"question",
"models",
"huggingchat"
] | 2024-04-25T14:03:35Z | 2024-04-30T05:47:05Z | null | cszhz |
huggingface/chat-ui | 1,072 | [v0.8.3] serper, serpstack API, local web search not working | ## Context
I have serper.dev API key, serpstack API key and I have put it correctly in my `.env.local` file.
<img width="478" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/5082893a-7ecd-4ab5-9cb9-059875118dcd">
## Issue
However, even if I enable Web Search, it still does not reach ... | https://github.com/huggingface/chat-ui/issues/1072 | closed | [
"support"
] | 2024-04-25T13:24:40Z | 2024-05-09T16:28:15Z | 14 | adhishthite |
huggingface/diffusers | 7,775 | How to input gradio settings in Python | Hi.
I use **realisticStockPhoto_v20** on Fooocus with **sdxl_film_photography_style** lora and I really like the results.
Fooocus and other gradio implementations come with settings inputs that I want to utilize in Python as well. In particular, if this is my code:
```
device = "cuda"
model_path = "weights/reali... | https://github.com/huggingface/diffusers/issues/7775 | closed | [] | 2024-04-25T08:43:20Z | 2024-11-20T00:07:26Z | null | levoz92 |
huggingface/chat-ui | 1,069 | CohereForAI ChatTemplate | Now that there is official TGI support for CohereForAI/c4ai-command-r-v01, how do we use the chat template found in the tokenizer config in the UI? Or alternatively, is it possible to add the correct template for Cohere to PROMPTS.md? | https://github.com/huggingface/chat-ui/issues/1069 | open | [] | 2024-04-25T05:45:35Z | 2024-04-25T05:45:35Z | 0 | yanivshimoni89 |
huggingface/transformers.js | 727 | Preferred citation of Transformers.js | ### Question
Love the package, and am using it in research - I am wondering, does there exist a preferred citation format for the package to cite it in papers? | https://github.com/huggingface/transformers.js/issues/727 | open | [
"question"
] | 2024-04-24T23:07:20Z | 2024-04-24T23:21:13Z | null | ludgerpaehler |
huggingface/diarizers | 4 | How to save the finetuned model as a .bin file? | Hi,
I finetuned the pyannote-segmentation model for my usecase but it is saved as a model.safetensors file. Can I convert it to a pytorch_model.bin file? I am using whisperx to create speaker-aware transcripts and .safetensors isn't working with that library. Thanks! | https://github.com/huggingface/diarizers/issues/4 | closed | [] | 2024-04-24T20:50:19Z | 2024-04-30T21:02:32Z | null | anuragrawal2024 |
huggingface/transformers.js | 725 | How to choose a language's dialect when using `automatic-speech-recognition` pipeline? | ### Question
Hi, so I was originally using the transformers library (Python version) in my backend, but when refactoring my application for scale, it made more sense to move my implementation of Whisper from the backend to the frontend (for my specific use case). So I was thrilled when I saw that transformers.js supp... | https://github.com/huggingface/transformers.js/issues/725 | closed | [
"question"
] | 2024-04-24T09:44:38Z | 2025-11-06T20:36:01Z | null | jquintanilla4 |
huggingface/text-embeddings-inference | 248 | how to support gpu version 10.1 rather than 12.2 | ### Feature request
how to support gpu version 10.1 rather than 12.2
### Motivation
how to support gpu version 10.1 rather than 12.2
### Your contribution
how to support gpu version 10.1 rather than 12.2 | https://github.com/huggingface/text-embeddings-inference/issues/248 | closed | [] | 2024-04-24T08:49:45Z | 2024-04-26T13:02:44Z | null | fanqiangwei |
huggingface/diffusers | 7,766 | IP-Adapter FaceID Plus: how-to questions | https://github.com/huggingface/diffusers/blob/9ef43f38d43217f690e222a4ce0239c6a24af981/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L492
## error msg:
pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
AttributeError: 'list' obje... | https://github.com/huggingface/diffusers/issues/7766 | closed | [] | 2024-04-24T07:56:38Z | 2024-11-20T00:02:30Z | null | Honey-666 |
huggingface/peft | 1,673 | How to set Lora_dropout=0 when loading trained peft model for inference? | ### System Info
peft==0.10.0
transformers==4.39.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```pytho... | https://github.com/huggingface/peft/issues/1673 | closed | [] | 2024-04-24T07:47:19Z | 2024-05-10T02:22:17Z | null | flyliu2017 |
huggingface/optimum | 1,826 | Phi3 support | ### Feature request
Microsoft's new Phi-3 model, in particular the 128K-context mini model, is not supported by Optimum export.
Error is:
"ValueError: Trying to export a phi3 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer t... | https://github.com/huggingface/optimum/issues/1826 | closed | [] | 2024-04-23T15:54:21Z | 2024-05-24T13:53:08Z | 4 | martinlyons |
huggingface/datasets | 6,830 | Add a doc page for the convert_to_parquet CLI | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | https://github.com/huggingface/datasets/issues/6830 | closed | [
"documentation"
] | 2024-04-23T09:49:04Z | 2024-04-25T10:44:11Z | 0 | severo |
huggingface/transformers.js | 723 | 404 when trying Qwen in V3 | ### Question
This is probably just because V3 is a work in progress, but I wanted to make sure.
When trying to run Qwen 1.5 - 0.5B it works with the V2 script, but when swapping to V3 I get a 404 not found.
```
type not specified for model. Using the default dtype: q8.
GET https://huggingface.co/Xenova/Qwen1.5... | https://github.com/huggingface/transformers.js/issues/723 | open | [
"question"
] | 2024-04-22T19:14:17Z | 2024-05-28T08:26:09Z | null | flatsiedatsie |
huggingface/diffusers | 7,740 | How to get config of single_file | Hi,
Is there any way to get the equivalent of model_index.json from a single_file? | https://github.com/huggingface/diffusers/issues/7740 | closed | [] | 2024-04-22T14:00:21Z | 2024-04-22T23:26:50Z | null | suzukimain |
huggingface/diffusers | 7,724 | RuntimeError: Error(s) in loading state_dict for AutoencoderKL: Missing Keys! How to solve? | ### Describe the bug
I am trying to get a Lora to run locally on my computer by using this code: https://github.com/hollowstrawberry/kohya-colab and changing it to a local format. When I get to the loading of the models, it gives an error. It seems that the AutoEncoder model has changed, but I do not know how to adjust...
"bug"
] | 2024-04-19T13:27:17Z | 2024-04-22T08:45:24Z | null | veraburg |
huggingface/optimum | 1,821 | Idefics2 Support in Optimum for ONNX export | ### Feature request
With reference to the new Idefics2 model- https://huggingface.co/HuggingFaceM4/idefics2-8b
I would like to export it to ONNX which is currently not possible.
Please enable conversion support. Current Error with pip install transformers via GIT
```
Traceback (most recent call last):
File "... | https://github.com/huggingface/optimum/issues/1821 | open | [
"feature-request",
"onnx"
] | 2024-04-19T07:12:41Z | 2025-02-18T19:25:11Z | 8 | gtx-cyber |
huggingface/alignment-handbook | 158 | How to work with local data | I downloaded a dataset from hf. I want to load it locally, but it still tries to download it from hf and place it into the cache.
How can I use the local one I already downloaded?
Thank you. | https://github.com/huggingface/alignment-handbook/issues/158 | open | [] | 2024-04-18T10:26:14Z | 2024-05-14T11:20:55Z | null | pretidav |
huggingface/optimum-quanto | 182 | Can I use quanto on AMD GPU? | Does quanto work with AMD GPUs ? | https://github.com/huggingface/optimum-quanto/issues/182 | closed | [
"question",
"Stale"
] | 2024-04-18T03:06:54Z | 2024-05-25T01:49:56Z | null | catsled |
huggingface/accelerate | 2,680 | How to get pytorch_model.bin from checkpoint files without zero_to_fp32.py | https://github.com/huggingface/accelerate/issues/2680 | closed | [] | 2024-04-17T11:30:32Z | 2024-04-18T22:40:14Z | null | lipiji | |
huggingface/datasets | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (support... | https://github.com/huggingface/datasets/issues/6819 | open | [
"enhancement"
] | 2024-04-17T11:19:47Z | 2024-04-17T11:19:47Z | 0 | severo |
huggingface/optimum | 1,818 | Request for ONNX Export Support for Blip Model in Optimum | Hi Team,
I hope this message finds you well.
I've encountered an issue while attempting to export the Blip model to the ONNX format using Optimum. I used the command below.
`! optimum-cli export onnx -m Salesforce/blip-itm-base-coco --task feature-extraction blip_onnx`
It appears that Optimum currently l... | https://github.com/huggingface/optimum/issues/1818 | open | [
"feature-request",
"question",
"onnx"
] | 2024-04-17T08:55:45Z | 2024-10-14T12:26:36Z | null | n9s8a |
huggingface/transformers.js | 715 | How to unload/destroy a pipeline? | ### Question
I tried to find how to unload a pipeline to free up memory in the documentation, but couldn't find a mention of how to do that properly.
Is there a proper way to "unload" a pipeline?
I'd be happy to add the answer to the documentation. | https://github.com/huggingface/transformers.js/issues/715 | closed | [
"question"
] | 2024-04-16T09:02:05Z | 2024-05-29T09:32:23Z | null | flatsiedatsie |
huggingface/transformers.js | 714 | Reproducing model conversions | ### Question
I'm trying to reproduce the conversion of `phi-1_5_dev` to better understand the process. I'm running into a few bugs / issues along the way that I thought it'd be helpful to document.
The model [`@Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev) states:
> https://huggingface.co/sus... | https://github.com/huggingface/transformers.js/issues/714 | open | [
"question"
] | 2024-04-15T15:02:33Z | 2024-05-10T14:26:00Z | null | thekevinscott |
huggingface/sentence-transformers | 2,594 | What is the maximum number of sentences that a fast cluster can cluster? | What is the maximum number of sentences that a fast cluster can cluster? When I cluster 2 million sentences, the cluster gets killed. | https://github.com/huggingface/sentence-transformers/issues/2594 | open | [] | 2024-04-15T09:55:06Z | 2024-04-15T09:55:06Z | null | BinhMinhs10 |
huggingface/dataset-viewer | 2,721 | Help dataset owner to chose between configs and splits? | See https://huggingface.slack.com/archives/C039P47V1L5/p1713172703779839
> Am I correct in assuming that if you specify a "config" in a dataset, only the given config is downloaded, but if you specify a split, all splits for that config are downloaded? I came across it when using facebook's belebele (https://hugging... | https://github.com/huggingface/dataset-viewer/issues/2721 | open | [
"question",
"P2"
] | 2024-04-15T09:51:43Z | 2024-05-24T15:17:51Z | null | severo |
huggingface/diffusers | 7,676 | How to determine the type of file, such as checkpoint, etc. | Hello.
Is there some kind of script that determines the type of file "checkpoint", "LORA", "textual_inversion", etc.? | https://github.com/huggingface/diffusers/issues/7676 | closed | [] | 2024-04-14T23:58:08Z | 2024-04-15T02:50:43Z | null | suzukimain |
huggingface/diffusers | 7,670 | How to use IDDPM in diffusers ? | The code base is here:
https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py | https://github.com/huggingface/diffusers/issues/7670 | closed | [
"should-move-to-discussion"
] | 2024-04-14T12:30:34Z | 2024-11-20T00:17:18Z | null | jiarenyf |
huggingface/transformers.js | 713 | Help understanding logits and model vocabs | ### Question
I'm trying to write a custom `LogitsProcessor` and have some questions. For reference, I'm using [`Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev). I'm trying to implement a custom logic for white or blacklisting tokens, but running into difficulties understanding how to interpret token ... | https://github.com/huggingface/transformers.js/issues/713 | closed | [
"question"
] | 2024-04-13T21:06:14Z | 2024-04-14T15:17:43Z | null | thekevinscott |
huggingface/lighteval | 155 | How to run a 30B+ model with lighteval when accelerate launch fails with OOM? | CUDA memory OOM when I launch an evaluation for a 30B model using lighteval.
What's the correct config for it? | https://github.com/huggingface/lighteval/issues/155 | closed | [] | 2024-04-13T03:49:20Z | 2024-05-04T11:18:38Z | null | xiechengmude |
huggingface/transformers | 30,213 | Mamba: which tokenizer has been saved and how to use it? | ### System Info
Hardware independent.
### Who can help?
@ArthurZucker
I described the doubts in the link below around 1 month ago, but maybe model-hub discussions are not so active, so I am posting it here as a repo issue. Please let me know where to discuss it :)
https://huggingface.co/state-spaces/mamba-2.8b-hf/... | https://github.com/huggingface/transformers/issues/30213 | closed | [] | 2024-04-12T11:28:17Z | 2024-05-17T13:13:12Z | null | javiermcebrian |
huggingface/sentence-transformers | 2,587 | Implementing Embedding Quantization for Dynamic Serving Contexts | I'm currently exploring embedding quantization strategies to enhance storage and computation efficiency while maintaining high accuracy. Specifically, I'm looking at integrating these strategies with Infinity (https://github.com/michaelfeil/infinity/discussions/198), a high-throughput, low-latency REST API for serving ... | https://github.com/huggingface/sentence-transformers/issues/2587 | open | [
"question"
] | 2024-04-11T11:03:23Z | 2024-04-12T07:28:48Z | null | Nookbe |
huggingface/diffusers | 7,636 | how to use the controlnet sdxl tile model in diffusers | ### Describe the bug
I want to use [this model](https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1) to make my slightly blurry photos clear, so I found this model.
I followed the code [here](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile), but as the model mentioned above is XL, not... | https://github.com/huggingface/diffusers/issues/7636 | closed | [
"bug",
"stale"
] | 2024-04-11T03:20:42Z | 2024-06-29T13:26:58Z | null | xinli2008 |
huggingface/optimum-quanto | 161 | Question: any plan to formally support smooth quantization and make it more general | Awesome work!
I noticed there is a SmoothQuant implementation under [external](https://github.com/huggingface/quanto/tree/main/external/smoothquant). Currently, its implementation seems to be model-specific; we can only apply smoothing on particular `Linear` layers.
However, in general, the smooth can be applied on any `Linear` ... | https://github.com/huggingface/optimum-quanto/issues/161 | closed | [
"question",
"Stale"
] | 2024-04-11T02:45:31Z | 2024-05-18T01:49:52Z | null | yiliu30 |
huggingface/accelerate | 2,647 | How to use deepspeed with dynamic batch? | ### System Info
```Shell
- `Accelerate` version: 0.29.1
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/yuchao/miniconda3/envs/TorchTTS/bin/accelerate
- Python version: 3.10.13
- Numpy version: 1.23.5
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- PyTorch XPU availab... | https://github.com/huggingface/accelerate/issues/2647 | closed | [] | 2024-04-10T09:09:53Z | 2025-05-11T15:07:27Z | null | npuichigo |
huggingface/transformers.js | 690 | Is top-level await necessary in the v3 branch? | ### Question
I saw the excellent performance of WebGPU, so I tried to install xenova/transformers.js#v3 as a dependency in my project.
I found that v3 uses the top-level await syntax. If I can't restrict users to using the latest browser version, I have to make it compatible (using `vite-plugin-top-level-await` o... | https://github.com/huggingface/transformers.js/issues/690 | closed | [
"question"
] | 2024-04-10T08:49:32Z | 2024-04-11T17:18:42Z | null | ceynri |
huggingface/optimum-quanto | 158 | How does quanto support int8 conv2d and linear? | Hi, I looked into the code and didn't find any CUDA kernel related to conv2d and linear. How did you implement the CUDA backend for conv2d/linear? Thanks | https://github.com/huggingface/optimum-quanto/issues/158 | closed | [
"question"
] | 2024-04-10T05:41:43Z | 2024-04-11T09:26:35Z | null | zhexinli |
huggingface/transformers.js | 689 | Abort the audio recognition process | ### Question
Hello! How can I stop the audio file recognition process while keeping the model loaded? If I terminate the worker, I have to reload the model to start recognizing a new audio file. I need either the ability to send the pipeline a command to stop the recognition process, or the abil... | https://github.com/huggingface/transformers.js/issues/689 | open | [
"question"
] | 2024-04-10T02:51:37Z | 2024-04-20T06:09:11Z | null | innoware11 |
huggingface/transformers | 30,154 | Question about how to write code for trainer and dataset for multi-gpu | ### System Info
- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Task... | https://github.com/huggingface/transformers/issues/30154 | closed | [] | 2024-04-10T00:08:00Z | 2024-04-10T22:57:53Z | null | zch-cc |
huggingface/accelerate | 2,643 | How to use gather_for_metrics for object detection models? | ### Reproduction
I used the `gather_for_metrics` function as follows:
```python
predictions, ground_truths = accelerator.gather_for_metrics((predictions, ground_truths))
```
And I've got the error:
```
accelerate.utils.operations.DistributedOperationException: Impossible to apply the desired operation due to... | https://github.com/huggingface/accelerate/issues/2643 | closed | [] | 2024-04-09T23:15:20Z | 2024-04-30T07:48:36Z | null | yann-rdgz |
huggingface/candle | 2,033 | How to use CUDA as the backend in `candle-wasm-examples/llama2-c` ? | How to use CUDA as the backend in `candle-wasm-examples/llama2-c` ?
In `candle-wasm-examples/llama2-c`, I made the changes shown below.
```diff
--- a/candle-wasm-examples/llama2-c/Cargo.toml
+++ b/candle-wasm-examples/llama2-c/Cargo.toml
@@ -9,7 +9,7 @@ categories.workspace = true
license.workspace = true
... | https://github.com/huggingface/candle/issues/2033 | closed | [] | 2024-04-09T16:16:55Z | 2024-04-12T08:26:24Z | null | wzzju |
huggingface/optimum | 1,804 | advice for simple onnxruntime script for ORTModelForVision2Seq (or separate encoder/decoder) | I am trying to use implement this [class ](https://github.com/huggingface/optimum/blob/69af5dbab133f2e0ae892721759825d06f6cb3b7/optimum/onnxruntime/modeling_seq2seq.py#L1832) in C++ because unfortunately I didn't find any C++ implementation for this.
Therefore, my current approach is to revert this class and the au... | https://github.com/huggingface/optimum/issues/1804 | open | [
"question",
"onnxruntime"
] | 2024-04-09T15:14:40Z | 2024-10-14T12:41:15Z | null | eduardatmadenn |
huggingface/chat-ui | 997 | Community Assistants | Hi, I've looked through all the possible issues but I didn't find what I was looking for.
On self-hosted deployments, is the option to have community assistants such as the ones on https://huggingface.co/chat/ not available? I've also noticed that when I create Assistants on my side they do not show up on community tabs eit...
"help wanted",
"assistants"
] | 2024-04-09T12:44:49Z | 2024-04-23T06:09:47Z | 2 | Coinficient |
huggingface/evaluate | 570 | [Question] How to have no preset values sent into `.compute()` | We have a use-case, https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/llm_harness_mistral_arc.py,
where the default feature input types for `evaluate.Metric` are not set, and we get something like this in our `llm_harness_mistral_arc/llm_harness_mistral_arc.py`
```python
import evaluate
import dat... | https://github.com/huggingface/evaluate/issues/570 | open | [] | 2024-04-08T22:58:41Z | 2024-04-08T23:54:42Z | null | alvations |
huggingface/transformers | 30,122 | What is the default multi-GPU training type? | ### System Info
NA
### Who can help?
@ArthurZucker , @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
... | https://github.com/huggingface/transformers/issues/30122 | closed | [] | 2024-04-08T11:45:59Z | 2024-05-10T10:35:41Z | null | RonanKMcGovern |
huggingface/optimum | 1,798 | Issue Report: Unable to Export Qwen Model to ONNX Format in Optimum | ### System Info
```shell
Optimum Version: 1.18.0
Python Version: 3.8
Platform: Windows, x86_64
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
I am writing to report an issue I encountered while attempting to export a Qwen model to ONNX format using Optimum.
Error message:
" ValueError: Tryin... | https://github.com/huggingface/optimum/issues/1798 | open | [
"bug"
] | 2024-04-08T11:36:09Z | 2024-04-08T11:36:09Z | 0 | Harini-Vemula-2382 |
huggingface/chat-ui | 986 | Github actions won't push built docker images on releases | We currently have a [github actions workflow](https://github.com/huggingface/chat-ui/blob/main/.github/workflows/build-image.yml) that builds an image on every push to `main` and tags it with `latest` and the commit id. [(see here)](https://github.com/huggingface/chat-ui/pkgs/container/chat-ui/versions)
The workflow... | https://github.com/huggingface/chat-ui/issues/986 | closed | [
"help wanted",
"CI/CD"
] | 2024-04-08T07:51:13Z | 2024-04-08T11:27:42Z | 2 | nsarrazin |
huggingface/candle | 2,025 | How to specify which graphics card to run a task on in a server with multiple graphics cards? | https://github.com/huggingface/candle/issues/2025 | closed | [] | 2024-04-07T10:48:35Z | 2024-04-07T11:05:52Z | null | lijingrs | |
huggingface/text-embeddings-inference | 229 | Question: How to add a prefix to the underlying server | I've managed to run the text embeddings inference perfectly using the already built docker images and I'm trying to allow it to our internal components
Right now they're sharing the following behavior
Myhost.com/modelname/v1/embeddings
I was wondering if this "model name" is possible to add as a prefix inside ... | https://github.com/huggingface/text-embeddings-inference/issues/229 | closed | [] | 2024-04-06T17:29:59Z | 2024-04-08T09:14:40Z | null | Ryojikn |
huggingface/transformers.js | 685 | Transformers.js seems to need an internet connection when it shouldn't? (Error: no available backend found.) | ### Question
What is the recommended way to get Transformers.js to work even when, later on, there is no internet connection?
Is it using a service worker? Or are there other (perhaps hidden) settings for managing caching of files?
I'm assuming here that the `Error: no available backend found` error message is r... | https://github.com/huggingface/transformers.js/issues/685 | open | [
"question"
] | 2024-04-06T12:40:15Z | 2024-09-03T01:22:15Z | null | flatsiedatsie |