| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 866 | compat with transformers >= 4.40 and tokenizers >= 0.19 | ### Question
This is probably a known issue, as I'm aware that this project lags a bit behind the fast changes being made in the python transformers library, but I wanted to document a specific compatibility issue I hit:
Tokenizers 0.19 introduced some breaking changes which result in different outputs for (at le... | https://github.com/huggingface/transformers.js/issues/866 | open | [
"question"
] | 2024-07-27T18:56:22Z | 2024-08-30T08:34:01Z | null | joprice |
huggingface/chat-ui | 1,371 | Oobabooga server and chat-ui producing random gibberish with OpenAI API? | Oobabooga text-generation-webui is being used as the inference engine with the OpenAI API endpoint. Please see below:
```
**_PROMPT START_**
thorium oxide for a catalyst bed
**_PROMPT END_**
**_RESPONSE START_**
I am writing a story set in the world of Harry Potter. The main character is a Muggle-born wit... | https://github.com/huggingface/chat-ui/issues/1371 | open | [] | 2024-07-27T12:38:06Z | 2024-07-27T15:10:00Z | 2 | cody151 |
huggingface/chat-ui | 1,368 | No way to "Continue Generating" | Once the text generation finishes, there appears to be no way to continue generating: the submit button is greyed out and clicking it just errors out. I am using an OpenAI endpoint in Koboldcpp with a local Llama 3.1. | https://github.com/huggingface/chat-ui/issues/1368 | open | [
"question"
] | 2024-07-26T18:35:05Z | 2024-11-27T03:48:09Z | null | cody151 |
huggingface/huggingface-llama-recipes | 23 | How to run Llama 8B/70B using FP8 | Are there instructions available for converting to FP8?
I'd like to try converting both the 8B and 70B to FP8 and compare.
Thank you! | https://github.com/huggingface/huggingface-llama-recipes/issues/23 | open | [] | 2024-07-26T15:54:29Z | 2024-10-01T06:03:49Z | null | vgoklani |
huggingface/chat-ui | 1,367 | iframe throws 403 error when sending a message | ## Issue
**Use case:** I would like to embed the Chat UI in an iframe in Qualtrics.
**Issue:** Sending a message from the Chat UI in an iframe results in 403 error with the message below.
> You don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead.
... | https://github.com/huggingface/chat-ui/issues/1367 | open | [
"support"
] | 2024-07-26T13:10:36Z | 2024-08-13T17:22:36Z | 6 | rodrigobdz |
huggingface/chat-ui | 1,366 | Koboldcpp Endpoint support | When trying to use koboldcpp as the endpoint it throws an error
```
[
{
"code": "invalid_union_discriminator",
"options": [
"anthropic",
"anthropic-vertex",
"aws",
"openai",
"tgi",
"llamacpp",
"ollama",
"vertex",
"genai",
"cloudfl... | https://github.com/huggingface/chat-ui/issues/1366 | closed | [
"question",
"models"
] | 2024-07-26T12:13:24Z | 2024-07-26T13:57:13Z | null | cody151 |
huggingface/datasets | 7,070 | How does set_transform affect batch size? | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_feat... | https://github.com/huggingface/datasets/issues/7070 | open | [] | 2024-07-25T15:19:34Z | 2024-07-25T15:19:34Z | 0 | VafaKnm |
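For reference (not part of the issue): `set_transform` is applied lazily to whatever slice of rows is accessed, so the training batch size is still decided by the DataLoader/Trainer, not by the transform. A minimal sketch with a toy stand-in for the real audio processor:

```python
from datasets import Dataset

ds = Dataset.from_dict({"audio": [[0.0] * 16000] * 8})

def prepare_dataset(batch):
    # `batch` holds exactly the rows being accessed; the transform does not
    # choose the batch size, the caller (e.g. the DataLoader) does.
    batch["input_features"] = [[sum(a) / len(a)] for a in batch["audio"]]  # toy processor
    return batch

ds.set_transform(prepare_dataset)
print(len(ds[:2]["input_features"]))  # 2 — the transform ran on the accessed slice only
```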
huggingface/chat-ui | 1,361 | Unhandled error event upon start with Koboldcpp | I have MongoDB set up, as well as Koboldcpp running Llama 3.1 8B on Windows for inference, but chat-ui will not start:
```
yas@zen:~/chat-ui$ npm run dev -- --open
> chat-ui@0.9.1 dev
> vite dev --open
VITE v4.5.3 ready in 2735 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expos... | https://github.com/huggingface/chat-ui/issues/1361 | closed | [
"support"
] | 2024-07-25T14:32:44Z | 2024-07-26T12:11:50Z | 1 | cody151 |
huggingface/lighteval | 238 | What is `qem` for gsm8k evaluation? | As titled.
Thank you! | https://github.com/huggingface/lighteval/issues/238 | closed | [] | 2024-07-25T14:30:44Z | 2024-09-15T02:19:57Z | null | shizhediao |
huggingface/optimum | 1,972 | Whisper-large-v3 transcript is trimmed | ### System Info
```shell
optimum 1.21.2
Ubuntu 22.04.4 LTS
CUDA 12.3
cuda-toolkit 11.7
onnxruntime 1.18.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/S... | https://github.com/huggingface/optimum/issues/1972 | open | [
"bug"
] | 2024-07-25T12:04:18Z | 2024-07-31T08:05:02Z | 4 | yv0vaa |
huggingface/lerobot | 341 | question: expected performance of vq-bet? | Hi,
Thank you to the LeRobot community for maintaining such a fantastic codebase. My research group and I have greatly benefited from your efforts. In my current project, I am using the repository primarily for analyzing algorithms across different environments. I wanted to raise an issue I am encountering with VQ-B... | https://github.com/huggingface/lerobot/issues/341 | closed | [
"question",
"policies",
"stale"
] | 2024-07-25T04:35:06Z | 2025-10-07T02:27:24Z | null | Jubayer-Hamid |
huggingface/text-generation-inference | 2,302 | how to use the model's checkpoint in local fold? | ### System Info
ghcr.io/huggingface/text-generation-inference 2.0.4
platform windows10
Docker version 27.0.3
llm model:lllyasviel/omost-llama-3-8b-4bits
cuda 12.3
gpu nvidia rtx A6000
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modificatio... | https://github.com/huggingface/text-generation-inference/issues/2302 | open | [
"Stale"
] | 2024-07-25T04:26:44Z | 2024-08-25T01:57:54Z | null | zk19971101 |
huggingface/diffusers | 8,957 | StableDiffusionSafetyChecker ignores `attn_implementation` load kwarg | ### Describe the bug
`transformers` added `sdpa` and FA2 for CLIP model in https://github.com/huggingface/transformers/pull/31940. It now initializes the vision model like https://github.com/huggingface/transformers/blob/85a1269e19af022e04bc2aad82572cd5a9e8cdd9/src/transformers/models/clip/modeling_clip.py#L1143.
... | https://github.com/huggingface/diffusers/issues/8957 | closed | [
"bug",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-24T19:38:23Z | 2024-11-19T21:06:53Z | 8 | jambayk |
huggingface/transformers.js | 862 | how to retain spiece token markers | ### Question
When evaluating a model that uses sentencepiece with transformers.js, I do not get the `▁` marker included in the output as I do when running from Python. I'm using the qanastek/pos-french-camembert model to do POS tagging and have situations where a single word such as a verb with a tense suffix is...
"question"
] | 2024-07-24T16:01:44Z | 2024-07-24T17:14:58Z | null | joprice |
huggingface/transformers | 32,186 | callback to implement how the predictions should be stored | https://github.com/huggingface/transformers/issues/32186 | closed | [] | 2024-07-24T11:36:26Z | 2024-07-24T11:39:13Z | null | Imran-imtiaz48 | |
huggingface/optimum | 1,969 | Latest Optimum library is not compatible with latest Transformers | ### System Info
```shell
Any system that can install those libraries
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (... | https://github.com/huggingface/optimum/issues/1969 | closed | [
"bug"
] | 2024-07-24T06:49:07Z | 2024-08-20T09:06:19Z | 1 | lanking520 |
huggingface/diffusers | 8,953 | Why is loading LoRA weights so slow? | I used diffusers to load LoRA weights, but it is much too slow to finish.
diffusers version: 0.29.2
I tested another version of diffusers, 0.23.0, without peft installed, and the time is decent.
```
t1 = time.time()
pipe.load_lora_weights("/data/**/lora_weights/lcm-lora-sdxl/", weight_name="pytorch_lora_weights.safeten... | https://github.com/huggingface/diffusers/issues/8953 | closed | [
"peft"
] | 2024-07-24T06:16:42Z | 2024-10-15T15:23:34Z | 18 | zengjie617789 |
huggingface/accelerate | 2,956 | How to run Vision Model(Like llava) based on pippy? | Currently I tried to apply model parallelism based on pippy and I refer to the given example,
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import PartialState, prepare_pippy
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-chat-hf", low_... | https://github.com/huggingface/accelerate/issues/2956 | closed | [] | 2024-07-24T03:13:21Z | 2024-09-13T15:06:32Z | null | JerryLu991223 |
huggingface/transformers.js | 859 | JavaScript code completion model | ### Question
Currently we have two Python code completion models:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/examples/code-completion/src/App.jsx#L9-L13
And since we are doing JavaScript here, I would like a model optimized on JavaScript. Does anyone have a JavaScript... | https://github.com/huggingface/transformers.js/issues/859 | open | [
"question"
] | 2024-07-23T13:51:58Z | 2024-07-23T13:51:58Z | null | kungfooman |
huggingface/dataset-viewer | 2,994 | Compute leaks between splits? | See https://huggingface.co/blog/lbourdois/lle
Also: should we find the duplicate rows? | https://github.com/huggingface/dataset-viewer/issues/2994 | open | [
"question",
"feature request",
"P2"
] | 2024-07-23T13:00:39Z | 2025-06-24T11:39:37Z | null | severo |
huggingface/datasets | 7,066 | One subset per file in repo ? | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jso... | https://github.com/huggingface/datasets/issues/7066 | open | [] | 2024-07-23T12:43:59Z | 2025-06-26T08:24:50Z | 1 | lhoestq |
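Until per-file subsets are supported natively, a workaround consistent with this layout is to load each file as its own dataset (`plants.jsonl` below is a hypothetical sibling of `animals.jsonl`):

```python
from datasets import load_dataset

# point data_files at one file per "subset"
animals = load_dataset("json", data_files="many_subsets_dataset/animals.jsonl", split="train")
plants = load_dataset("json", data_files="many_subsets_dataset/plants.jsonl", split="train")
```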
huggingface/transformers | 32,145 | callback to implement how the predictions should be stored. | I am exploring distributed inference capabilities with the Hugging Face Trainer for transformers. I need to do distributed inference across multiple devices or nodes and save the predictions to a file. However, after reviewing the available callbacks, I did not find any that facilitate this specific task. Furthermore, ... | https://github.com/huggingface/transformers/issues/32145 | open | [
"Feature request"
] | 2024-07-22T21:32:22Z | 2024-07-24T09:23:07Z | null | sachinya00 |
huggingface/diffusers | 8,930 | StableDiffusionXLControlNetImg2ImgPipeline often fails to respect "pose" control images | ### Describe the bug
Hello,
Using [StableDiffusionXLControlNetImg2ImgPipeline](https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetImg2ImgPipeline), and passing a "pose" control image often fails to produce an output image that maintains the pose.
I couldn'... | https://github.com/huggingface/diffusers/issues/8930 | open | [
"bug",
"stale"
] | 2024-07-22T13:48:48Z | 2024-09-21T07:48:04Z | 14 | Clement-Lelievre |
huggingface/diffusers | 8,924 | Adding Differential Diffusion to Kolors, Auraflow, HunyuanDiT | Diffusers recently added support for the following models:
- [x] [Kolors](https://github.com/huggingface/diffusers/pull/8812) (@tuanh123789)
- [x] [AuraFlow](https://github.com/huggingface/diffusers/pull/8796)
- [x] [HunyuanDiT](https://github.com/huggingface/diffusers/pull/8240) (@MnCSSJ4x)
A few weeks ago, we a... | https://github.com/huggingface/diffusers/issues/8924 | closed | [
"good first issue",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-22T07:17:58Z | 2024-10-31T19:18:32Z | 28 | a-r-r-o-w |
huggingface/candle | 2,349 | What is the equivalent of interpolate from torch.nn | Hi,
I need some help with translating things written in Python:
e.g. I have a statement such as:
```
import torch.nn.functional as F
result[mask] = result[mask] + F.interpolate(cur_result.permute(3,0,1,2).unsqueeze(0).contiguous(), (H, W, D), mode='trilinear', align_corners=False).squeeze(0).permute(1,2,3,0).co... | https://github.com/huggingface/candle/issues/2349 | open | [] | 2024-07-21T22:14:33Z | 2024-07-21T22:14:33Z | null | wiktorkujawa |
huggingface/candle | 2,347 | How to specify a generator for the randn function | PyTorch:
```python
noise = torch.randn(x_start.size(), dtype=x_start.dtype, layout=x_start.layout, generator=torch.manual_seed(seed)).to(x_start.device)
```
How do I specify the seed in candle? | https://github.com/huggingface/candle/issues/2347 | closed | [] | 2024-07-21T10:30:35Z | 2024-07-21T12:33:23Z | null | jk2K |
huggingface/chat-ui | 1,354 | How do I use chat-ui with RAG (retrieval-augmented generation)? | I applied the RAG technique to the "HuggingFaceH4/zephyr-7b-beta" model and used MongoDB Atlas as a knowledge base, but I didn't find anything about how to connect chat-ui to pass the top-k documents to the model so that it can use context to answer questions. | https://github.com/huggingface/chat-ui/issues/1354 | open | [] | 2024-07-21T01:19:37Z | 2024-08-22T11:25:50Z | 1 | pedro21900 |
huggingface/chat-ui | 1,353 | Llama-3-70b - Together.ai failure | 
This config used to work on the older hugging chat 0.8.2
All my other models (OpenAI, Anthropic) work fine; it's just the Llama-3-70b from Together that fails.
```
{
"name" : "meta-llama/Meta-Llama-3-70B-Instruct-... | https://github.com/huggingface/chat-ui/issues/1353 | open | [
"support",
"models"
] | 2024-07-20T19:30:16Z | 2024-07-25T13:45:54Z | 4 | gururise |
huggingface/diffusers | 8,907 | [Tests] Improve transformers model test suite coverage | Currently, we have different variants of transformers: https://github.com/huggingface/diffusers/tree/main/src/diffusers/models/transformers/. However, we don't have test suites for each of them: https://github.com/huggingface/diffusers/tree/main/tests/models/transformers/.
We are seeking contributions from the comm... | https://github.com/huggingface/diffusers/issues/8907 | closed | [
"Good second issue",
"contributions-welcome"
] | 2024-07-19T10:14:34Z | 2024-08-19T03:00:12Z | 6 | sayakpaul |
huggingface/diffusers | 8,906 | there is no qk_norm in SD3Transformer2DModel. Is that right? | ### Describe the bug
there is no qk_norm in SD3Transformer2DModel. Is that right?
self.attn = Attention(
query_dim=dim,
cross_attention_dim=None,
added_kv_proj_dim=dim,
dim_head=attention_head_dim // num_attention_heads,
heads=num_attention_he... | https://github.com/huggingface/diffusers/issues/8906 | closed | [
"bug"
] | 2024-07-19T09:18:05Z | 2024-10-31T19:19:24Z | 3 | heart-du |
huggingface/lerobot | 334 | where to set the initial joint (position + angle) information when controlling real aloha robot? | ### System Info
```Shell
ubuntu 20
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Hi guys, I am using PR #316, written by Cadene, to control the real aloha robot. When running the command: python control_robot.py teleop...
"question",
"stale"
] | 2024-07-19T08:53:39Z | 2025-10-23T02:29:22Z | null | cong1024 |
huggingface/distil-whisper | 145 | How to load a fine-tuned model for inference? | @sanchit-gandhi
I used the script from https://github.com/huggingface/distil-whisper/tree/main/training/flax/finetuning_scripts to fine-tune a model and obtained a model named flax_model.msgpack. How can I load this model for inference? Additionally, why did the size of the fine-tuned model increase? | https://github.com/huggingface/distil-whisper/issues/145 | open | [] | 2024-07-19T02:21:10Z | 2024-10-21T17:13:45Z | null | xinliu9451 |
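One plausible way to load a `flax_model.msgpack` checkpoint for PyTorch inference is `from_pretrained(..., from_flax=True)`; a sketch with hypothetical checkpoint paths. (The size growth is plausibly the checkpoint being serialized in float32 rather than the half precision used during training.)

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# from_flax=True converts flax_model.msgpack on the fly (requires flax installed)
model = WhisperForConditionalGeneration.from_pretrained("path/to/finetuned", from_flax=True)
processor = WhisperProcessor.from_pretrained("path/to/finetuned")
model.save_pretrained("path/to/pytorch-ckpt")  # re-save as PyTorch weights to skip conversion next time
```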
huggingface/diffusers | 8,900 | How to load sd_xl_refiner_1.0.safetensors use from_single_file | ### Describe the bug
```
Traceback (most recent call last):
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 300, in <module>
A1111(local_dir, 'sd_xl_base_1.0.safetensors', steps=50, cfs_scale=8)
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 235, in A1111
... | https://github.com/huggingface/diffusers/issues/8900 | closed | [
"bug"
] | 2024-07-19T01:58:05Z | 2024-07-26T10:39:07Z | null | 631068264 |
huggingface/transformers.js | 854 | How do you delete a downloaded model? | ### Question
How do you delete a downloaded model that was downloaded to the IndexDB?
Thanks,
Ash | https://github.com/huggingface/transformers.js/issues/854 | closed | [
"question"
] | 2024-07-18T22:10:51Z | 2024-07-19T16:23:21Z | null | AshD |
huggingface/candle | 2,341 | how to use system prompt with the llama example? | Hi, I'm trying to pass a chat dialog in the [LLama3 format](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L222) to the [llama example](https://github.com/huggingface/candle/tree/main/candle-examples/examples/llama) via -prompt, the string is as follows:
```
<|begin_of_text|><|start_header_id|>sy... | https://github.com/huggingface/candle/issues/2341 | open | [] | 2024-07-18T10:44:54Z | 2024-07-18T14:35:09Z | null | evilsocket |
huggingface/text-generation-inference | 2,246 | Can't start server with small --max-total-tokens, but works fine with a big setting | When I try to run CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --port 6634 --model-id /models/ --max-concurrent-requests 128 --max-input-length 64 --max-total-tokens 128 --max-batch-prefill-tokens 128 --cuda-memory-fraction 0.95, it says:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.0... | https://github.com/huggingface/text-generation-inference/issues/2246 | closed | [
"question",
"Stale"
] | 2024-07-18T07:03:31Z | 2024-08-24T01:52:30Z | null | rooooc |
huggingface/diffusers | 8,881 | How to Generate Multiple Image Inference in Instruct Pix2Pix | Hello, I am currently working on how to utilize Instruct Pix2Pix for augmentation.
For this purpose, I want to generate images by putting a tensor of shape [64,3,84,84] (batch, channel, width, height) into the Instruct Pix2Pix pipeline, but the Instruct Pix2Pix provided by diffusers can only edit one image at a time.
Is ... | https://github.com/huggingface/diffusers/issues/8881 | closed | [] | 2024-07-17T07:47:09Z | 2024-09-02T00:45:15Z | null | E-SJ |
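For what it's worth, the pipeline's `image` argument also accepts a batched tensor, so one plausible approach is to chunk the batch and pass a matching list of prompts. A sketch, assuming image values in [0,1] and noting that spatial sizes are generally expected to be multiples of 8 (84 is not, so a resize may be needed):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

images = torch.rand(64, 3, 96, 96)  # stand-in batch; dims kept divisible by 8
prompt = "make it snowy"
edited = []
for chunk in images.split(8):  # sub-batches bound peak GPU memory
    out = pipe(prompt=[prompt] * len(chunk), image=chunk, num_inference_steps=20)
    edited.extend(out.images)
```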
huggingface/transformers.js | 849 | AutoModel.from_pretrained - Which model is loaded | ### Question
I am using AutoModel.from_pretrained("Xenova/yolos-tiny") to load the Yolos model for object detection. Does transformers.js load the model_quantized.onnx by default? Would I be able to load model.onnx?
A related question: Is there a way to check which model is loaded once the model is loaded? | https://github.com/huggingface/transformers.js/issues/849 | open | [
"question"
] | 2024-07-16T22:45:15Z | 2024-08-09T09:45:37Z | null | mram0509 |
huggingface/text-generation-inference | 2,239 | Can I somehow change attention type from 'FlashAttention' in the text-server-launcher? | https://github.com/huggingface/text-generation-inference/issues/2239 | closed | [
"question",
"Stale"
] | 2024-07-16T18:37:45Z | 2024-08-24T01:52:31Z | null | wasifmasood | |
huggingface/diarizers | 13 | How to solve `CUDA error: out of memory while doing inference for my diarization model` | ERROR - An error occurred: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
I'm using a `12GB ... | https://github.com/huggingface/diarizers/issues/13 | open | [] | 2024-07-16T06:23:28Z | 2024-08-18T04:20:16Z | null | Ataullha |
huggingface/datasets | 7,051 | How to set_epoch with interleave_datasets? | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch).
Of course I... | https://github.com/huggingface/datasets/issues/7051 | closed | [] | 2024-07-15T18:24:52Z | 2024-08-05T20:58:04Z | null | jonathanasdf |
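A sketch of the streaming variant, assuming `set_epoch` on the interleaved `IterableDataset` reseeds the children's shuffle buffers (worth checking against the fix that closed this issue); the dataset names are placeholders:

```python
from datasets import interleave_datasets, load_dataset

a = load_dataset("dataset_A", split="train", streaming=True).shuffle(seed=42, buffer_size=10_000)
b = load_dataset("dataset_B", split="train", streaming=True)
mixed = interleave_datasets([a, b], stopping_strategy="all_exhausted")

for epoch in range(3):
    mixed.set_epoch(epoch)  # effective shuffle seed becomes seed + epoch
    for example in mixed:
        ...  # training step
```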
huggingface/accelerate | 2,933 | How to apply model parallel on multi machines? | Currently, I want to do llm inference on multi machines. Due to limited memory, I hope to use all machines to load the model and I'm blocked with this point. I only find that based on device_map, I can do model parallel on single machine with multi cards.
May I have some ideas about how to use Accelerate to realize?... | https://github.com/huggingface/accelerate/issues/2933 | closed | [] | 2024-07-15T14:09:10Z | 2025-03-08T06:48:09Z | null | JerryLu991223 |
huggingface/chat-ui | 1,344 | Ollama chatPromptTemplate and parameters | Hi,
I have tried adding phi3-3.8b as an ollama model, hosted on my own on-prem ollama server.
I have basically copied the prompt template and parameters from microsoft/Phi-3-mini-4k-instruct on Hugging Face, but it does not seem to work; I always get "no output was generated".
sending a generate/chat http reques... | https://github.com/huggingface/chat-ui/issues/1344 | open | [
"support"
] | 2024-07-15T12:38:12Z | 2024-09-18T17:57:30Z | 7 | ran-haim |
huggingface/transformers | 31,963 | How to manually stop the LLM output? | I'm using `TextIteratorStreamer` for streaming output.
Since the LLM may repeat its output indefinitely, I would like to have the LLM stop generating when it receives a cancel request.
Is there any way to accomplish this?
model: glm-4-9b-chat
```python
async def predict(messages, model_id: str, raw_r... | https://github.com/huggingface/transformers/issues/31963 | closed | [] | 2024-07-15T07:09:43Z | 2024-07-16T00:34:41Z | null | invokerbyxv |
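One approach that composes with `TextIteratorStreamer` is a custom `StoppingCriteria` holding a flag that the cancel handler flips. A minimal sketch (the `model`, `tokenizer`, and `inputs` objects are assumed to exist):

```python
from threading import Thread
from transformers import StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer

class CancelCriteria(StoppingCriteria):
    """Stops generation once .cancelled is flipped to True."""
    def __init__(self):
        self.cancelled = False
    def __call__(self, input_ids, scores, **kwargs):
        return self.cancelled

cancel = CancelCriteria()
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)
Thread(target=model.generate, kwargs=dict(
    **inputs,
    streamer=streamer,
    max_new_tokens=1024,
    stopping_criteria=StoppingCriteriaList([cancel]),
)).start()

for chunk in streamer:
    ...  # forward to the client; when a cancel request arrives, set cancel.cancelled = True
```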
huggingface/chat-ui | 1,343 | vllm 400 status code (no body) error | Hello everyone, I use the vLLM OpenAI-compatible API service, but I encountered a 400 status code (no body) error. How can I fix it? Thanks!
vllm:
```
python -m vllm.entrypoints.openai.api_server --model /home/rickychen/桌面/llm/models/Infinirc-Llama3-8B-5G-v1.0 --dtype auto --worker-use-ray --tensor-parallel-size 2 --port 8001... | https://github.com/huggingface/chat-ui/issues/1343 | open | [
"support"
] | 2024-07-14T12:49:59Z | 2024-09-19T12:26:36Z | 3 | rickychen-infinirc |
huggingface/chat-ui | 1,342 | undeclared node version dependency | Using the current chat-ui dockerhub image, I am unable to connect to localhost:3000 to run a simple instance of chat-ui. The webservice returns 'Not Found' for all routes. Included below is my docker-compose file. If I change the chat-ui image to build with node 22, everything works as expected. Does chat-...
"support"
] | 2024-07-13T21:06:53Z | 2024-07-16T14:53:34Z | 2 | slmagus |
huggingface/diffusers | 8,858 | how to know variant='fp16' beforehand | ### Describe the bug
Among diffusion checkpoints, some have fp16 variants and some do not.
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_1',
torch_dtype=torch.float16,
variant='fp16'
)
```
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_2',
torch_dtype=torch.float16,
)
```
How to k... | https://github.com/huggingface/diffusers/issues/8858 | closed | [
"bug",
"stale"
] | 2024-07-13T08:52:13Z | 2025-01-27T01:45:50Z | null | pure-rgb |
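One way to answer this before loading is to list the repo files and look for the `.fp16.` infix that variant weight files carry; a short sketch:

```python
from huggingface_hub import list_repo_files

def has_fp16_variant(repo_id: str) -> bool:
    # variant weights are named like diffusion_pytorch_model.fp16.safetensors
    return any(".fp16." in name for name in list_repo_files(repo_id))

variant = "fp16" if has_fp16_variant("stabilityai/stable-diffusion-xl-base-1.0") else None
```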
huggingface/dataset-viewer | 2,986 | Include code snippets for other libraries? | For example, in https://github.com/huggingface/huggingface.js/pull/797, we add `distilabel`, `fiftyone` and `argilla` to the list of libraries the Hub knows. However, the aim is only to handle the user-defined tags better, not to show code snippets.
In this issue, I propose to discuss if we should expand the list of... | https://github.com/huggingface/dataset-viewer/issues/2986 | open | [
"question",
"P2"
] | 2024-07-12T11:57:43Z | 2024-07-12T14:39:59Z | null | severo |
huggingface/trl | 1,830 | How to use the `predict` function in `DPOTrainer` | I want to get the logp and reward of the data through `predict`, but the prediction seems to include only one example.
What is the correct usage of `predict`?

| https://github.com/huggingface/trl/issues/1830 | closed | [
"❓ question"
] | 2024-07-12T06:30:20Z | 2024-10-07T12:13:22Z | null | AIR-hl |
huggingface/datatrove | 248 | solved: how to launch a slurm executor from an interactive slurm job | I forget where I saw it in the docs/code where it said not to launch a slurm executor from an `srun` interactive session - which is not quite always possible.
There is a simple workaround - unset `SLURM_*` env vars and then launch and it works just fine.
```
unset $(printenv | grep SLURM | sed -E 's/(.*)=.*/\1/'... | https://github.com/huggingface/datatrove/issues/248 | open | [] | 2024-07-12T04:08:02Z | 2024-07-13T01:15:56Z | null | stas00 |
huggingface/diffusers | 8,843 | variable (per frame) IP Adapter weights in video | is there a (planned or existing) way to have variable IP Adapter weights for videos (e.g. with AnimateDiff)?
that means setting different values for different frames, as both scaling and masking currently seem to work with the whole generation at once (be it video or still image). | https://github.com/huggingface/diffusers/issues/8843 | open | [
"stale",
"low-priority",
"consider-for-modular-diffusers"
] | 2024-07-11T16:49:43Z | 2024-12-13T15:05:24Z | 6 | eps696 |
huggingface/transformers.js | 846 | range error: array buffer allocation failed <- how to catch this error? | ### Question
While Transformers.js rocks on desktop, my Pixel with 6 GB of RAM almost always crashes the webpage when trying to run things like Whisper or TTS.
<img width="531" alt="Screenshot 2024-07-11 at 14 27 08" src="https://github.com/xenova/transformers.js/assets/805405/f8862561-7618-4c80-87e2-06c86f262698">
... | https://github.com/huggingface/transformers.js/issues/846 | open | [
"question"
] | 2024-07-11T12:32:46Z | 2024-07-11T12:32:46Z | null | flatsiedatsie |
huggingface/diffusers | 8,834 | Will the training code of SD3 Controlnet be released? | **Is your feature request related to a problem? Please describe.**
Training code of SD3 ControlNet
**Describe the solution you'd like.**
Could you please release the training code of SD3 ControlNet? I tried to train it but failed, so I want to check what the reason is.
| https://github.com/huggingface/diffusers/issues/8834 | closed | [] | 2024-07-11T03:32:55Z | 2024-09-11T01:34:38Z | 3 | ChenhLiwnl |
huggingface/optimum | 1,953 | Export AWQ models to ONNX | ### System Info
```shell
python==3.10
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reprod... | https://github.com/huggingface/optimum/issues/1953 | closed | [
"feature-request",
"onnx"
] | 2024-07-11T02:18:56Z | 2024-07-25T12:42:38Z | 1 | Toan-it-mta |
huggingface/optimum | 1,951 | how can I get an ONNX-format int4 model? | ### System Info
```shell
Could you please tell me how I can obtain an int4 model in ONNX format?
I’ve used the following code to quantize an ONNX model into QUINT8, but when I tried to quantize it into INT4, I found there were no relevant parameters to choose. As far as I know, GPTQ allows selecting n-bit quanti... | https://github.com/huggingface/optimum/issues/1951 | open | [
"bug"
] | 2024-07-10T14:00:19Z | 2024-07-10T14:00:19Z | 0 | zhangyu68 |
huggingface/diffusers | 8,824 | [Solved] How to make custom datasets for instruct-pix2pix? | ### Describe the bug
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/builder.py", line 1750, in _prepare_split_single
[rank0]: for key, record in generator:
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/packaged_modules/folde... | https://github.com/huggingface/diffusers/issues/8824 | closed | [
"bug"
] | 2024-07-10T05:35:38Z | 2024-07-11T02:18:40Z | null | jeonga0303 |
huggingface/optimum | 1,949 | ValueError: Trying to export a florence2 model | Hello,
I am attempting to export and quantize the Florence-2 model for CPU usage but encountered the following error:
```
ValueError: Trying to export a florence2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://hug... | https://github.com/huggingface/optimum/issues/1949 | open | [
"feature-request",
"onnx"
] | 2024-07-10T04:59:06Z | 2024-10-23T10:07:05Z | 1 | ghost |
huggingface/transformers.js | 842 | Trying to run the Modnet example with nodejs on macOS results in Unknown model class "modnet", attempting to construct from base class. Model type for 'modnet' not found, assuming encoder-only architecture. | ### Question
Hello,
How can one run the modnet example?
```
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
const processor = await AutoProcessor.from_pretrained('Xe... | https://github.com/huggingface/transformers.js/issues/842 | closed | [
"question"
] | 2024-07-09T16:19:22Z | 2025-03-27T18:58:03Z | null | gabrielstuff |
huggingface/chat-ui | 1,335 | [v0.9.1] Switch the LLM model mid-conversation? | ## Description
Currently, **chat-ui** does not support changing the language model once a conversation has started. For example, if I begin a chat with _Llama 3_, I cannot switch to _Gemini 1.5_ mid-conversation, even if I change the setting in the UI.
## Steps to Reproduce
* Start a conversation with one lang... | https://github.com/huggingface/chat-ui/issues/1335 | open | [] | 2024-07-09T13:43:16Z | 2024-09-13T16:45:23Z | 3 | adhishthite |
huggingface/transformers.js | 841 | Support opus-mt-mul-en translation in WebGPU | ### Question
I've been having some trouble where translation sometimes wasn't working. For example, I just tried translating Polish into English using `opus-mt-mul-en`. But it outputs empty strings.
So I started looking for what could be wrong, and in the Transformers.js source code I found this `marian.py` file:
... | https://github.com/huggingface/transformers.js/issues/841 | closed | [
"question"
] | 2024-07-09T11:52:12Z | 2024-10-07T15:34:54Z | null | flatsiedatsie |
huggingface/parler-tts | 83 | How big a dataset is needed to train the model? | I used 560+ hours of libritts_R data to train the model (187M) from scratch, but the audio synthesized by the model is not correct.
Is this because the size of the dataset is not enough?
huggingface/datatrove | 242 | how to postpone filter init till it's running | So it appears that currently I can't instantiate a model on a gpu because the filter object is created by the launcher, which either doesn't have a gpu, or it is most likely the wrong gpu even if it has one, since we would need a dedicated gpu(s) for each task.
Is it possible to add a 2nd init which would be the use... | https://github.com/huggingface/datatrove/issues/242 | open | [] | 2024-07-09T01:11:13Z | 2024-07-10T01:36:02Z | null | stas00 |
huggingface/hub-docs | 1,328 | Document how to filter and save searches on the hub (e.g. by model format, only LoRAs, by date range etc...) | **Doc request**
I'd really like to see documentation that clarifies how users can filter searches when browsing models on the Hub.
Things I can't seem to find that I would expect / would make our lives better:
- A selection list or drop down to filter by popular model formats (GGUF, EXL2 etc...)
- A filte... | https://github.com/huggingface/hub-docs/issues/1328 | open | [] | 2024-07-08T22:51:55Z | 2024-07-10T19:17:42Z | null | sammcj |
huggingface/candle | 2,323 | How to do freeze VarMap Vars? | Hello everybody,
Is there a way to freeze all Var tensors in the VarMap, like the snippet below?
I mean something like implementing the `Iterator` trait, detaching the contained tensors from the graph, and adding a Var which can be trained!
```
# Freeze all the pre-trained layers
for param in model.par... | https://github.com/huggingface/candle/issues/2323 | open | [] | 2024-07-08T15:14:54Z | 2024-07-08T15:14:54Z | null | mohamed-180 |
huggingface/trl | 1,815 | How to use DoRA with ORPO | Hi! I'm running experiments where I'm comparing SFT to ORPO.
For SFT I currently initialize a `trl.SFTTrainer`, and pass `args=transformers.TrainingArguments(..., use_dora=True, ...)`.
For ORPO I'm supposed to pass `args=trl.ORPOConfig`, but according to the documentation this doesn't seem to support passing `use... | https://github.com/huggingface/trl/issues/1815 | closed | [] | 2024-07-08T11:12:48Z | 2024-07-08T15:39:42Z | null | julianstastny |
huggingface/text-generation-inference | 2,200 | How to clean the TGI guidance cache? | I use TGI guidance to enforce LLM choose a tool.
However, when I change the description of the tool, I find TGI does not re-compile the new grammar.
Therefore, I want to know how to clear the compiled grammar. | https://github.com/huggingface/text-generation-inference/issues/2200 | closed | [] | 2024-07-08T05:37:55Z | 2024-07-18T15:01:07Z | null | EdisonE3 |
huggingface/transformers.js | 837 | Model downloads or running on server? | ### Question
Hey there,
I am using simple hosting with cPanel view as the admin. If I upload the ONNX model files to the file manager as well as the JS script to run the model, will it still need to download the model or will it not, since the file is uploaded there, along with the script. Provided of course that I d... | https://github.com/huggingface/transformers.js/issues/837 | closed | [
"question"
] | 2024-07-06T23:07:15Z | 2025-01-20T19:50:12Z | null | moses-mbaga |
huggingface/lerobot | 305 | how to eval the policy trained by lerobot in real env? | ### System Info
```Shell
how to eval the policy trained by lerobot in real env?
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In the code, I have not found any solution to transfer policy rollout to ... | https://github.com/huggingface/lerobot/issues/305 | closed | [] | 2024-07-05T03:23:01Z | 2024-07-23T09:08:27Z | null | cong1024 |
huggingface/transformers.js | 836 | How do I free up memory after translation | ### Question
After I executed the translation in the worker, it seems that the memory could not be reclaimed when I called pipeline.dispose(); the memory would be reclaimed only when the worker was closed. Can you help me with this question?
"question"
] | 2024-07-04T15:16:33Z | 2024-07-05T07:19:31Z | null | raodaqi |
huggingface/transformers | 31,790 | How to implement bind_tools for a custom LLM from a huggingface pipeline (Llama-3) for a custom agent |
Example Code
```
name = "meta-llama/Meta-Llama-3-8B-Instruct"
auth_token = ""
tokenizer = AutoTokenizer.from_pretrained(name,use_auth_token=auth_token)
bnb_config = BitsAndBytesConfig(
load_in_8bit=True,
)
model_config = AutoConfig.from_pretrained(
name,
use_auth_token=auth_token,
... | https://github.com/huggingface/transformers/issues/31790 | closed | [] | 2024-07-04T08:59:38Z | 2024-08-13T08:04:24Z | null | talhaty |
huggingface/diffusers | 8,788 | VAE Tiling not supported with SD3 for non power of 2 images? | ### Describe the bug
VAE tiling works for SD3 with power of 2 images, but for no other alignments.
The mentioned issues with VAE tiling are due to: [vae/config.json](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/blob/main/vae/config.json)
Having:
```
"use_post_quant_conv": false,
"... | https://github.com/huggingface/diffusers/issues/8788 | closed | [
"bug"
] | 2024-07-04T03:52:54Z | 2024-07-11T20:41:37Z | 2 | Teriks |
huggingface/diffusers | 8,785 | Adding PAG support for Hunyuan-DiT and Pixart-Sigma | We recently added PAG support for SDXL. Is anyone interested in extending PAG support to Hunyuan-DiT and Pixart-Sigma?
There is no implementation available, so it is a bit of a research-oriented project (= fun!!), and you can get direct feedback from the authors @sunovivid @HyoungwonCho.
to add PAG support to n... | https://github.com/huggingface/diffusers/issues/8785 | closed | [
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-07-03T18:17:32Z | 2024-08-30T11:09:04Z | 4 | yiyixuxu |
huggingface/diffusers | 8,780 | Model and input data type is not same | **Is your feature request related to a problem? Please describe.**
Hi, when I trained the sdv1.5 model in fp16 mode using the `examples/text_to_image/train_text_to_image.py` file, I found there is a mismatch between the unet model and the input data. Specifically, in this [line](https://github.com/huggingface/diffusers/blob/... | https://github.com/huggingface/diffusers/issues/8780 | open | [
"stale"
] | 2024-07-03T06:57:44Z | 2024-09-14T15:07:36Z | 1 | andyjiang1116 |
huggingface/peft | 1,903 | How to use multiple GPUs | ### System Info
peft=0.11.1
python=3.10
### Who can help?
When I run this script, there is no problem with a single GPU. When I try to run 2 GPUs, the system resources show that the utilization rate of each GPU is only half. When I try to increase per_device_train_batch_size and gradient_accumulation_steps, t...
huggingface/text-embeddings-inference | 320 | how to deploy bge-reranker-v2-m3 on Text-embeddings-inference | https://github.com/huggingface/text-embeddings-inference/issues/320 | closed | [] | 2024-07-02T15:18:48Z | 2024-07-08T10:20:05Z | null | kennard520 | |
huggingface/text-embeddings-inference | 318 | How to deploy bge-reranker-v2-m3 for multiple threads? | https://github.com/huggingface/text-embeddings-inference/issues/318 | closed | [] | 2024-07-02T14:56:33Z | 2024-07-08T10:20:01Z | null | kennard520 | |
huggingface/diffusers | 8,771 | Removing LoRAAttnProcessor causes many dependencies to fail | ### Describe the bug
https://github.com/huggingface/diffusers/pull/8623 removed the obsolete `LoRAAttnProcessor`, which in principle is a good thing, but it was done without considering where that feature is currently in use, so it breaks many (and I mean many) community pipelines.
It also breaks some core libraries s... | https://github.com/huggingface/diffusers/issues/8771 | closed | [
"bug"
] | 2024-07-02T13:11:33Z | 2024-07-03T16:37:08Z | 1 | vladmandic |
huggingface/candle | 2,307 | How to get all layers attentions? | I only see that candle returns last_hidden_state, but not all_hidden_states and attentions. I want to get attentions. Can I submit a PR to do this? I originally wanted to define the Model myself, but I found that all its methods are private | https://github.com/huggingface/candle/issues/2307 | open | [] | 2024-07-02T02:16:52Z | 2024-07-02T02:16:52Z | null | kitty-eu-org |
huggingface/diffusers | 8,760 | Clarification Needed on Hardcoded Value in Conditional Statement in LeditPP | Hello @manuelbrack,
I was reviewing the source code and came across a line that seems to have a hardcoded value in a conditional statement. The line in question is:
https://github.com/huggingface/diffusers/blob/0bae6e447cba0459456c4f7e7e87d7db141d3235/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_dif... | https://github.com/huggingface/diffusers/issues/8760 | open | [
"stale"
] | 2024-07-01T20:12:20Z | 2024-12-13T15:05:35Z | 3 | ardofski |
huggingface/diffusers | 8,748 | SD3 cannot finetune a better model (hand and face deformation)? | ### Describe the bug
I want to finetune sd3 to improve its human generation quality with a 3-million-sample high-quality human dataset (which has proven useful on sdxl and other models). But hand and face deformation doesn't improve much after two days of training.
I am using [train](https://github.com/huggingface/di... | https://github.com/huggingface/diffusers/issues/8748 | closed | [
"bug"
] | 2024-07-01T07:21:19Z | 2024-07-17T06:01:31Z | 4 | KaiWU5 |
huggingface/transformers.js | 833 | convert.py has errors when i use yolov9 | ### Question
Your repo
https://huggingface.co/Xenova/gelan-c
is really good and helpful for me,
but I need to use the gelan-t and gelan-s editions because of mobile phone deployment.
When I use convert.py to convert to the ONNX edition, errors happen:
The checkpoint you are trying to load has model type `yolov9` but Tra... | https://github.com/huggingface/transformers.js/issues/833 | open | [
"question"
] | 2024-07-01T03:51:53Z | 2024-07-18T07:04:10Z | null | jifeng632 |
huggingface/transformers | 31,722 | how to generate router_logits in MoE models using model.generate()? | ### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu121 (True)
- Tensor... | https://github.com/huggingface/transformers/issues/31722 | closed | [
"Generation"
] | 2024-07-01T03:48:09Z | 2024-09-13T08:07:40Z | null | Jimmy-Lu |
huggingface/transformers.js | 832 | How to load version 3 from CDN? | ### Question
The [README.md file on v3 branch](https://github.com/xenova/transformers.js/tree/v3?tab=readme-ov-file#installation) has a html snippet to import transformers version 3 from a CDN.
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alp... | https://github.com/huggingface/transformers.js/issues/832 | closed | [
"question"
] | 2024-06-30T23:39:08Z | 2024-10-10T12:23:41Z | null | geoffroy-noel-ddh |
huggingface/transformers | 31,717 | how to remove kv cache? | ### Feature request
When I use the generate() function of a language model for inference, the kv-cache is also stored in the GPU memory. Is there any way to clear this kv-cache before continuing to call generate()?
### Motivation
I have a lot of text to process, so I use a for loop to call generate(). To avoid OOM, ... | https://github.com/huggingface/transformers/issues/31717 | closed | [
"Feature request",
"Generation",
"Cache"
] | 2024-06-30T12:09:48Z | 2024-11-05T01:34:42Z | null | TuuSiwei |
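The cache built inside `generate()` is freed once nothing references the outputs, so in a loop the usual pattern is to drop references and flush CUDA's caching allocator. A minimal sketch (the `texts`, `tokenizer`, `model`, and `results` names are assumed from the surrounding loop):

```python
import gc
import torch

for text in texts:
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256)
    results.append(tokenizer.decode(out[0], skip_special_tokens=True))
    del inputs, out           # drop the tensors keeping the KV cache alive
    gc.collect()
    torch.cuda.empty_cache()  # return cached blocks to the driver
```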
huggingface/accelerate | 2,904 | How to merge QLoRA FSDP weights with an LLM and save the model | https://github.com/huggingface/accelerate/issues/2904 | closed | [] | 2024-06-30T07:00:50Z | 2024-07-01T14:20:53Z | null | Minami-su |
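A plausible post-training route is to load the saved adapter with peft and merge it into the base weights; a sketch, with `adapter_dir` and `merged_model` as hypothetical paths, reloading in bf16 so the merge is not done on 4-bit weights:

```python
import torch
from peft import AutoPeftModelForCausalLM

# load base model + adapter together, in full precision rather than 4-bit
model = AutoPeftModelForCausalLM.from_pretrained("adapter_dir", torch_dtype=torch.bfloat16)
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("merged_model")
```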
huggingface/transformers.js | 830 | Error while using the library in nextjs (app based route) | ### Question
Hello
I was going through the issues section to find a solution for the issue I am facing. I tried some of the solutions provided by xenova, but it seems like I am getting some WASM fallback error, and I have no idea what's happening. I suspect it's webpack, but I wanted clarity.
Th... | https://github.com/huggingface/transformers.js/issues/830 | closed | [
"question"
] | 2024-06-29T15:00:09Z | 2025-02-10T02:00:25Z | null | rr-jino-jose |
huggingface/candle | 2,294 | How to get raw tensor data? | I am trying to implement an adaptive avg pool in candle. However, I guess my implementation will require an API to get the raw data/storage (storaged in plain/flatten array format).
Wondering if there is such an API for that?
Thanks! | https://github.com/huggingface/candle/issues/2294 | open | [] | 2024-06-28T19:19:45Z | 2024-06-28T21:51:57Z | null | WenheLI |
huggingface/diffusers | 8,730 | Implementation of DDIM, why taking Xt and (t-1) as input? | ### Describe the bug
I have tried to run inference on a diffusion model with DDIM with the number of timesteps = 10 and a maximum of 1000 timesteps.
I have printed the t in the for-loop, and the result is 901, 801, 801, 701, 601, 501, 401, 301, 201, 101, 1. It's really weird to me why 801 appears two times, and why we start f... | https://github.com/huggingface/diffusers/issues/8730 | closed | [
"bug"
] | 2024-06-28T18:45:55Z | 2024-07-01T17:24:49Z | 1 | EPIC-Lab-sjtu |
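The spacing pattern comes from the scheduler's `timestep_spacing`/`steps_offset` config rather than the sampling loop itself, and inspecting the schedule directly makes it visible. A short sketch:

```python
from diffusers import DDIMScheduler

sched = DDIMScheduler(num_train_timesteps=1000, timestep_spacing="leading", steps_offset=1)
sched.set_timesteps(10)
print(sched.timesteps)
# "leading" spacing with steps_offset=1 yields 901, 801, ..., 101, 1;
# a duplicated step would likely point at the scheduler config, not DDIM itself
```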
huggingface/safetensors | 490 | How to save model checkpoint from a distributed training from multiple nodes? | Hello,
When I use accelerator and deepspeed Zero3 to train the model in one node with 8 GPUs, the following code smoothly saves the model checkpoint
```
ds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded
if self.accelerator.is_main_process:
save_file(ds_state_dict, f"{ou... | https://github.com/huggingface/safetensors/issues/490 | closed | [
"Stale"
] | 2024-06-28T04:59:45Z | 2024-07-31T11:46:06Z | null | Emerald01 |
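For multi-node runs the consolidation call is collective, so a plausible fix is to have every rank call it and gate only the file write on the global main process. A sketch under that assumption, reusing the private `_zero3_consolidated_16bit_state_dict` helper from the issue body:

```python
from safetensors.torch import save_file

# collective call: every rank on every node must enter it, or the gather deadlocks
ds_state_dict = model._zero3_consolidated_16bit_state_dict()

if accelerator.is_main_process:  # global rank 0 across all nodes
    save_file(ds_state_dict, f"{output_dir}/model.safetensors")
accelerator.wait_for_everyone()
```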
huggingface/diffusers | 8,728 | Using `torchsde.BrownianInterval` instead of `torchsde.BrownianTree` in class `BatchedBrownianTree` | **Is your feature request related to a problem? Please describe.**
When I was doing some optimization for my pipeline, I found that the BrownianTree somehow took a bit more time.
**Describe the solution you'd like.**
I dug further into the torchsde documentation and found that they encourage using `BrownianInterval` to ... | https://github.com/huggingface/diffusers/issues/8728 | closed | [] | 2024-06-28T04:33:55Z | 2024-09-12T08:46:54Z | 5 | dianyo |
huggingface/transformers.js | 826 | Support for GLiNER models? | ### Question
Is there a reason why models from the GLiNER family can't be supported?
I see they use a specialized library; does it take a lot of code to make them work?
"question"
] | 2024-06-28T01:54:37Z | 2024-10-04T07:59:16Z | null | Madd0g |
huggingface/diffusers | 8,721 | how to unload a pipeline | How to unload a pipeline and release the GPU memory? | https://github.com/huggingface/diffusers/issues/8721 | closed | [] | 2024-06-27T10:04:39Z | 2024-07-02T14:40:39Z | null | nono909090 |
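A common pattern is to drop every reference to the pipeline and then flush CUDA's caching allocator; a minimal sketch:

```python
import gc
import torch

pipe.to("cpu")            # optional: move weights off the GPU first
del pipe                  # drop the last Python reference
gc.collect()              # collect any reference cycles still holding tensors
torch.cuda.empty_cache()  # hand cached blocks back to the driver
```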
huggingface/transformers.js | 825 | Are there any examples on how to use paligemma model with transformer.js | ### Question
First of all, thanks for this amazing library!
So my question is: I happened to see this model available for transformers.js:
https://huggingface.co/Xenova/paligemma-3b-mix-224
But unfortunately I can't find any example on how to run the `image-text-to-text` pipeline. Are there are resources you c... | https://github.com/huggingface/transformers.js/issues/825 | open | [
"question"
] | 2024-06-27T09:49:22Z | 2024-06-29T02:39:27Z | null | alextanhongpin |
huggingface/lerobot | 294 | after training using the lerobot framework, how to run the trained policy directly in a real environment (e.g. aloha code)? I have not found a solution yet | ### System Info
```Shell
os ubuntu20.04,
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
not yet
### Expected behavior
How to directly eval the policy trained by lerobot in aloha? | https://github.com/huggingface/lerobot/issues/294 | closed | [
"question",
"policies",
"robots",
"stale"
] | 2024-06-27T03:16:19Z | 2025-10-23T02:29:25Z | null | cong1024 |
huggingface/chat-ui | 1,312 | [v0.9.1] Error: "Cannot resolve directory $env" | ## Issue
For all client-side components, I get this:
```
"Cannot resolve directory $env"
```
<img width="589" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/26fa2eef-dbff-44f6-bb86-7700387abdf2">
<img width="837" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769... | https://github.com/huggingface/chat-ui/issues/1312 | open | [
"support"
] | 2024-06-26T13:24:42Z | 2024-06-26T15:14:48Z | 2 | adhishthite |
huggingface/chat-ui | 1,311 | 400 (no body) trying to reach openai compatible server | Hi everyone,
I have the following setup (containers are on the same device):
- Container 1: Nvidia NIM (openai-compatible) with Llama3 8B Instruct, port 8000;
- Container 2: chat-ui, port 3000.
This is the content of the `.env` file:
```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MODELS=`... | https://github.com/huggingface/chat-ui/issues/1311 | open | [
"support"
] | 2024-06-26T12:34:44Z | 2024-07-22T13:03:18Z | 2 | edesalve |
huggingface/diffusers | 8,710 | Add PAG support to SD1.5 | We recently integrated PAG into diffusers! See this PR [here] (https://github.com/huggingface/diffusers/pull/7944) we added PAG to SDXL
we also want to add PAG support to SD1.5 pipelines! we will need:
- [x] StableDiffusionPAGPipeline (assigned to @shauray8, PR https://github.com/huggingface/diffusers/pull/8725)
... | https://github.com/huggingface/diffusers/issues/8710 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-26T08:23:17Z | 2024-10-09T20:40:59Z | 17 | yiyixuxu |
huggingface/chat-ui | 1,309 | "404 Resource Not Found" when using Azure OpenAI model endpoint | I run `chat-ui` with the `chat-ui-db` docker image. I would like to connect it to my Azure OpenAI API endpoint.
I have set up the `env.local` file as stated in your docs and bind-mounted it into the docker container:
```bash
MODELS=`[{
"id": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview",
"displayName": "gpt... | https://github.com/huggingface/chat-ui/issues/1309 | open | [
"support"
] | 2024-06-26T07:16:54Z | 2024-06-26T18:53:51Z | 2 | gqoew |
huggingface/chat-ui | 1,308 | Warning: To load an ES module in Azure environment | Hi Team,
We are currently facing issues deploying our Chat UI solution in Azure Web App. The error encountered in the console log is as follows:
```
npm http fetch GET 200 https://registry.npmjs.org/npm 141ms
(node:124) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs exte... | https://github.com/huggingface/chat-ui/issues/1308 | open | [
"support"
] | 2024-06-26T06:04:45Z | 2024-06-27T09:07:35Z | 3 | pronitagrawalvera |
huggingface/transformers.js | 823 | How to export q4f16.onnx | ### Question
Thanks for providing such a great project, but I have a problem converting the model.
```
For example:
model_q4f16.onnx
```
What command is used to create and export such a q4/f16.onnx model?
Can you give me more tips or help? Thank you | https://github.com/huggingface/transformers.js/issues/823 | closed | [
"question"
] | 2024-06-26T05:36:47Z | 2024-06-26T07:46:57Z | null | juntaosun |