| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 9,038 | how to use prompt weight in FlaxStableDiffusionPipeline | ### Describe the bug
I can see that StableDiffusionPipeline has prompt_embeds to support prompt weighting, but how do I do that in FlaxStableDiffusionPipeline? There is no prompt_embeds in FlaxStableDiffusionPipeline.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_... | https://github.com/huggingface/diffusers/issues/9038 | closed | [
"bug",
"stale"
] | 2024-08-01T10:44:37Z | 2024-10-14T18:25:55Z | null | ghost |
pytorch/torchchat | 989 | Weird model behaviour on Server/Browser: Looks like it's not using the template | Hi,
I'm trying out the torchchat right now, started the streamlit application with llama3 model

I just texted Hi !!
- Why is this text generation behaviour unusual? Is it a problem with the model being converted to torchchat... | https://github.com/pytorch/torchchat/issues/989 | open | [
"bug",
"actionable",
"Browser"
] | 2024-08-01T05:52:19Z | 2024-08-02T08:05:45Z | 2 | akhilreddy0703 |
pytorch/torchchat | 988 | Could we request support for a smallish (~4-5B param) modern vision LLM? LLava-1.6 or Nanollava? | ### 🚀 The feature, motivation and pitch
Having good basic pytorch support for inferencing LLMs is key to continued success of pytorch. Vision LLM models tend to have uneven support on mainstream inferencing engines like Llama.cpp due to the need to reimplement CLIP/SIGLIP etc. Pytorch could natively support performan... | https://github.com/pytorch/torchchat/issues/988 | open | [
"enhancement"
] | 2024-08-01T03:59:17Z | 2024-08-01T05:50:16Z | 1 | kinchahoy |
huggingface/diffusers | 9,032 | how to get a minimum working example of FlaxStableDiffusionPipeline in Google Colab with a TPU runtime | ### Describe the bug
I tried the code in Google Colab with a TPU runtime:
```
! python3 -m pip install -U diffusers[flax]
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMi... | https://github.com/huggingface/diffusers/issues/9032 | open | [
"bug",
"stale"
] | 2024-08-01T03:58:34Z | 2024-11-04T15:04:13Z | null | ghost |
huggingface/diffusers | 9,031 | how to disable safety_checker in FlaxStableDiffusionPipeline | ### Describe the bug
```
! python3 -m pip install -U tensorflow-cpu
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMix', safe_serialization=False)
pipeline, params = ... | https://github.com/huggingface/diffusers/issues/9031 | open | [
"bug",
"stale"
] | 2024-08-01T03:48:27Z | 2024-10-13T15:03:54Z | null | ghost |
huggingface/llm.nvim | 106 | How to use the OpenAI API? | I read the code, and it seems to support the real OpenAI API. But when I set it up, something goes wrong.
I just want to make sure: does this support the OpenAI API? I mean the real OpenAI API. | https://github.com/huggingface/llm.nvim/issues/106 | closed | [] | 2024-07-31T23:51:42Z | 2024-10-18T13:49:11Z | null | 4t8dd |
huggingface/diffusers | 9,025 | how to use FlaxStableDiffusionPipeline with from_single_file in kaggle tpu vm | ### Describe the bug
I have a single safetensors file and it works with diffusers.StableDiffusionPipeline.from_single_file.
Now I want to use FlaxStableDiffusionPipeline, but there is no .from_single_file member function in FlaxStableDiffusionPipeline.
I need to
```
pipeline = diffusers.StableDiffusionPipeline.from_single_... | https://github.com/huggingface/diffusers/issues/9025 | closed | [
"bug"
] | 2024-07-31T10:44:48Z | 2024-08-01T03:59:51Z | null | ghost |
pytorch/TensorRT | 3,049 | Is JetPack 6.0 for Jetson AGX Orin supported? | I tried installing torch_tensorrt using the JetPack 5.0 WORKSPACE script, but it did not work on my system, which is currently running JetPack 6.0 on the Jetson AGX Orin. | https://github.com/pytorch/TensorRT/issues/3049 | open | [
"question"
] | 2024-07-31T03:06:24Z | 2024-09-12T21:11:40Z | null | dhruvmsheth |
pytorch/xla | 7,774 | ddp documentation issues | ## 📚 Documentation
Our [documentation](https://pytorch.org/xla/release/2.3/index.html#how-to-use-distributeddataparallel) suggests users must set the following parameters while setting up DDP. This information is outdated. Please remove any such documentation.
```
os.environ['MASTER_ADDR'] = 'localhost'
os.e... | https://github.com/pytorch/xla/issues/7774 | closed | [
"usability",
"documentation"
] | 2024-07-30T18:53:45Z | 2024-10-30T16:46:30Z | 1 | miladm |
huggingface/transformers.js | 873 | Absolute speaker diarization? | ### Question
I've just managed to integrate the new speaker diarization feature into my project. Very cool stuff. My goal is to let people record meetings, summarize them, and then also list per-speaker tasks. This seems to be a popular feature.
One thing I'm running into is that I don't feed Whisper a single lon... | https://github.com/huggingface/transformers.js/issues/873 | closed | [
"question"
] | 2024-07-30T15:09:23Z | 2024-08-12T12:12:07Z | null | flatsiedatsie |
pytorch/torchchat | 969 | Running `torchchat export` with just the model name does not error out | ### 🐛 Describe the bug
Running `python torchchat.py export stories15M` does not error out, nor does it generate any export files, though it should have.
```shell
% python torchchat.py export stories15M; echo $?
lm_eval is not installed, GPTQ may not be usable
Using device=mps
Warning! Device MPS not supported for expor... | https://github.com/pytorch/torchchat/issues/969 | closed | [
"bug",
"actionable"
] | 2024-07-30T13:56:14Z | 2024-11-26T19:43:00Z | 2 | malfet |
pytorch/executorch | 4,461 | How to dispatch SDPA to XNNPACK? | ### 🐛 Describe the bug
I’m currently working on dispatching the SDPA operations to XNNPACK. To accomplish this, I’ve added `torch.nn.functional.scaled_dot_product_attention` to the `SUPPORTED_DYN_QUANT_LINEAR_MODULES` in the `backends/xnnpack/partition/configs.py` file, as shown in the code block below.
```pytho... | https://github.com/pytorch/executorch/issues/4461 | closed | [] | 2024-07-30T06:32:29Z | 2024-08-02T01:44:09Z | null | DzAvril |
huggingface/transformers.js | 872 | Please provide extensive examples of how to use langchain... | Here's an example script I'm using, which I believes leverages the ```recursivecharactertextsplitter``` from Langchain. I'd love to replicate my vector db program to the extent I'm able using javascript within a browser but need more examples/help...
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset... | https://github.com/huggingface/transformers.js/issues/872 | closed | [] | 2024-07-30T02:39:43Z | 2024-08-26T00:47:12Z | null | BBC-Esq |
pytorch/xla | 7,766 | Does PyTorch/XLA nightly provide GPU support? | ## ❓ Questions and Help
In README.md, there are nightly install instructions for TPU:
```
pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
pip install 'torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp310-cp310-linux_x86_64.whl' -f ... | https://github.com/pytorch/xla/issues/7766 | closed | [
"xla:gpu",
"documentation"
] | 2024-07-29T22:29:51Z | 2024-12-19T22:18:22Z | 5 | titaiwangms |
huggingface/diffusers | 9,009 | UNET slower by a factor of batch_size | ### Describe the bug
I was expecting to get faster inference by batching images together. I realized that when I batch 6 images together, the UNET is 5 times slower for pipeline_controlnet_img2img.py...
Is this possible or normal? Am I missing anything? Thanks for your help.
### Reproduction
Image dim 1024.... | https://github.com/huggingface/diffusers/issues/9009 | closed | [
"bug"
] | 2024-07-29T21:01:25Z | 2024-07-30T07:37:51Z | 2 | christopher5106 |
pytorch/ao | 550 | [Question] How to effectively use the `intmm.py` and `intmm_triton.py` | Hello AO Team! Thanks for this amazing package. I am extremely interested in using the `Integer MatMul Kernels` on `A100` GPUs.
I wrote a simple matmul operation to measure its effectiveness.
```python
import os
import torch
from torchao.kernel.intmm import int_matmul
from tqdm import tqdm
# print... | https://github.com/pytorch/ao/issues/550 | open | [] | 2024-07-29T16:25:03Z | 2024-07-30T19:59:03Z | null | balaabhijit |
huggingface/transformers.js | 869 | PLEASE provide examples of how to use for vector/embeddings using non-"pipeline" syntax. | I'm accustomed (and most people use) non-"pipeline" syntax with ```transformers``` - e.g. ```AutoModelFromCausalLM``` and ```from_pretained``` and so on?
Also, is there a way to use the ```sentence-transformers``` library with ```transformers.js``` in a similar fashion. You'll notice at [this link](https://huggingf... | https://github.com/huggingface/transformers.js/issues/869 | closed | [] | 2024-07-29T11:55:51Z | 2024-07-30T02:37:40Z | null | BBC-Esq |
huggingface/chat-ui | 1,377 | Use refresh tokens for OAuth | Currently we use long-lived sessions that get extended when the user performs an action. In order to better manage sessions, we could switch to an OAuth flow where we have a short lived session with an access token cookie and a refresh token that we can use to refresh the sessions, since HuggingFace now supports refres... | https://github.com/huggingface/chat-ui/issues/1377 | open | [
"enhancement",
"back"
] | 2024-07-29T10:55:11Z | 2024-09-13T20:08:45Z | 4 | nsarrazin |
huggingface/datasets | 7,080 | Generating train split takes a long time | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebD... | https://github.com/huggingface/datasets/issues/7080 | open | [] | 2024-07-29T01:42:43Z | 2024-10-02T15:31:22Z | 2 | alexanderswerdlow |
huggingface/chat-ui | 1,375 | Chat-UI is not following the prompt - producing completely unrelated text? Hacked? | The Oobabooga text-generation-webui engine is used for inference (prompts input directly into the Oobabooga UI produce normal results, but chat-ui is doing something weird, as shown below); MongoDB setup
_**Prompt:**_ bake a cake
_**Assistant:**_
```
I'm trying to install Ubuntu on my laptop, but it's not detecting the lang... | https://github.com/huggingface/chat-ui/issues/1375 | open | [
"support"
] | 2024-07-28T00:49:56Z | 2025-01-30T18:45:59Z | 10 | cody151 |
huggingface/chat-ui | 1,374 | Help with .env.local for AWS as an endpoint for llama3 on huggingface cloud | there seems to be no configuration for .env.local that I can get to work to connect to a Llama3 inference endpoint hosted by HuggingFace cloud (and I can find no examples).
```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=hf_*******
MODELS=`[
{
"name": "AWS meta-llama-3-8b-pdf",
"chatPromptTem... | https://github.com/huggingface/chat-ui/issues/1374 | open | [
"support"
] | 2024-07-27T23:27:11Z | 2024-07-30T05:28:48Z | 1 | thams |
huggingface/transformers.js | 866 | compat with transformers >= 4.40 and tokenizers >= 0.19 | ### Question
This is probably a known issue, as I'm aware that this project lags a bit behind the fast changes being made in the python transformers library, but I wanted to document a specific compatibility issue I hit:
Tokenizers 0.19 introduced some breaking changes which result in different outputs for (at le... | https://github.com/huggingface/transformers.js/issues/866 | open | [
"question"
] | 2024-07-27T18:56:22Z | 2024-08-30T08:34:01Z | null | joprice |
huggingface/chat-ui | 1,371 | Oobabooga server and Chat-ui producing random gibberish with OpenAI API? | Oobabooga text-generation-webui is being used as the inference engine with the OpenAI API endpoint. Please see below.
```
**_PROMPT START_**
thorium oxide for a catalyst bed
**_PROMPT END_**
**_RESPONSE START_**
I am writing a story set in the world of Harry Potter. The main character is a Muggle-born wit... | https://github.com/huggingface/chat-ui/issues/1371 | open | [] | 2024-07-27T12:38:06Z | 2024-07-27T15:10:00Z | 2 | cody151 |
huggingface/chat-ui | 1,368 | No way to "Continue Generating" | Once the text generation finishes, there actually appears to be no way to continue generating, the submit button is greyed out and clicking it just errors out. I am using OpenAI endpoint in Koboldcpp using local Llama 3.1. | https://github.com/huggingface/chat-ui/issues/1368 | open | [
"question"
] | 2024-07-26T18:35:05Z | 2024-11-27T03:48:09Z | null | cody151 |
huggingface/huggingface-llama-recipes | 23 | How to run LLama8b/70b using FP8 | Are the instructions available to converting to FP8?
I'd like to try converting both the 8B and 70B to FP8 and compare.
Thank you! | https://github.com/huggingface/huggingface-llama-recipes/issues/23 | open | [] | 2024-07-26T15:54:29Z | 2024-10-01T06:03:49Z | null | vgoklani |
huggingface/chat-ui | 1,367 | iframe throws 403 error when sending a message | ## Issue
**Use case:** I would like to embed the Chat UI in an iframe in Qualtrics.
**Issue:** Sending a message from the Chat UI in an iframe results in a 403 error with the message below.
> You don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead.
... | https://github.com/huggingface/chat-ui/issues/1367 | open | [
"support"
] | 2024-07-26T13:10:36Z | 2024-08-13T17:22:36Z | 6 | rodrigobdz |
huggingface/chat-ui | 1,366 | Koboldcpp Endpoint support | When trying to use koboldcpp as the endpoint it throws an error
```
[
{
"code": "invalid_union_discriminator",
"options": [
"anthropic",
"anthropic-vertex",
"aws",
"openai",
"tgi",
"llamacpp",
"ollama",
"vertex",
"genai",
"cloudfl... | https://github.com/huggingface/chat-ui/issues/1366 | closed | [
"question",
"models"
] | 2024-07-26T12:13:24Z | 2024-07-26T13:57:13Z | null | cody151 |
huggingface/datasets | 7,070 | how does set_transform affect batch size? | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_feat... | https://github.com/huggingface/datasets/issues/7070 | open | [] | 2024-07-25T15:19:34Z | 2024-07-25T15:19:34Z | 0 | VafaKnm |
huggingface/chat-ui | 1,361 | Unhandled error event upon start with Koboldcpp | I have mongodb set up as well as koboldcpp running Llama 3.1 8b on windows for inference but chat-ui will not start
```
yas@zen:~/chat-ui$ npm run dev -- --open
> chat-ui@0.9.1 dev
> vite dev --open
VITE v4.5.3 ready in 2735 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expos... | https://github.com/huggingface/chat-ui/issues/1361 | closed | [
"support"
] | 2024-07-25T14:32:44Z | 2024-07-26T12:11:50Z | 1 | cody151 |
huggingface/lighteval | 238 | What is `qem` for gsm8k evaluation? | As titled.
Thank you! | https://github.com/huggingface/lighteval/issues/238 | closed | [] | 2024-07-25T14:30:44Z | 2024-09-15T02:19:57Z | null | shizhediao |
huggingface/optimum | 1,972 | Whisper-large-v3 transcript is trimmed | ### System Info
```shell
optimum 1.21.2
Ubuntu 22.04.4 LTS
CUDA 12.3
cuda-toolkit 11.7
onnxruntime 1.18.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/S... | https://github.com/huggingface/optimum/issues/1972 | open | [
"bug"
] | 2024-07-25T12:04:18Z | 2024-07-31T08:05:02Z | 4 | yv0vaa |
huggingface/lerobot | 341 | question: expected performance of vq-bet? | Hi,
Thank you to the LeRobot community for maintaining such a fantastic codebase. My research group and I have greatly benefited from your efforts. In my current project, I am using the repository primarily for analyzing algorithms across different environments. I wanted to raise an issue I am encountering with VQ-B... | https://github.com/huggingface/lerobot/issues/341 | closed | [
"question",
"policies",
"stale"
] | 2024-07-25T04:35:06Z | 2025-10-07T02:27:24Z | null | Jubayer-Hamid |
huggingface/text-generation-inference | 2,302 | how to use the model's checkpoint in a local folder? | ### System Info
ghcr.io/huggingface/text-generation-inference 2.0.4
platform windows10
Docker version 27.0.3
llm model:lllyasviel/omost-llama-3-8b-4bits
cuda 12.3
gpu nvidia rtx A6000
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modificatio... | https://github.com/huggingface/text-generation-inference/issues/2302 | open | [
"Stale"
] | 2024-07-25T04:26:44Z | 2024-08-25T01:57:54Z | null | zk19971101 |
huggingface/diffusers | 8,957 | StableDiffusionSafetyChecker ignores `attn_implementation` load kwarg | ### Describe the bug
`transformers` added `sdpa` and FA2 for CLIP model in https://github.com/huggingface/transformers/pull/31940. It now initializes the vision model like https://github.com/huggingface/transformers/blob/85a1269e19af022e04bc2aad82572cd5a9e8cdd9/src/transformers/models/clip/modeling_clip.py#L1143.
... | https://github.com/huggingface/diffusers/issues/8957 | closed | [
"bug",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-24T19:38:23Z | 2024-11-19T21:06:53Z | 8 | jambayk |
huggingface/transformers.js | 862 | how to retain spiece token markers | ### Question
When evaluating a model that uses sentencepiece with transformers.js, I do not get the `▁` marker included in the output as I do when running from Python. I'm using the qanastek/pos-french-camembert model to do POS tagging and have situations where a single word such as a verb with a tense suffix is...
"question"
] | 2024-07-24T16:01:44Z | 2024-07-24T17:14:58Z | null | joprice |
huggingface/transformers | 32,186 | callback to implement how the predictions should be stored | https://github.com/huggingface/transformers/issues/32186 | closed | [] | 2024-07-24T11:36:26Z | 2024-07-24T11:39:13Z | null | Imran-imtiaz48 | |
huggingface/optimum | 1,969 | Latest Optimum library is not compatible with latest Transformers | ### System Info
```shell
Any system that can install those libraries
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (... | https://github.com/huggingface/optimum/issues/1969 | closed | [
"bug"
] | 2024-07-24T06:49:07Z | 2024-08-20T09:06:19Z | 1 | lanking520 |
huggingface/diffusers | 8,953 | Why is loading lora weights so slow? | I used diffusers to load lora weights but it takes much too long to finish.
diffusers version: 0.29.2
I tested another version, diffusers 0.23.0 without peft installed, and the time is decent.
```
t1 = time.time()
pipe.load_lora_weights("/data/**/lora_weights/lcm-lora-sdxl/", weight_name="pytorch_lora_weights.safeten... | https://github.com/huggingface/diffusers/issues/8953 | closed | [
"peft"
] | 2024-07-24T06:16:42Z | 2024-10-15T15:23:34Z | 18 | zengjie617789 |
pytorch/audio | 3,816 | Division by zero in loudness calculation | ### 🐛 Describe the bug
The following line in the functional method `loudness` results in a `nan` value when the entire waveform is below the hardcoded loudness threshold value `gamma_abs = -70`.
https://github.com/pytorch/audio/blob/69b2a0adc2ec03ab99990d7e8be3d4510438c148/src/torchaudio/functional/functional.py#L16... | https://github.com/pytorch/audio/issues/3816 | open | [] | 2024-07-24T05:55:53Z | 2024-07-29T06:32:17Z | 0 | DanTremonti |
pytorch/audio | 3,815 | Division by zero in loudness calculation | The following line in the functional method `loudness` results in a `nan` value when the entire waveform is below the hardcoded loudness threshold value `gamma_abs = -70`.
https://github.com/pytorch/audio/blob/69b2a0adc2ec03ab99990d7e8be3d4510438c148/src/torchaudio/functional/functional.py#L1627-L1631
An example case... | https://github.com/pytorch/audio/issues/3815 | closed | [] | 2024-07-24T05:52:03Z | 2024-07-24T05:53:28Z | 0 | dhanvanth-pk-13760 |
huggingface/accelerate | 2,956 | How to run a Vision Model (like llava) based on pippy? | Currently I am trying to apply model parallelism based on pippy, referring to the given example:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import PartialState, prepare_pippy
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-chat-hf", low_... | https://github.com/huggingface/accelerate/issues/2956 | closed | [] | 2024-07-24T03:13:21Z | 2024-09-13T15:06:32Z | null | JerryLu991223 |
pytorch/torchtitan | 479 | regarding torch.compile support | in coming soon, there is an item called `torch.compile support`. I'm wondering if we simply call torch.compile once to wrap the entire model, will that be enough? What's the reason we want to do something more fine-grained and customized?
| https://github.com/pytorch/torchtitan/issues/479 | closed | [
"question"
] | 2024-07-24T01:10:20Z | 2024-07-26T23:50:24Z | null | jason718 |
pytorch/torchtitan | 478 | what's the est timeline for releasing Context Parallel and 3D Pipeline | Many interesting topics are mentioned in coming soon section, I'm wondering do we have a estimated/targeted releasing date? Thanks again for the great work. | https://github.com/pytorch/torchtitan/issues/478 | closed | [
"question"
] | 2024-07-24T01:09:02Z | 2024-07-26T23:51:06Z | null | jason718 |
huggingface/transformers.js | 859 | JavaScript code completion model | ### Question
Currently we have two Python code completion models:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/examples/code-completion/src/App.jsx#L9-L13
And since we are doing JavaScript here, I would like a model optimized for JavaScript. Does anyone have a JavaScript... | https://github.com/huggingface/transformers.js/issues/859 | open | [
"question"
] | 2024-07-23T13:51:58Z | 2024-07-23T13:51:58Z | null | kungfooman |
huggingface/dataset-viewer | 2,994 | Compute leaks between splits? | See https://huggingface.co/blog/lbourdois/lle
Also: should we find the duplicate rows? | https://github.com/huggingface/dataset-viewer/issues/2994 | open | [
"question",
"feature request",
"P2"
] | 2024-07-23T13:00:39Z | 2025-06-24T11:39:37Z | null | severo |
huggingface/datasets | 7,066 | One subset per file in repo ? | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jso... | https://github.com/huggingface/datasets/issues/7066 | open | [] | 2024-07-23T12:43:59Z | 2025-06-26T08:24:50Z | 1 | lhoestq |
pytorch/examples | 1,278 | Larger image size for DCGAN code with Celeba dataset | I want to test the DCGAN example with a larger image size. The [default](https://github.com/pytorch/tutorials/blob/main/beginner_source/dcgan_faces_tutorial.py#L188) image size is 64x64, and in this [topic](https://github.com/pytorch/examples/issues/70), there are some proposals to modify the code to support larger images s... | https://github.com/pytorch/examples/issues/1278 | closed | [] | 2024-07-23T11:40:57Z | 2024-07-24T08:10:03Z | 0 | mahmoodn |
huggingface/transformers | 32,145 | callback to implement how the predictions should be stored. | I am exploring distributed inference capabilities with the Hugging Face Trainer for transformers. I need to do distributed inference across multiple devices or nodes and save the predictions to a file. However, after reviewing the available callbacks, I did not find any that facilitate this specific task. Furthermore, ... | https://github.com/huggingface/transformers/issues/32145 | open | [
"Feature request"
] | 2024-07-22T21:32:22Z | 2024-07-24T09:23:07Z | null | sachinya00 |
huggingface/diffusers | 8,930 | StableDiffusionXLControlNetImg2ImgPipeline often fails to respect "pose" control images | ### Describe the bug
Hello,
Using [StableDiffusionXLControlNetImg2ImgPipeline](https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetImg2ImgPipeline), and passing a "pose" control image often fails to produce an output image that maintains the pose.
I couldn'... | https://github.com/huggingface/diffusers/issues/8930 | open | [
"bug",
"stale"
] | 2024-07-22T13:48:48Z | 2024-09-21T07:48:04Z | 14 | Clement-Lelievre |
pytorch/pytorch | 131,313 | How to create a custom op which can be compiled by dynamo inductor? | ### 📚 The doc issue
https://pytorch.org/tutorials/advanced/cpp_extension.html
### Suggest a potential alternative/fix
A descriptive explanation and a simple example are required.
cc @svekars @brycebortree @ezyang @anijain2305 @chauhang @penguinwu | https://github.com/pytorch/pytorch/issues/131313 | closed | [
"module: docs",
"triaged",
"module: custom-operators",
"oncall: pt2"
] | 2024-07-22T08:09:10Z | 2024-07-23T14:17:40Z | null | MoFHeka |
huggingface/diffusers | 8,924 | Adding Differential Diffusion to Kolors, Auraflow, HunyuanDiT | Diffusers recently added support for the following models:
- [x] [Kolors](https://github.com/huggingface/diffusers/pull/8812) (@tuanh123789)
- [x] [AuraFlow](https://github.com/huggingface/diffusers/pull/8796)
- [x] [HunyuanDiT](https://github.com/huggingface/diffusers/pull/8240) (@MnCSSJ4x)
A few weeks ago, we a... | https://github.com/huggingface/diffusers/issues/8924 | closed | [
"good first issue",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-22T07:17:58Z | 2024-10-31T19:18:32Z | 28 | a-r-r-o-w |
huggingface/candle | 2,349 | What is the equivalent of interpolate from torch.nn | Hi,
I need some help with translating things written in Python:
e.g. I have a statement like this:
```
import torch.nn.functional as F
result[mask] = result[mask] + F.interpolate(cur_result.permute(3,0,1,2).unsqueeze(0).contiguous(), (H, W, D), mode='trilinear', align_corners=False).squeeze(0).permute(1,2,3,0).co... | https://github.com/huggingface/candle/issues/2349 | open | [] | 2024-07-21T22:14:33Z | 2024-07-21T22:14:33Z | null | wiktorkujawa |
huggingface/candle | 2,347 | how to specify generator for randn function | pytorch
```python
noise = torch.randn(x_start.size(), dtype=x_start.dtype, layout=x_start.layout, generator=torch.manual_seed(seed)).to(x_start.device)
```
How do I specify the seed in candle? | https://github.com/huggingface/candle/issues/2347 | closed | [] | 2024-07-21T10:30:35Z | 2024-07-21T12:33:23Z | null | jk2K |
huggingface/chat-ui | 1,354 | How do I use chat ui with RAG (retrieval-augmented generation)? | I applied the RAG technique to the "HuggingFaceH4/zephyr-7b-beta" model and used MongoDB Atlas as a knowledge base, but I didn't find anything about how to configure the chat ui to pass the top-k documents to the model so that it can use context to answer questions. | https://github.com/huggingface/chat-ui/issues/1354 | open | [] | 2024-07-21T01:19:37Z | 2024-08-22T11:25:50Z | 1 | pedro21900 |
huggingface/chat-ui | 1,353 | Llama-3-70b - Together.ai failure | 
This config used to work on the older hugging chat 0.8.2
All my other models (OpenAI, Anthropic) work fine; it's just the Llama-3-70b from Together that fails.
```
{
"name" : "meta-llama/Meta-Llama-3-70B-Instruct-... | https://github.com/huggingface/chat-ui/issues/1353 | open | [
"support",
"models"
] | 2024-07-20T19:30:16Z | 2024-07-25T13:45:54Z | 4 | gururise |
pytorch/examples | 1,277 | word_language_model, is it a Transformer, Encoder-only or Decoder only? | ## 📚 Documentation
<!-- A clear and concise description of what content in any of the README.md files is an issue -->
The document says word_language_model uses RNN/Transformer but I am having trouble understanding exactly what it is.
Looking at the input/target sequences, it seems like it is a generative model... | https://github.com/pytorch/examples/issues/1277 | closed | [] | 2024-07-20T05:14:09Z | 2024-07-20T05:40:53Z | 1 | efg001 |
pytorch/TensorRT | 3,024 | ❓ [Question] How to deal with this error: AssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8 | ## ❓ Question
When I run TensorRT/examples/dynamo/vgg16_fp8_ptq.py, I get:
AssertionError: cuda_ext_fp8 could not be imported. E4M3 quantization requires CUDA and cuda_ext_fp8
## What you have already tried
I switched the CUDA version among 11.8/12.1/12.2; it doesn't work.
## Environment
> Build information about... | https://github.com/pytorch/TensorRT/issues/3024 | closed | [
"question"
] | 2024-07-20T01:44:13Z | 2024-08-07T17:06:50Z | null | zk1009 |
pytorch/tutorials | 2,978 | 💡 [REQUEST] - Tutorial on deep survival analysis using PyTorch & TorchSurv | ### 🚀 Describe the improvement or the new tutorial
[`TorchSurv`](https://github.com/Novartis/torchsurv) is a Python package that serves as a companion tool to perform deep survival modeling within the `PyTorch` environment. Unlike existing libraries that impose specific parametric forms on users, `TorchSurv` enable... | https://github.com/pytorch/tutorials/issues/2978 | closed | [] | 2024-07-19T17:53:34Z | 2024-10-30T18:09:44Z | 3 | tcoroller |
pytorch/xla | 7,714 | How to test on a subset of TPUs in a TPU Pod | ## ❓ Questions and Help
We have some quota for TPU pods (TPU v3-8N, N>1) but not for single-node machines (TPU v3-8). As everyone knows, single-node machines are really useful for debugging. However, under the default settings, simply launching the XLA code on a single node within a pod won't work -- it will wait fo... | https://github.com/pytorch/xla/issues/7714 | closed | [] | 2024-07-19T16:29:43Z | 2024-07-31T09:29:39Z | null | Jiayi-Pan |
huggingface/diffusers | 8,907 | [Tests] Improve transformers model test suite coverage | Currently, we have different variants of transformers: https://github.com/huggingface/diffusers/tree/main/src/diffusers/models/transformers/. However, we don't have test suites for each of them: https://github.com/huggingface/diffusers/tree/main/tests/models/transformers/.
We are seeking contributions from the comm... | https://github.com/huggingface/diffusers/issues/8907 | closed | [
"Good second issue",
"contributions-welcome"
] | 2024-07-19T10:14:34Z | 2024-08-19T03:00:12Z | 6 | sayakpaul |
huggingface/diffusers | 8,906 | there is no qk_norm in SD3Transformer2DModel. Is that right? | ### Describe the bug
there is no qk_norm in SD3Transformer2DModel. Is that right?
self.attn = Attention(
query_dim=dim,
cross_attention_dim=None,
added_kv_proj_dim=dim,
dim_head=attention_head_dim // num_attention_heads,
heads=num_attention_he... | https://github.com/huggingface/diffusers/issues/8906 | closed | [
"bug"
] | 2024-07-19T09:18:05Z | 2024-10-31T19:19:24Z | 3 | heart-du |
huggingface/lerobot | 334 | where to set the initial joint (position + angle) information when controlling real aloha robot? | ### System Info
```Shell
ubuntu 20
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Hi guys, I am using PR #316 written by Cadene to control the real aloha robot. When running the command: python control_robot.py teleop... | https://github.com/huggingface/lerobot/issues/334 | closed | [
"question",
"stale"
] | 2024-07-19T08:53:39Z | 2025-10-23T02:29:22Z | null | cong1024 |
huggingface/distil-whisper | 145 | How to load a fine-tuned model for inference? | @sanchit-gandhi
I used the script from https://github.com/huggingface/distil-whisper/tree/main/training/flax/finetuning_scripts to fine-tune a model and obtained a model named flax_model.msgpack. How can I load this model for inference? Additionally, why did the size of the fine-tuned model increase? | https://github.com/huggingface/distil-whisper/issues/145 | open | [] | 2024-07-19T02:21:10Z | 2024-10-21T17:13:45Z | null | xinliu9451 |
huggingface/diffusers | 8,900 | How to load sd_xl_refiner_1.0.safetensors using from_single_file | ### Describe the bug
```
Traceback (most recent call last):
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 300, in <module>
A1111(local_dir, 'sd_xl_base_1.0.safetensors', steps=50, cfs_scale=8)
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 235, in A1111
... | https://github.com/huggingface/diffusers/issues/8900 | closed | [
"bug"
] | 2024-07-19T01:58:05Z | 2024-07-26T10:39:07Z | null | 631068264 |
huggingface/transformers.js | 854 | How do you delete a downloaded model? | ### Question
How do you delete a model that was downloaded to IndexedDB?
Thanks,
Ash | https://github.com/huggingface/transformers.js/issues/854 | closed | [
"question"
] | 2024-07-18T22:10:51Z | 2024-07-19T16:23:21Z | null | AshD |
pytorch/TensorRT | 3,018 | ❓ [Question] How do you save a unet model compiled with Torch-TensorRT (Stable Diffusion XL)? | ## ❓ Question
How do you save a unet model compiled with Torch-TensorRT from Stable Diffusion XL?
## What you have already tried
I've tried following the compilation instructions from the tutorial ([link](https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_compile_stable_diffusion.html)). It wasn... | https://github.com/pytorch/TensorRT/issues/3018 | open | [
"question"
] | 2024-07-18T18:15:06Z | 2024-09-03T06:52:33Z | null | dru10 |
pytorch/vision | 8,536 | ColorJitter results with OverflowError | ### 🐛 Describe the bug
Using `ColorJitter` augmentations in torchvision 0.18.1 results in an `OverflowError`. This was not observed in older `torchvision` versions (tested with 0.15.0).
How to reproduce:
```python
# read an image
from PIL import Image
import requests
from io import BytesIO
# I picked this i... | https://github.com/pytorch/vision/issues/8536 | closed | [] | 2024-07-18T14:00:33Z | 2024-07-28T07:06:21Z | 7 | shaibagon |
huggingface/candle | 2,341 | how to use a system prompt with the llama example? | Hi, I'm trying to pass a chat dialog in the [LLama3 format](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L222) to the [llama example](https://github.com/huggingface/candle/tree/main/candle-examples/examples/llama) via -prompt; the string is as follows:
```
<|begin_of_text|><|start_header_id|>sy... | https://github.com/huggingface/candle/issues/2341 | open | [] | 2024-07-18T10:44:54Z | 2024-07-18T14:35:09Z | null | evilsocket |
huggingface/text-generation-inference | 2,246 | can't start server with small --max-total-tokens, but works fine with a big setting | When I try to run CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --port 6634 --model-id /models/ --max-concurrent-requests 128 --max-input-length 64 --max-total-tokens 128 --max-batch-prefill-tokens 128 --cuda-memory-fraction 0.95, it says:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.0... | https://github.com/huggingface/text-generation-inference/issues/2246 | closed | [
"question",
"Stale"
] | 2024-07-18T07:03:31Z | 2024-08-24T01:52:30Z | null | rooooc |
pytorch/serve | 3,253 | GPU memory not released after inference | I built the .mar file using torch-model-archiver and wrote a custom handler that processes batched inputs. To be more specific,
I'm doing the following steps:
1. sending one single request with N images as a list of base64 strings
2. converting these images into tensors in my handler's preprocess
3. creating a batch from the... | https://github.com/pytorch/serve/issues/3253 | closed | [] | 2024-07-17T09:10:59Z | 2024-07-19T14:39:02Z | 1 | Di-Gu |
huggingface/diffusers | 8,881 | How to Generate Multiple Image Inference in Instruct Pix2Pix | Hello, I am currently working on how to utilize Instruct Pix2Pix for augmentation.
For this purpose, I want to generate images by putting a Tensor of shape [64,3,84,84] (batch, channel, width, height) into the Instruct Pix2Pix pipeline, but the Instruct Pix2Pix provided by diffusers can only edit one image at a time.
Is ... | https://github.com/huggingface/diffusers/issues/8881 | closed | [] | 2024-07-17T07:47:09Z | 2024-09-02T00:45:15Z | null | E-SJ |
huggingface/transformers.js | 849 | AutoModel.from_pretrained - Which model is loaded | ### Question
I am using AutoModel.from_pretrained("Xenova/yolos-tiny") to load the Yolos model for object detection. Does transformers.js load the model_quantized.onnx by default? Would I be able to load model.onnx?
A related question: Is there a way to check which model is loaded once the model is loaded? | https://github.com/huggingface/transformers.js/issues/849 | open | [
"question"
] | 2024-07-16T22:45:15Z | 2024-08-09T09:45:37Z | null | mram0509 |
huggingface/text-generation-inference | 2,239 | Can I somehow change the attention type from 'FlashAttention' in the text-server-launcher? | https://github.com/huggingface/text-generation-inference/issues/2239 | closed | [
"question",
"Stale"
] | 2024-07-16T18:37:45Z | 2024-08-24T01:52:31Z | null | wasifmasood | |
pytorch/executorch | 4,276 | How to export a pretrained model? | Is there a way to export a pretrained model to executorch? This example https://pytorch.org/executorch/stable/getting-started-setup.html#export-a-program only shows how to export a new model instance. I tried doing it like this
```
# 1. torch.export: Defines the program with the ATen operator set.
model.eval()
at... | https://github.com/pytorch/executorch/issues/4276 | closed | [] | 2024-07-16T14:55:42Z | 2024-07-22T21:55:34Z | null | Bresenham |
huggingface/diarizers | 13 | How to solve `CUDA error: out of memory while doing inference for my diarization model` | ERROR - An error occurred: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
I'm using a `12GB ... | https://github.com/huggingface/diarizers/issues/13 | open | [] | 2024-07-16T06:23:28Z | 2024-08-18T04:20:16Z | null | Ataullha |
pytorch/torchtitan | 462 | [FP8 options] Float8Linear vs TransformerEngine | Hi team, first of all, thanks for this great repo showcasing how to leverage the latest techniques in the torch ecosystem; it's been super useful and insightful :) I have a naive question about FP8 options and would like to know more about how you view them.
There's the https://github.com/NVIDIA/TransformerEngine by n... | https://github.com/pytorch/torchtitan/issues/462 | open | [
"question"
] | 2024-07-16T03:54:29Z | 2025-06-02T16:54:11Z | null | yundai424 |
pytorch/torchchat | 903 | GitHub code search doesn't work with folders called `build` | ### 🐛 Describe the bug
I was trying to look for the `model.py` definition
https://github.com/pytorch/torchchat/tree/main/build but it wasn't showing up
<img width="816" alt="Screenshot 2024-07-15 at 6 54 39 PM" src="https://github.com/user-attachments/assets/11021312-9e40-4ec6-adad-0a52a24f06e0">
generate.py wh... | https://github.com/pytorch/torchchat/issues/903 | open | [
"actionable"
] | 2024-07-16T01:55:45Z | 2024-07-30T15:11:19Z | 1 | msaroufim |
pytorch/serve | 3,247 | TorchServe docker image with vllm, trt-llm dependencies | ### 🚀 The feature
To have a no-code solution with vllm and trt-llm, TorchServe needs a docker image with these dependencies.
Including this with TorchServe's GPU image will bloat the image for all users of TorchServe
We can instead have another image for GenAI.
### Motivation, pitch
No code solution for GenAI
###... | https://github.com/pytorch/serve/issues/3247 | open | [] | 2024-07-16T01:16:34Z | 2024-07-16T01:16:34Z | 0 | agunapal |
pytorch/xla | 7,689 | CUDA and GPU-Flavoured Docker/Container Image Missing CUDA Support | ## ❓ Questions and Help
Hi,
According to the docs [here]( https://github.com/pytorch/xla?tab=readme-ov-file#docker ), the image `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_cuda_12.1` should have Cuda 12.1 support for use on a local GPU. I have also tried pulling `xla:nightly_3.8_cuda_1... | https://github.com/pytorch/xla/issues/7689 | closed | [
"question",
"xla:gpu"
] | 2024-07-15T22:56:55Z | 2025-04-03T13:56:12Z | null | stellarpower |
huggingface/datasets | 7,051 | How to set_epoch with interleave_datasets? | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch)
Of course I... | https://github.com/huggingface/datasets/issues/7051 | closed | [] | 2024-07-15T18:24:52Z | 2024-08-05T20:58:04Z | null | jonathanasdf |
huggingface/accelerate | 2,933 | How to apply model parallelism on multiple machines? | Currently, I want to do LLM inference on multiple machines. Due to limited memory, I hope to use all machines together to load the model, and I'm blocked at this point. I have only found that, based on device_map, I can do model parallelism on a single machine with multiple cards.
May I have some ideas about how to realize this with Accelerate?... | https://github.com/huggingface/accelerate/issues/2933 | closed | [] | 2024-07-15T14:09:10Z | 2025-03-08T06:48:09Z | null | JerryLu991223 |
huggingface/chat-ui | 1,344 | Ollama chatPromptTemplate and parameters | Hi,
I have tried adding phi3-3.8b as an ollama model, hosted on my own on-prem ollama server.
I have basically copied the prompt template and parameters from microsoft/Phi-3-mini-4k-instruct as used on Hugging Face - but it does not seem to work; I always get "no output was generated".
sending a generate/chat http reques... | https://github.com/huggingface/chat-ui/issues/1344 | open | [
"support"
] | 2024-07-15T12:38:12Z | 2024-09-18T17:57:30Z | 7 | ran-haim |
pytorch/xla | 7,682 | Is there any way to directly execute the cached computational graph | ## ❓ Questions and Help
My application code is complex, but it's not computationally expensive, and the graph is consistent, so I tried to cache it with XLA_PERSISTENT_CACHE_PATH, but it took a long time to execute the logic (without performing any computation). Is there any way to execute the cached graph? I also trie... | https://github.com/pytorch/xla/issues/7682 | closed | [
"question",
"dynamo"
] | 2024-07-15T11:19:23Z | 2025-04-01T13:11:38Z | null | mars1248 |
huggingface/transformers | 31,963 | How to manually stop the LLM output? | I'm using `TextIteratorStreamer` for streaming output.
Since the LLM may repeat its output indefinitely, I would like to be able to have the LLM stop generating when it receives a cancel request.
Is there any way to accomplish this?
model: glm-4-9b-chat
```python
async def predict(messages, model_id: str, raw_r... | https://github.com/huggingface/transformers/issues/31963 | closed | [] | 2024-07-15T07:09:43Z | 2024-07-16T00:34:41Z | null | invokerbyxv |
huggingface/chat-ui | 1,343 | vllm 400 status code (no body) error | Hello everyone, I use the vllm openapi service, but I encountered a 400 status code (no body) error. How can I change it? Thanks
vllm:
```
python -m vllm.entrypoints.openai.api_server --model /home/rickychen/桌面/llm/models/Infinirc-Llama3-8B-5G-v1.0 --dtype auto --worker-use-ray --tensor-parallel-size 2 --port 8001... | https://github.com/huggingface/chat-ui/issues/1343 | open | [
"support"
] | 2024-07-14T12:49:59Z | 2024-09-19T12:26:36Z | 3 | rickychen-infinirc |
huggingface/chat-ui | 1,342 | undeclared node version dependency | Using the current chat-ui dockerhub image, I am unable to connect to localhost:3000 to run a simple instance of chat ui. The webservice returns 'Not Found' for all routes. Included below is my docker-compose file. If I change the chat-ui image to build with node 22 as the version, everything works as expected. Does chat-... | https://github.com/huggingface/chat-ui/issues/1342 | closed | [
"support"
] | 2024-07-13T21:06:53Z | 2024-07-16T14:53:34Z | 2 | slmagus |
huggingface/diffusers | 8,858 | how to know variant=fp16 beforehand | ### Describe the bug
Among diffusion checkpoints, some are fp16 and some are not.
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_1',
torch_dtype=torch.float16,
variant='fp16'
)
```
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_2',
torch_dtype=torch.float16,
)
```
How to k... | https://github.com/huggingface/diffusers/issues/8858 | closed | [
"bug",
"stale"
] | 2024-07-13T08:52:13Z | 2025-01-27T01:45:50Z | null | pure-rgb |
huggingface/dataset-viewer | 2,986 | Include code snippets for other libraries? | For example, in https://github.com/huggingface/huggingface.js/pull/797, we add `distilabel`, `fiftyone` and `argilla` to the list of libraries the Hub knows. However, the aim is only to handle the user-defined tags better, not to show code snippets.
In this issue, I propose to discuss if we should expand the list of... | https://github.com/huggingface/dataset-viewer/issues/2986 | open | [
"question",
"P2"
] | 2024-07-12T11:57:43Z | 2024-07-12T14:39:59Z | null | severo |
huggingface/trl | 1,830 | How to use the `predict` function in `DPOTrainer` | I want to get the logp and reward of the data through `predict`, but the prediction seems to include only one example.
What is the correct usage of `predict`?

| https://github.com/huggingface/trl/issues/1830 | closed | [
"❓ question"
] | 2024-07-12T06:30:20Z | 2024-10-07T12:13:22Z | null | AIR-hl |
huggingface/datatrove | 248 | solved: how to launch a slurm executor from an interactive slurm job | I forget where in the docs/code I saw that it said not to launch a slurm executor from an `srun` interactive session - which is not always avoidable.
There is a simple workaround - unset the `SLURM_*` env vars and then launch, and it works just fine.
```
unset $(printenv | grep SLURM | sed -E 's/(.*)=.*/\1/'... | https://github.com/huggingface/datatrove/issues/248 | open | [] | 2024-07-12T04:08:02Z | 2024-07-13T01:15:56Z | null | stas00 |
huggingface/diffusers | 8,843 | variable (per frame) IP Adapter weights in video | Is there a (planned or existing) way to have variable IP Adapter weights for videos (e.g. with AnimateDiff)?
That means setting different values for different frames, as both scaling and masking currently seem to apply to the whole generation at once (be it video or still image). | https://github.com/huggingface/diffusers/issues/8843 | open | [
"stale",
"low-priority",
"consider-for-modular-diffusers"
] | 2024-07-11T16:49:43Z | 2024-12-13T15:05:24Z | 6 | eps696 |
huggingface/transformers.js | 846 | range error: array buffer allocation failed <- how to catch this error? | ### Question
While Transformers.js rocks on desktop, my Pixel with 6GB of RAM almost always crashes the webpage when trying to run things like Whisper or TTS.
<img width="531" alt="Screenshot 2024-07-11 at 14 27 08" src="https://github.com/xenova/transformers.js/assets/805405/f8862561-7618-4c80-87e2-06c86f262698">
... | https://github.com/huggingface/transformers.js/issues/846 | open | [
"question"
] | 2024-07-11T12:32:46Z | 2024-07-11T12:32:46Z | null | flatsiedatsie |
huggingface/diffusers | 8,834 | Will the training code of SD3 Controlnet be released? | **Is your feature request related to a problem? Please describe.**
Training code of SD3 ControlNet
**Describe the solution you'd like.**
Could you please release the training code for SD3 ControlNet? I tried to train it but failed, so I want to check what the reason is.
| https://github.com/huggingface/diffusers/issues/8834 | closed | [] | 2024-07-11T03:32:55Z | 2024-09-11T01:34:38Z | 3 | ChenhLiwnl |
huggingface/optimum | 1,953 | Export AWQ models to ONNX | ### System Info
```shell
python==3.10
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reprod... | https://github.com/huggingface/optimum/issues/1953 | closed | [
"feature-request",
"onnx"
] | 2024-07-11T02:18:56Z | 2024-07-25T12:42:38Z | 1 | Toan-it-mta |
pytorch/xla | 7,667 | Equivalent of get_worker_info to split an IterableDataset | ## ❓ Questions and Help
I have an `IterableDataset` of unknown size. I would like to use something like `torch.utils.data.get_worker_info` to split it across the spawned `xmp` processes, but AFAIK there is no equivalent in `xla_multiprocessing`. Is there a workaround? I tried randomly subsampling on each process but... | https://github.com/pytorch/xla/issues/7667 | closed | [] | 2024-07-10T18:46:08Z | 2024-08-06T01:17:46Z | 20 | davidaknowles |
huggingface/optimum | 1,951 | how can I get an ONNX-format int4 model? | ### System Info
```shell
Could you please tell me how I can obtain an int type model in ONNX format?
I’ve used the following code to quantize an ONNX model into QUINT8, but when I tried to quantize it into INT4, I found there were no relevant parameters to choose. As far as I know, GPTQ allows selecting n-bit quanti... | https://github.com/huggingface/optimum/issues/1951 | open | [
"bug"
] | 2024-07-10T14:00:19Z | 2024-07-10T14:00:19Z | 0 | zhangyu68 |
huggingface/diffusers | 8,824 | [Solved] How to make custom datasets for instruct-pix2pix? | ### Describe the bug
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/builder.py", line 1750, in _prepare_split_single
[rank0]: for key, record in generator:
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/packaged_modules/folde... | https://github.com/huggingface/diffusers/issues/8824 | closed | [
"bug"
] | 2024-07-10T05:35:38Z | 2024-07-11T02:18:40Z | null | jeonga0303 |
huggingface/optimum | 1,949 | ValueError: Trying to export a florence2 model | Hello,
I am attempting to export and quantize the Florence-2 model for CPU usage but encountered the following error:
```
ValueError: Trying to export a florence2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://hug... | https://github.com/huggingface/optimum/issues/1949 | open | [
"feature-request",
"onnx"
] | 2024-07-10T04:59:06Z | 2024-10-23T10:07:05Z | 1 | ghost |
huggingface/transformers.js | 842 | Trying to run the Modnet example with nodejs on macOS results in Unknown model class "modnet", attempting to construct from base class. Model type for 'modnet' not found, assuming encoder-only architecture. | ### Question
Hello,
How can one run the modnet example?
```
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
const processor = await AutoProcessor.from_pretrained('Xe... | https://github.com/huggingface/transformers.js/issues/842 | closed | [
"question"
] | 2024-07-09T16:19:22Z | 2025-03-27T18:58:03Z | null | gabrielstuff |
huggingface/chat-ui | 1,335 | [v0.9.1] Switch the LLM model mid-conversation? | ## Description
Currently, **chat-ui** does not support changing the language model once a conversation has started. For example, if I begin a chat with _Llama 3_, I cannot switch to _Gemini 1.5_ mid-conversation, even if I change the setting in the UI.
## Steps to Reproduce
* Start a conversation with one lang... | https://github.com/huggingface/chat-ui/issues/1335 | open | [] | 2024-07-09T13:43:16Z | 2024-09-13T16:45:23Z | 3 | adhishthite |