| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/finetrainers | 25 | how to fix it? training/cogvideox_text_to_video_lora.py FAILED | ### System Info
cuda11.8
x2 3090
linux ubuntu 22.04 lts
pytorch2.4
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Reproduction
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/dev_ml/cogvideox-fac... | https://github.com/huggingface/finetrainers/issues/25 | closed | [] | 2024-10-11T08:49:23Z | 2024-12-23T07:40:41Z | null | D-Mad |
huggingface/finetrainers | 22 | What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding? | About Dataset Preparation,
What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?
example: 1280×720, below 5 Mbps, H.264 encoding recommended.
Any suggestions here? | https://github.com/huggingface/finetrainers/issues/22 | closed | [] | 2024-10-11T05:12:57Z | 2024-10-14T07:20:36Z | null | Erwin11 |
huggingface/accelerate | 3,156 | how to load model with fp8 precision for inference? | ### System Info
```Shell
Is it possible to load a model with the accelerate library for fp8 inference?
I have access to H100 GPUs.
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported ... | https://github.com/huggingface/accelerate/issues/3156 | closed | [] | 2024-10-11T04:31:47Z | 2024-12-02T15:07:58Z | null | imrankh46 |
huggingface/diffusers | 9,643 | Flux does not support multiple Controlnets? | ### Describe the bug
I'm encountering an issue with the FluxControlNetPipeline. The `controlnet` parameter is supposed to accept a `List[FluxControlNetModel]`. However, when I attempt to execute my code, I run into the following error:
```
Traceback (most recent call last):
File "/opt/tiger/test_1/h.py", line... | https://github.com/huggingface/diffusers/issues/9643 | closed | [
"bug"
] | 2024-10-11T03:47:06Z | 2024-10-11T17:39:20Z | 1 | RimoChan |
huggingface/diffusers | 9,639 | How to use my own trained LoRA on a local computer? | local_model_path = r"D:\downloads\FLUX.1-schnell"
pipe = FluxPipeline.from_pretrained(local_model_path, torch_dtype=torch.bfloat16)
# LoRA not working this way
pipe.load_lora_weights("XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors")
pipe.load_lora_weights(r"D:\AI\stable-diffusion-webui-forg... | https://github.com/huggingface/diffusers/issues/9639 | closed | [] | 2024-10-10T23:19:47Z | 2024-11-10T08:49:08Z | null | derekcbr |
huggingface/evaluation-guidebook | 14 | [TOPIC] How to design a good benchmark depending on your eval goals | Eval goals can be finding a good model for you vs ranking models vs choosing a good training config.
Request by Luca Soldaini
Cf https://x.com/soldni/status/1844409854712218042 | https://github.com/huggingface/evaluation-guidebook/issues/14 | closed | [] | 2024-10-10T16:20:40Z | 2025-09-18T08:31:15Z | null | clefourrier |
huggingface/diffusers | 9,633 | Confusion about accelerator.num_processes in get_scheduler | In the example code from [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image_sdxl.py#L974):
```python
num_warmup_steps = args.lr_warmup_steps * args.gradient_accumulation_steps
```
But in [train_text_to_image... | https://github.com/huggingface/diffusers/issues/9633 | closed | [
"stale"
] | 2024-10-10T08:39:12Z | 2024-11-09T15:37:33Z | 5 | hj13-mtlab |
huggingface/transformers.js | 968 | It's ready | ### Question
The project I've been working on for the past few months is now ready enough to reveal to the world. Transformers.js is an essential part of it, and I just want to say thank you for your amazing work.
https://www.papeg.ai
As you can see in the source code, there are lots of workers that implement ... | https://github.com/huggingface/transformers.js/issues/968 | closed | [
"question"
] | 2024-10-10T04:39:48Z | 2025-05-29T22:49:24Z | null | flatsiedatsie |
huggingface/datasets | 7,211 | Describe only selected fields in README | ### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some f... | https://github.com/huggingface/datasets/issues/7211 | open | [
"enhancement"
] | 2024-10-09T16:25:47Z | 2024-10-09T16:25:47Z | 0 | alozowski |
huggingface/transformers.js | 965 | Error: cannot release session. invalid session id | ### Question
I'm trying to get ASR + segmentation to run on a mobile phone (Pixel 6A, 6GB ram). This time on Brave mobile ;-)
ASR alone works fine. But I have a question about also getting the speaker recognition to run (segmentation+verification).
In the example implementation a `promiseAll` is used to run bo... | https://github.com/huggingface/transformers.js/issues/965 | open | [
"question"
] | 2024-10-09T13:57:48Z | 2024-10-09T15:51:02Z | null | flatsiedatsie |
huggingface/chat-ui | 1,509 | (BUG) OAuth login splash is BROKEN/does NOT work | On newer versions of chat-ui the login splash screen does not work. Say, for instance, you have OAuth set up and are not logged in. You should get a popup prompting you to log in and not see the interface. This used to work without a problem. I just realized this is no longer working on the newer versions. I have oauth s... | https://github.com/huggingface/chat-ui/issues/1509 | closed | [
"bug"
] | 2024-10-08T18:06:01Z | 2024-11-27T15:02:46Z | 2 | bpawnzZ |
huggingface/trl | 2,196 | How to exit training when the loss is less than a specified value in SFTTrainer? | I asked this question in ChatGPT first, it gave the answer below:
```
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported
# Define customized Trainer class
class CustomSFTTrainer(SFTTrainer):
def __init__(self, *args, min_loss_threshold=0.001, **k... | https://github.com/huggingface/trl/issues/2196 | closed | [
"❓ question",
"🏋 SFT"
] | 2024-10-08T03:13:27Z | 2024-10-08T10:39:51Z | null | fishfree |
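Rather than subclassing the trainer, the documented route in `transformers`/`trl` is a `TrainerCallback` that sets `control.should_training_stop` once the logged loss falls below a threshold. A minimal sketch (the class name and default threshold are illustrative, not from the issue):

```python
from transformers import TrainerCallback

class StopBelowLossCallback(TrainerCallback):
    """Stop training once the logged training loss drops below a threshold."""

    def __init__(self, min_loss_threshold: float = 0.001):
        self.min_loss_threshold = min_loss_threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        # `logs` carries the running metrics; "loss" is present on logging steps
        if logs and logs.get("loss", float("inf")) < self.min_loss_threshold:
            control.should_training_stop = True
        return control
```

It would then be passed as `SFTTrainer(..., callbacks=[StopBelowLossCallback(0.01)])`.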
huggingface/safetensors | 532 | Documentation about multipart safetensors | ### Feature request
Add examples to the documentation about handling multipart safetensors files (`*-00001.safetensors`, `*-00002.safetensors`, etc.). How to load/save them?
### Motivation
This is a widespread format, but the README and docs don't contain enough information about it.
### Your contribution
Can't help by m... | https://github.com/huggingface/safetensors/issues/532 | closed | [] | 2024-10-07T20:14:48Z | 2025-01-03T17:36:31Z | 6 | attashe |
huggingface/diffusers | 9,599 | Why is there no LoRA-only finetune example for FLUX.1? | **Is your feature request related to a problem? Please describe.**
The only example of LoRA finetune for FLUX.1 I discovered is here:
https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py
which is a DreamBooth example. DreamBooth is VRAM-intensive and not useful for... | https://github.com/huggingface/diffusers/issues/9599 | closed | [] | 2024-10-07T06:22:54Z | 2024-10-09T12:48:32Z | 3 | eeyrw |
huggingface/chat-ui | 1,506 | Add support for local models | ## Describe your feature request
I was looking for an open-source alternative to PocketPal, which lets you converse with local models on iOS and Android https://apps.apple.com/us/app/pocketpal-ai/id6502579498 and I was wondering if HuggingChat could be this alternative? The idea is to have an e2e open-source soluti... | https://github.com/huggingface/chat-ui/issues/1506 | closed | [
"enhancement"
] | 2024-10-06T20:18:24Z | 2024-10-07T13:45:45Z | 3 | arnaudbreton |
huggingface/tokenizers | 1,644 | How to build a custom tokenizer on top of an existing Llama 3.2 tokenizer? | Hi,
I was trying to create a custom tokenizer for a language that is not covered by the Llama 3.2 tokenizer.
I could not find which tokenizer from HF is the exact equivalent of Llama's tokenizer [link](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py), so that I will be ... | https://github.com/huggingface/tokenizers/issues/1644 | closed | [
"training"
] | 2024-10-05T13:18:55Z | 2025-02-26T12:06:15Z | null | yakhyo |
huggingface/datasets | 7,196 | concatenate_datasets does not preserve shuffling state | ### Describe the bug
After concatenate datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156
This means concatenation can't be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623
I also noticed th... | https://github.com/huggingface/datasets/issues/7196 | open | [] | 2024-10-03T14:30:38Z | 2025-03-18T10:56:47Z | 1 | alex-hh |
huggingface/diffusers | 9,575 | diffusers version updated to 0.27.0 from 0.20.0; training code seems not to work | I have trained an inpainting model using diffusers 0.20.0. The trained model works as expected. However, something seems wrong when I update the diffusers version to 0.27.0, while keeping the training code and other requirements the same. The training code runs successfully, but the inference outputs look like noise. I... | https://github.com/huggingface/diffusers/issues/9575 | closed | [] | 2024-10-03T14:30:21Z | 2024-10-15T08:58:36Z | 4 | huangjun12 |
huggingface/transformers | 33,909 | How to implement weight decay towards the pre-trained model? | Hello, let me ask one question.
If using the HF Trainer for supervised fine-tuning, how do I implement penalizing the distance between the starting and current weights? This was shown to be effective in https://arxiv.org/abs/1706.03610 | https://github.com/huggingface/transformers/issues/33909 | open | [
"Usage",
"Feature request"
] | 2024-10-03T11:18:53Z | 2024-10-22T13:16:26Z | null | sedol1339 |
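The penalty from that paper (L2-SP) is just a squared L2 distance to a frozen copy of the starting weights, added to the loss at each step — for example inside an overridden `Trainer.compute_loss`. A sketch of the penalty term alone; the function name and default weight are illustrative:

```python
import torch

def l2_sp_penalty(model, ref_state, weight=0.01):
    """Squared L2 distance between current and pre-trained weights (L2-SP)."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if param.requires_grad:
            penalty = penalty + ((param - ref_state[name]) ** 2).sum()
    return weight * penalty
```

The reference weights would be captured once, before training, e.g. `ref_state = {n: p.detach().clone() for n, p in model.named_parameters()}`.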
huggingface/datasets | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | ### Feature request
Huggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, that all these guides assume the audio array data decoded/inserted into an HF dataset always originates from individual files. The [Audio-dataclass](... | https://github.com/huggingface/datasets/issues/7189 | open | [
"enhancement"
] | 2024-10-02T16:38:38Z | 2024-10-02T17:01:40Z | 0 | Lauler |
huggingface/transformers.js | 958 | Zombies in memory - something is blocking (re)loading of Whisper after a page is closed and re-opened | ### Question
I've been trying to debug this issue all afternoon, but haven't gotten any further. The code runs on desktop, but not on Android Chrome.
This is with V3 Alpha 19.
<img width="571" alt="Screenshot 2024-10-02 at 16 06 16" src="https://github.com/user-attachments/assets/c5fbb2cb-0cdf-431a-8099-021d19a1... | https://github.com/huggingface/transformers.js/issues/958 | closed | [
"question"
] | 2024-10-02T14:10:27Z | 2024-10-18T12:47:17Z | null | flatsiedatsie |
huggingface/diffusers | 9,567 | [community] Improving docstrings and type hints | There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civiliza... | https://github.com/huggingface/diffusers/issues/9567 | closed | [
"documentation",
"good first issue",
"contributions-welcome"
] | 2024-10-02T03:20:44Z | 2025-11-13T22:45:59Z | 16 | a-r-r-o-w |
huggingface/datasets | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9` which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for multiprocess so something like `multi... | https://github.com/huggingface/datasets/issues/7186 | closed | [] | 2024-10-01T22:29:32Z | 2024-10-02T06:08:24Z | 0 | shubhbapna |
huggingface/chat-ui | 1,499 | Error 500 "RPError" | OpenID Connect + SafeNet Trusted Access (STA) | Hello,
I would like to deploy OpenID Connect with SafeNet Trusted Access (STA).
From this 3-minute video, I've done all the steps, except for OAuth.tools, which I don't use:
https://www.youtube.com/watch?v=hSWXFSadpQQ
Here's my bash script that deploys the containers, ```deploy.sh```:
```bash
#!/bin/bas... | https://github.com/huggingface/chat-ui/issues/1499 | open | [
"support"
] | 2024-09-30T12:54:16Z | 2024-09-30T12:57:51Z | 0 | avirgos |
huggingface/diffusers | 9,560 | FP32 training for sd3 controlnet | Hi,
I have been using `examples\controlnet\train_controlnet_sd3.py` for controlnet training for a while, and I have some confusion and would like your advice
1. In line 1097:
`vae.to(accelerator.device, dtype=torch.float32)`
It seems we should use fp32 for VAE, but as far as I know, SD3 currently has no fp32 ch... | https://github.com/huggingface/diffusers/issues/9560 | closed | [
"stale"
] | 2024-09-30T08:07:04Z | 2024-10-31T15:13:19Z | 11 | xduzhangjiayu |
huggingface/huggingface_hub | 2,578 | What is the highest Python version currently supported? | ### Describe the bug
I utilized Hugging Face Spaces to construct my application, which was built using Gradio, zerogpuspace, and the link is: https://huggingface.co/spaces/tanbw/CosyVoice
In the readme.md, I specified the Python version as 3.8.9, but the version of Python that the application prints out is still 3.1.... | https://github.com/huggingface/huggingface_hub/issues/2578 | closed | [
"bug"
] | 2024-09-29T14:37:38Z | 2024-09-30T07:05:29Z | null | tanbw |
huggingface/diffusers | 9,555 | [Flux Controlnet] Add control_guidance_start and control_guidance_end | It'd be nice to have `control_guidance_start` and `control_guidance_start` parameters added to flux Controlnet and Controlnet Inpainting pipelines.
I'm currently making experiments with Flux Controlnet Inpainting but the results are poor even with a `controlnet_conditioning_scale` set to 0.6.
I have to set `cont... | https://github.com/huggingface/diffusers/issues/9555 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-29T12:37:39Z | 2024-10-10T12:29:03Z | 8 | simbrams |
huggingface/hub-docs | 1,435 | How to check if a space is duplicated from another one using HF API? | I cannot find any related specifications in the documentation...Thanks! | https://github.com/huggingface/hub-docs/issues/1435 | open | [] | 2024-09-28T23:52:08Z | 2025-01-16T17:08:34Z | null | zhimin-z |
huggingface/diffusers | 9,551 | How to use x-labs flux controlnet models in diffusers? | ### Model/Pipeline/Scheduler description
The following ControlNets are supported in ComfyUI, but I was wondering how we can use these in diffusers as well. Afaik, there is no from_single_file method for FluxControlNet to load the safetensors?
### Open source status
- [x] The model implementation ... | https://github.com/huggingface/diffusers/issues/9551 | closed | [] | 2024-09-28T20:01:15Z | 2024-09-29T06:59:46Z | null | neuron-party |
huggingface/text-generation-inference | 2,583 | How to turn on the KV cache when serving a model? | ### System Info
TGI 2.3.0
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
The TTFT is really slow compared to vLLM. Can it be improved? If so, how do I turn on the KV cache when launching a model?
```
model=HuggingFaceH4/zeph... | https://github.com/huggingface/text-generation-inference/issues/2583 | open | [] | 2024-09-28T19:32:15Z | 2024-10-25T12:47:02Z | null | hahmad2008 |
huggingface/transformers.js | 948 | Getting Local models/wasm working with Create React App | ### Question
I realize there's been a lot of talk about this in other issues, but I'm trying to figure out whether local-only model and WASM files will work with Create React App. I'm using `WhisperForConditionalGeneration` from `@huggingface/transformers` version `3.0.0-alpha.9`.
My setup:
```
env.allowRemoteMod... | https://github.com/huggingface/transformers.js/issues/948 | closed | [
"question"
] | 2024-09-26T20:42:33Z | 2024-09-26T21:26:30Z | null | stinoga |
huggingface/blog | 2,369 | How to finetune jina-embeddings-v3 with LoRA? | https://github.com/huggingface/blog/issues/2369 | open | [] | 2024-09-26T07:25:16Z | 2024-09-26T07:25:16Z | null | LIUKAI0815 | |
huggingface/text-generation-inference | 2,569 | Question: What is the preferred way to cite TGI/this repo? Didn't see a citation file. | https://github.com/huggingface/text-generation-inference/issues/2569 | open | [] | 2024-09-26T02:07:42Z | 2024-09-26T02:07:42Z | null | mkultraWasHere | |
huggingface/lerobot | 454 | Venv isn't needed in docker | I noticed that in your Dockerfiles you are using a virtual environment. Docker is already a virtual environment at the system level. Is there a reason for using a Python virtual environment as well? Typically this is redundant/unnecessary, and you'd only use venv or similar on your local machine.
If there isn't a good r... | https://github.com/huggingface/lerobot/issues/454 | closed | [
"enhancement",
"question",
"stale"
] | 2024-09-25T16:33:17Z | 2025-10-23T02:29:11Z | null | MichaelrMentele |
huggingface/diffusers | 9,528 | load_ip_adapter for distilled sd models | Is it possible to load IP-Adapter for distilled SD v1 or v2 based models such as nota-ai/bk-sdm-tiny or nota-ai/bk-sdm-v2-tiny?
When I tried to load ip adapter using bk-sdm-tiny
```python
pipe.load_ip_adapter(
"h94/IP-Adapter",
subfolder="models",
weight_name="ip-adapter-plus_sd15.bin",
low_c... | https://github.com/huggingface/diffusers/issues/9528 | closed | [
"stale"
] | 2024-09-25T04:31:00Z | 2025-01-12T06:01:40Z | 7 | kmpartner |
huggingface/chat-ui | 1,486 | Getting 403 on chat ui config for aws sagemaker endpoint |
Hi All,
Looking into configuring chat-ui with an AWS SageMaker endpoint and getting the following error:

```
DOTENV_LOCAL was found in the ENV variables. Creating .env.local file.
{"level":30,"time":1727231014113,"pid":23,"ho... | https://github.com/huggingface/chat-ui/issues/1486 | open | [
"support"
] | 2024-09-25T02:41:08Z | 2024-09-25T02:41:08Z | 0 | nauts |
huggingface/chat-macOS | 7 | Asking "what time is it?" will always return the local time of Paris, regardless of your location (⌘R+) | <img width="487" alt="Screenshot 2024-09-24 at 11 54 17 AM" src="https://github.com/user-attachments/assets/02d26c05-ae37-4caf-a3ff-5bc6aec42068">
I wonder how we can localize questions like this. I've tried ⌘R+, which always gives me the local time of Paris. Qwen2.5-72B and Llama 3.1 make up another non-specific tim... | https://github.com/huggingface/chat-macOS/issues/7 | open | [
"good first issue"
] | 2024-09-24T23:09:31Z | 2024-10-23T20:08:57Z | null | Reza2kn |
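On the app side, one fix is to resolve the time locally (e.g. via a tool call) rather than letting the model guess a time zone; Python's stdlib `zoneinfo` makes this a one-liner. A sketch with a hypothetical helper name:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def what_time_is_it(tz_name: str) -> str:
    """Answer 'what time is it?' for the user's own IANA time zone."""
    now = datetime.now(ZoneInfo(tz_name))
    return f"It is {now:%H:%M} in {tz_name}."
```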
huggingface/diffusers | 9,520 | UNetMotionModel.dtype is really expensive to call, is it possible to cache it during inference? | **What API design would you like to have changed or added to the library? Why?**
We are using the class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin),
and its `forward()` implementation calls `self.dtype`, which is very expensive.
and [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255478105).
AnimateDiff SparseCtrl RGB does not work similarly to other implementations and cannot replicate their outputs. This makes me ... | https://github.com/huggingface/diffusers/issues/9508 | open | [
"bug",
"help wanted",
"stale",
"contributions-welcome",
"advanced"
] | 2024-09-23T21:42:54Z | 2025-08-10T16:47:50Z | 9 | a-r-r-o-w |
huggingface/lerobot | 451 | Inquiry about Implementation of "Aloha Unleashed" | First and foremost, I would like to extend my heartfelt gratitude for your incredible work on the LeRobot project.
I recently came across the paper "Aloha Unleashed" published by the Aloha team a few months ago, and I am curious to know if there are any plans to implement the methodologies and findings from this pap... | https://github.com/huggingface/lerobot/issues/451 | open | [
"question",
"robots"
] | 2024-09-23T09:14:56Z | 2025-08-20T19:42:37Z | null | lightfate |
huggingface/text-generation-inference | 2,541 | How to serve local models with the Python package (not Docker) | ### System Info
`pip install text-generation` with version '0.6.0'
I need to use the Python package, not Docker.
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
```
from text_generation import Client
# Initialize the c... | https://github.com/huggingface/text-generation-inference/issues/2541 | open | [] | 2024-09-20T21:10:09Z | 2024-09-26T06:55:50Z | null | hahmad2008 |
huggingface/competitions | 41 | how to debug a script submission | Is there a way to see logs or errors of a script-based submission? | https://github.com/huggingface/competitions/issues/41 | closed | [] | 2024-09-20T18:04:44Z | 2024-09-30T16:08:42Z | null | ktrapeznikov |
huggingface/diffusers | 9,485 | Can we allow making everything on gpu/cuda for scheduler? | **What API design would you like to have changed or added to the library? Why?**
Is it possible to allow setting every tensor attribute of the scheduler to a CUDA device?
In https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py
It looks like attributes such as `scheduler.alphas_cu... | https://github.com/huggingface/diffusers/issues/9485 | open | [
"stale",
"scheduler",
"performance"
] | 2024-09-20T12:38:16Z | 2024-12-17T15:04:46Z | 14 | xiang9156 |
huggingface/optimum | 2,032 | ONNX support for decision transformers | ### Feature request
I am trying to train offline RL using a decision transformer and convert it to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "sequence-classification"
# load config
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(m... | https://github.com/huggingface/optimum/issues/2032 | closed | [
"onnx"
] | 2024-09-20T08:45:28Z | 2024-11-25T13:00:02Z | 1 | ra9hur |
huggingface/setfit | 558 | How to improve the accuracy while classifying short text with less context | Hi, my use case is to classify job titles into functional areas. I finetuned `all-mpnet-base-v2` with the help of setfit by providing some 10+ examples for each class (functional areas).
I got `82%` accuracy on running the evaluation on my test set. I observed some of the simple & straightforward job titles are clas... | https://github.com/huggingface/setfit/issues/558 | open | [] | 2024-09-20T06:09:07Z | 2024-11-11T11:23:31Z | null | 29swastik |
huggingface/safetensors | 527 | [Question] Comparison with the zarr format? | Hi,
I know that safetensors are widely used nowadays in HF, and the comparisons made in this repo's README file make a lot of sense.
However, I am now surprised to see that there is no comparison with zarr, which is probably the most widely used format to store tensors in a universal, compressed and scalable way... | https://github.com/huggingface/safetensors/issues/527 | open | [] | 2024-09-19T13:32:17Z | 2025-01-13T17:56:46Z | 13 | julioasotodv |
huggingface/transformers | 33,584 | How to fine-tune QLoRA with a custom Trainer | Full model fine-tuning code is given below. How can I modify the code to train a QLoRA-based model?
```import sys
import os
current_directory = os.path.dirname(os.path.abspath(__file__))
sys.path.append(current_directory)
from src.custom_dataset import RawFileDataset
import copy
import random
from dataclasse... | https://github.com/huggingface/transformers/issues/33584 | closed | [
"trainer",
"Quantization"
] | 2024-09-19T09:40:00Z | 2024-10-28T08:05:06Z | null | ankitprezent |
huggingface/diffusers | 9,470 | Prompt scheduling in Diffusers like A1111 | Hi everyone, I have a question about how to implement the [prompt scheduling feature](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) from A1111 with the diffusers library.
**Example prompt:** Official portrait of a smiling world war ii general, `[male:female:0.99]`, cheerful, happy, det... | https://github.com/huggingface/diffusers/issues/9470 | closed | [] | 2024-09-19T09:07:30Z | 2024-10-19T17:22:23Z | 5 | linhbeige |
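The parsing half of A1111's prompt editing is straightforward: at each denoising step, resolve every `[from:to:when]` marker, then re-encode the resulting prompt with the pipeline's text encoder. A sketch of just the resolution step (the diffusers wiring — swapping prompt embeddings per step in a manual denoising loop — is left out; function and pattern names are made up):

```python
import re

_EDIT = re.compile(r"\[([^\[\]:]*):([^\[\]:]*):([0-9.]+)\]")

def resolve_prompt(prompt, step, total_steps):
    """Resolve A1111-style [from:to:when] edits for one denoising step."""
    def swap(match):
        before, after, when = match.group(1), match.group(2), float(match.group(3))
        # `when` <= 1 is a fraction of total steps, otherwise an absolute step
        boundary = when * total_steps if when <= 1 else when
        return before if step < boundary else after
    return _EDIT.sub(swap, prompt)
```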
huggingface/chat-ui | 1,476 | Update docs to explain how to use `tokenizer` field for chat prompt formats | ## Bug description
In README.md, it's stated that the prompts used in production for HuggingChat can be found in PROMPTS.md.
However, PROMPTS.md has not been updated for 7 months and there are several prompts missing for newer models.
| https://github.com/huggingface/chat-ui/issues/1476 | open | [
"bug",
"documentation"
] | 2024-09-18T22:49:53Z | 2024-09-20T18:05:05Z | null | horsten |
huggingface/transformers.js | 935 | Is converting a Gemma 2B quantized compatible with transformers.js/onnx? | ### Question
I'm new to development and wanted to know whether converting a Gemma 2B model using the Optimum converter would work for this model. | https://github.com/huggingface/transformers.js/issues/935 | open | [
"question"
] | 2024-09-18T15:57:55Z | 2024-09-24T20:26:53Z | null | iamhenry |
huggingface/dataset-viewer | 3,063 | Simplify test code where a dataset is set as gated | [huggingface_hub@0.25.0](https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0) provides an API to set a repository as gated.
We had included a custom version of `update_repo_settings` because it lacked a `gated` parameter. Now we can switch back to the `huggingface_hub` method
https://github.com/hu... | https://github.com/huggingface/dataset-viewer/issues/3063 | closed | [
"good first issue",
"tests",
"refactoring / architecture",
"dependencies"
] | 2024-09-18T09:08:14Z | 2025-07-17T15:00:40Z | null | severo |
huggingface/transformers.js | 934 | Repeating tokens in TextStreamer | ### Question
```
import {
AutoTokenizer,
AutoModelForCausalLM,
TextStreamer,
InterruptableStoppingCriteria,
} from "@huggingface/transformers";
class TextGenerationPipeline {
static model = null;
static tokenizer = null;
static streamer = null;
static async getInstance(
progress_cal... | https://github.com/huggingface/transformers.js/issues/934 | closed | [
"question"
] | 2024-09-18T02:53:36Z | 2025-10-13T04:50:11Z | null | chandeldivyam |
huggingface/transformers.js | 933 | Uncaught (in promise) TypeError: r.logits is not iterable | ### Question
Hey guys,
I have been trying to train a model for text classification then convert it to an onnx file for use in transformers js following this video
https://www.youtube.com/watch?v=W_lUGPMW_Eg
I keep getting the error Uncaught (in promise) TypeError: r.logits is not iterable
Any ideas on wher... | https://github.com/huggingface/transformers.js/issues/933 | open | [
"question"
] | 2024-09-16T20:26:02Z | 2024-09-17T19:35:26Z | null | Joseff-Evans |
huggingface/chat-ui | 1,472 | Mistral API configuration without Cloudflare | I'd like to set up a local deployment using **only the Mistral API**: https://docs.mistral.ai/api.
Can I use ChatUI without an HF deployment and a Cloudflare account?
I leave the .env unchanged and overwrite the env.local with the following code
```yml
AGENT_ID=<my_agent_id_from_mistral>
MISTRAL_API_KEY==<mytok... | https://github.com/huggingface/chat-ui/issues/1472 | open | [
"support"
] | 2024-09-16T18:51:09Z | 2024-09-17T08:43:40Z | 0 | JonasMedu |
huggingface/transformers.js | 932 | Best small model for text generation? | ### Question
I'm looking to build an AI journaling app that helps you reflect on your journal entries
I'm looking for a model (like GPT or Claude) that will take the selected text and provide insights based on a prompt I provide
In this case the prompt will provide suggestions based on psychology techniques lik... | https://github.com/huggingface/transformers.js/issues/932 | open | [
"question"
] | 2024-09-16T18:06:23Z | 2024-09-26T08:06:35Z | null | iamhenry |
huggingface/distil-whisper | 149 | How to load the model using the openai-whisper package? | How to load the model using the openai-whisper package? | https://github.com/huggingface/distil-whisper/issues/149 | open | [] | 2024-09-15T15:08:46Z | 2024-09-15T15:08:46Z | null | lucasjinreal |
huggingface/competitions | 40 | How to modify the competition | Hi! I created a new competition using the [tool given here](https://huggingface.co/spaces/competitions/create). All good up to here.
Then the space was up and running automatically. To modify the competition, I cloned the space repository locally with the command given in the UI
```
git clone https://huggingface... | https://github.com/huggingface/competitions/issues/40 | closed | [
"stale"
] | 2024-09-15T13:45:26Z | 2024-10-08T15:06:28Z | null | dakshvar22 |
huggingface/speech-to-speech | 101 | I am really curious how to set up this project on a server to serve multiple users. I have been trying for a long time but haven't come up with a good solution. | https://github.com/huggingface/speech-to-speech/issues/101 | open | [] | 2024-09-15T13:42:18Z | 2025-02-04T15:44:31Z | null | demoBBB |
huggingface/transformers | 33,489 | passing past_key_values as a tuple is deprecated, but unclear how to resolve | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.44.2
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
... | https://github.com/huggingface/transformers/issues/33489 | closed | [
"bug"
] | 2024-09-14T13:58:18Z | 2025-11-29T04:50:43Z | null | RonanKMcGovern |
huggingface/lerobot | 436 | Image storage format | I am quite interested in using `LeRobotDataset` for large-scale training, and would like more context on the options for storing images so I am aware of the implications this might have:
- Did you by chance study if the mp4 video compression has any negative effects on the image quality in terms of model perfo... | https://github.com/huggingface/lerobot/issues/436 | closed | [
"question",
"dataset",
"stale"
] | 2024-09-12T16:38:21Z | 2025-10-23T02:29:14Z | null | nikonikolov |
huggingface/lerobot | 435 | Open-X datasets | Thanks for the great work! I am interested in converting more of the open-x datasets to `LeRobotDataset`.
- I was wondering if there was any particular reason the entire open-x wasn't added already, e.g. some difficulties you encountered with some specific datasets?
- Do you have any tips where I should be extra care... | https://github.com/huggingface/lerobot/issues/435 | closed | [
"enhancement",
"question",
"dataset"
] | 2024-09-12T16:29:40Z | 2025-10-08T08:25:55Z | null | nikonikolov |
huggingface/lerobot | 432 | some questions about real world env | ### System Info
```Shell
all software cfg match author's project
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [X] My own task or dataset (give details below)
### Reproduction
I am planning to control my own robot left-arm. I've almost figure out all the parts if lerobot-datase... | https://github.com/huggingface/lerobot/issues/432 | closed | [
"question"
] | 2024-09-12T09:53:23Z | 2025-10-08T08:27:48Z | null | NNsauce |
huggingface/chat-ui | 1,463 | Some bugs | ## Bug description
There are several issues that I have with the site, such as slow performance both on mobile and PC. When trying to select specific parts of the text, it goes back to the original message. Sometimes it occurs in errors that force me to always refresh the conversation. When I switch conversation I h... | https://github.com/huggingface/chat-ui/issues/1463 | open | [
"bug"
] | 2024-09-12T08:13:35Z | 2024-09-12T09:03:58Z | 0 | Ruyeex |
huggingface/transformers.js | 929 | what is pipeline? | https://github.com/huggingface/transformers.js/issues/929 | closed | [
"question"
] | 2024-09-12T05:09:05Z | 2024-10-04T10:24:42Z | null | chakravarthi-vatala | |
huggingface/diffusers | 9,417 | Suggestion for speeding up `index_for_timestep` by removing sequential `nonzero()` calls in samplers | **Is your feature request related to a problem? Please describe.**
First off, thanks for the great codebase and providing so many resources! I just wanted to provide some insight into an improvement I made for myself, in case you'd like to include it for all samplers. I'm using the `FlowMatchEulerDiscreteScheduler` an... | https://github.com/huggingface/diffusers/issues/9417 | open | [
"help wanted",
"wip",
"contributions-welcome",
"performance"
] | 2024-09-11T14:54:37Z | 2025-02-08T10:26:47Z | 11 | ethanweber |
huggingface/cosmopedia | 29 | What is the best way to cite the work? | This is absolutely fantastic work. Thank you very much for making it public.
What is the best way to cite this dataset/project? Is there any paper I can cite or should I cite the blog-post? | https://github.com/huggingface/cosmopedia/issues/29 | closed | [] | 2024-09-11T14:34:54Z | 2024-09-11T14:36:15Z | null | vijetadeshpande |
huggingface/diffusers | 9,416 | [Schedulers] Add SGMUniform | Thanks to @rollingcookies, we can see in this [issue](https://github.com/huggingface/diffusers/issues/9397) that this schedulers works great with the Hyper and probably also Lighting loras/unets.
It'd be fantastic if someone can contribute this scheduler to diffusers.
Please let me know if someone is willing to ... | https://github.com/huggingface/diffusers/issues/9416 | closed | [
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-09-11T13:59:27Z | 2024-09-23T23:39:56Z | 12 | asomoza |
huggingface/transformers | 33,416 | The examples in the examples directory are mostly for fine-tuning pre-trained models?how to trian from scratch | ### Model description
no
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/33416 | open | [
"New model"
] | 2024-09-11T03:32:53Z | 2024-10-03T23:28:42Z | null | zc-Chao |
huggingface/diffusers | 9,407 | callback / cannot yield intermediate images on the fly during inference | Hi,
in advance apologies if this has been asked already, or if I'm just misusing the diffusers API.
Using `diffusers==0.30.2`
**What API design would you like to have changed or added to the library? Why?**
I will illustrate straight away the general issue with my use case: I need to call a (FLUX) diffuser... | https://github.com/huggingface/diffusers/issues/9407 | closed | [] | 2024-09-10T16:32:04Z | 2024-09-25T12:28:20Z | 8 | Clement-Lelievre |
huggingface/transformers.js | 928 | The inference speed on the mobile end is a bit slow | ### Question
If it is a mobile device that does not support WebGPU, how can we improve the inference speed of the model? I have tried WebWorker, but the results were not satisfactory | https://github.com/huggingface/transformers.js/issues/928 | open | [
"question"
] | 2024-09-10T09:14:16Z | 2024-09-11T08:46:33Z | null | Gratifyyy |
huggingface/transformers.js | 927 | Error with Using require for ES Modules in @xenova/transformers Package | ### Question
trying to use require to import the Pipeline class from the @xenova/transformers package, but encounter the following error:
const { Pipeline } = require('@xenova/transformers');
^
Error [ERR_REQUIRE_ESM]: require() of ES Module D:\Z-charity\dating_app_backend\node_modules@xenova\transformers\src\t... | https://github.com/huggingface/transformers.js/issues/927 | closed | [
"question"
] | 2024-09-10T06:02:53Z | 2024-12-08T19:17:31Z | null | qamarali205 |
huggingface/transformers.js | 925 | V3 - WebGPU Whisper in Chrome Extention | ### Question
Can [webGPU accelerated whisper](https://huggingface.co/spaces/Xenova/whisper-webgpu) run in a chrome extension?
I checked the space and found the dependency `"@xenova/transformers": "github:xenova/transformers.js#v3"` which I imported in a chrome extension. When I tried to import it, it didn't work.
... | https://github.com/huggingface/transformers.js/issues/925 | open | [
"question"
] | 2024-09-10T02:52:41Z | 2025-01-18T16:03:26Z | null | chandeldivyam |
huggingface/diffusers | 9,402 | [Flux ControlNet] Add img2img and inpaint pipelines | We recently added img2img and inpainting pipelines for Flux thanks to @Gothos contribution.
We also have controlnet support for Flux thanks to @wangqixun.
It'd be nice to have controlnet versions of these pipelines since there's been requests to have them.
Basically, we need to create two new pipelines that a... | https://github.com/huggingface/diffusers/issues/9402 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-10T02:08:32Z | 2024-10-25T02:22:19Z | 11 | asomoza |
huggingface/transformers.js | 924 | Steps for suppressing strings | ### Question
What is the syntax for suppressing strings from showing up in the output text? Should I be doing that in my code, or is there a config option for it? I'm trying to remove everything that isn't a word:
```
const suppressedStrings = [
"[BLANK_AUDIO]",
"[CLEARS THROAT]",
"[Coughing]",
"[inaudib... | https://github.com/huggingface/transformers.js/issues/924 | open | [
"question"
] | 2024-09-09T21:44:16Z | 2025-01-24T17:53:47Z | null | stinoga |
huggingface/diffusers | 9,395 | [Q] Possibly unused `self.final_alpha_cumprod` | Hello team, quick question to make sure I understand the behavior of the `step` function in LCM Scheduler.
https://github.com/huggingface/diffusers/blob/a7361dccdc581147620bbd74a6d295cd92daf616/src/diffusers/schedulers/scheduling_lcm.py#L534-L543
Here, it seems that the condition `prev_timestep >= 0` is always `T... | https://github.com/huggingface/diffusers/issues/9395 | open | [
"stale"
] | 2024-09-09T17:35:08Z | 2024-11-09T15:03:23Z | 7 | fdtomasi |
huggingface/chat-ui | 1,458 | Chat ui sends message prompt 404 | ```
MONGODB_URL='mongodb://localhost:27017'
PLAYWRIGHT_ADBLOCKER='false'
MODELS=`[
{
"name": "Local minicpm",
"tokenizer": "minicpm",
"preprompt": "",
"chatPromptTemplate": "<s>{{preprompt}}{{#each messages}}{{#ifUser}}<|user|>\n{{content}}<|end|>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{c... | https://github.com/huggingface/chat-ui/issues/1458 | open | [
"support"
] | 2024-09-09T13:31:56Z | 2024-09-13T09:32:24Z | 2 | nextdoorUncleLiu |
huggingface/chat-ui | 1,456 | could you provide an easy way to force output as json? | current I use
preprompt:'only output json. Do not output anything that is not json. Do not use markdown format. Must begin with {.'
But llama is not smart enough to output json form. It always begin with Here is the JSON answer or begin with ```(markdown format) for give me unvalid json string.
It seems prepr... | https://github.com/huggingface/chat-ui/issues/1456 | open | [
"enhancement"
] | 2024-09-09T11:34:17Z | 2024-10-06T18:35:29Z | 1 | ghost |
huggingface/diffusers | 9,392 | [Scheduler] Add SNR shift following SD3, would the rest of the code need to be modified? | **What API design would you like to have changed or added to the library? Why?**
With the increasing resolution of image or video generation, we need to introduce more noise at smaller T, such as SNR shift following SD3. I have observed that CogVideoX's schedule has already implemented [this](https://github.com/hugg... | https://github.com/huggingface/diffusers/issues/9392 | open | [
"stale"
] | 2024-09-09T09:19:37Z | 2025-01-05T15:05:04Z | 7 | LinB203 |
huggingface/speech-to-speech | 96 | How to designate Melo TTS model to use my trained model? | Hi,
I am using Melo as TTS. And I trained with my datasets. How to designate Melo (here at speech to speech) to use my model?
Thanks! | https://github.com/huggingface/speech-to-speech/issues/96 | closed | [] | 2024-09-08T20:36:23Z | 2024-09-10T14:42:58Z | null | insufficient-will |
huggingface/huggingface_hub | 2,526 | How can I rename folders in given repo? I need to rename folders | ### Describe the bug
I am try to rename like below but it fails :/
```
from huggingface_hub import HfApi
import os
# Initialize the Hugging Face API
api = HfApi()
# Set the repository name
repo_name = "MonsterMMORPG/3D-Cartoon-Style-FLUX"
# Define the folder renaming mappings
folder_renames = {
... | https://github.com/huggingface/huggingface_hub/issues/2526 | closed | [
"bug"
] | 2024-09-07T17:23:54Z | 2024-09-09T10:49:26Z | null | FurkanGozukara |
huggingface/transformers | 33,359 | [Docs] How to build offline HTML or Docset files for other documentation viewers? | ### Feature request
How can I build the docs into HTML files for use with other documentation viewers like [Dash](https://www.kapeli.com/dash) , [Dash-User-Contributions](https://github.com/Kapeli/Dash-User-Contributions)?
I successfully built the PyTorch docs for Dash by working directly in their `docs/` directory... | https://github.com/huggingface/transformers/issues/33359 | closed | [
"Documentation",
"Feature request"
] | 2024-09-06T15:51:35Z | 2024-09-10T23:43:57Z | null | ueoo |
huggingface/transformers | 33,343 | How to install transformers==4.45, two or three days I can install successfully, but today cannot. | ### System Info
torch2.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip instal... | https://github.com/huggingface/transformers/issues/33343 | closed | [
"Installation",
"bug"
] | 2024-09-06T08:23:00Z | 2024-10-16T08:04:10Z | null | HyacinthJingjing |
huggingface/optimum-nvidia | 149 | How to use TensorRT model converter | Referring to [src/optimum/nvidia/export/converter.py] -> class 'TensorRTModelConverter' this could 'Take a local model and create the TRTLLM checkpoint and engine'
Questions:
- What are applicable local model format? e.g. JAX, HuggingFace, DeepSpeed
- How to use this script individually to generate TRTLLM checkpoint... | https://github.com/huggingface/optimum-nvidia/issues/149 | open | [] | 2024-09-05T18:55:15Z | 2024-09-05T18:55:15Z | null | FortunaZhang |
huggingface/datasets | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
... | https://github.com/huggingface/datasets/issues/7139 | open | [] | 2024-09-05T15:12:22Z | 2024-10-09T04:02:41Z | 2 | fscdc |
huggingface/datasets | 7,138 | Cache only changed columns? | ### Feature request
Cache only the actual changes to the dataset i.e. changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again.
#... | https://github.com/huggingface/datasets/issues/7138 | open | [
"enhancement"
] | 2024-09-05T12:56:47Z | 2024-09-20T13:27:20Z | 2 | Modexus |
huggingface/lerobot | 413 | Compatible off-the-shelf robots? | Huge thanks for making all of this available!
Can you recommend any (low-cost) off-the-shelf robots to work with? | https://github.com/huggingface/lerobot/issues/413 | closed | [
"question"
] | 2024-09-05T10:21:24Z | 2025-10-08T08:27:56Z | null | danielfriis |
huggingface/diffusers | 9,362 | IndexError: index 29 is out of bounds for dimension 0 with size 29 | ### Describe the bug
I have three problems because of the same reason.
1) TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'
# upon completion increase step index by one
self._step_index += 1 <---Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedu... | https://github.com/huggingface/diffusers/issues/9362 | open | [
"bug",
"stale"
] | 2024-09-04T11:02:49Z | 2024-11-25T15:04:22Z | 8 | Anvarka |
huggingface/tokenizers | 1,627 | Rust: How to handle models with `precompiled_charsmap = null` | Hi guys,
I'm currently working on https://github.com/supabase/edge-runtime/pull/368 that pretends to add a rust implementation of `pipeline()`.
While I was coding the `translation` task I figured out that I can't load the `Tokenizer` instance for [Xenova/opus-mt-en-fr](https://huggingface.co/Xenova/opus-mt-en-fr) ... | https://github.com/huggingface/tokenizers/issues/1627 | open | [
"Feature Request"
] | 2024-09-04T08:33:06Z | 2024-10-06T15:34:06Z | null | kallebysantos |
huggingface/optimum | 2,013 | Is it possible convert decoder_model_merged.onnx to tensorrt via trtexec command ? | At the first I convert whisper-tiny to onnx via optimum-cli
`optimum-cli export onnx --model openai/whisper-tiny --task automatic-speech-recognition-with-past whisper-tiny-onnx`
I got the some config, encoder and decoder_merged model
then I brought encoder and decoder_merged to convert to tensorrt via NGC versio... | https://github.com/huggingface/optimum/issues/2013 | closed | [] | 2024-09-03T17:52:40Z | 2024-09-15T10:16:34Z | 3 | ccyrene |
huggingface/lerobot | 407 | Multi-Image support for VQ-BeT | Hello, I wanted to ask if there is a possibility to have VQ-BeT running on multiple camera's for some environments that have different views, like Robomimic? If so can someone give me points on what exactly I need to change, I would be happy to submit a PR once I get it working on my side and finish the ICLR deadline! ... | https://github.com/huggingface/lerobot/issues/407 | closed | [
"question",
"policies"
] | 2024-09-03T17:00:23Z | 2025-10-08T08:27:39Z | null | bkpcoding |
huggingface/optimum | 2,009 | [Feature request] Add kwargs or additional options for torch.onnx.export | ### Feature request
In `optimum.exporters.onnx.convert import export_pytorch`, there could be an option to add additional kwargs to the function which could be passed to the torch.onnx.export function.
### Motivation
If such an option possible or will this ruin any of the other features, or is there a reason why the... | https://github.com/huggingface/optimum/issues/2009 | open | [
"onnx"
] | 2024-09-03T13:52:50Z | 2024-10-08T15:27:26Z | 0 | martinkorelic |
huggingface/speech-to-speech | 74 | How to integrate it with frontend | Hi, What steps should I follow to create a web app UI and integrate it?
Many thanks for considering my request. | https://github.com/huggingface/speech-to-speech/issues/74 | open | [] | 2024-09-03T12:18:52Z | 2024-09-03T13:52:08Z | null | shrinivasait |
huggingface/diffusers | 9,356 | pipeline_stable_diffusion_xl_adapter | ### Describe the bug
I want to rewrite the call function of the pipeline_stable_diffusion_xl_adapter. When I want to use the function prepare_ip_adapter_image_embeds, there is an error called "AttributeError: 'NoneType' object has no attribute 'image_projection_layers'". The error tells me that the attribution self.un... | https://github.com/huggingface/diffusers/issues/9356 | open | [
"bug",
"stale"
] | 2024-09-03T10:25:57Z | 2024-10-28T15:03:18Z | 6 | Yuhan291 |
huggingface/diffusers | 9,352 | Text generation? | Hi thanks for this great library!
There seems to be some diffusion models that generate text, instead of images. (For example, these two surveys: https://arxiv.org/abs/2303.06574, https://www.semanticscholar.org/paper/Diffusion-models-in-text-generation%3A-a-survey-Yi-Chen/41941f072db18972b610de9979e755afba35f11e). ... | https://github.com/huggingface/diffusers/issues/9352 | open | [
"wip"
] | 2024-09-03T06:54:38Z | 2024-11-23T04:57:37Z | 13 | fzyzcjy |
huggingface/speech-to-speech | 71 | How to run in ubuntu | I am trying to run it locally in my Ubuntu machine I have nvidia gpu and already setup CUDA.
```
python s2s_pipeline.py \
--recv_host 0.0.0.0 \
--send_host 0.0.0.0 \
--lm_model_name microsoft/Phi-3-mini-4k-instruct \
--init_chat_role system \
--stt_compile_mode reduce-overhead \
--tts_compile_mode defau... | https://github.com/huggingface/speech-to-speech/issues/71 | closed | [] | 2024-09-03T06:02:45Z | 2024-10-01T07:45:20Z | null | Basal-Analytics |
huggingface/optimum | 2,006 | Support for gemma2-2b-it(gemma 2nd version) Model Export in Optimum for OpenVINO | ### Feature request
please provide Support for gemma2 Model Export in Optimum for OpenVINO
version:optimum(1.21.4)
transformers:4.43.4
### Motivation
I encountered an issue while trying to export a gemma2 model using the optimum library for ONNX export. The error message suggests that the gemma2 model is either a... | https://github.com/huggingface/optimum/issues/2006 | open | [
"onnx"
] | 2024-09-03T05:54:51Z | 2025-01-22T15:40:04Z | 2 | chakka12345677 |
huggingface/transformers | 33,270 | Static KV cache status: How to use it? Does it work for all models? | I see that there are many PRs about [StaticCache](https://github.com/huggingface/transformers/pulls?q=is%3Apr+StaticCache), but I couldn't find a clear documentation on how to use it.
#### What I want
* To not have Transformers allocate memory dynamically for the KV cache when using `model.generate()`, as that le... | https://github.com/huggingface/transformers/issues/33270 | closed | [] | 2024-09-03T02:17:54Z | 2024-11-25T16:17:25Z | null | oobabooga |
huggingface/transformers.js | 917 | Where should I get `decoder_model_merged` file from? | ### Question
Hey,
I'm trying to use `whisper-web` demo with my finetuned model.
After I managed connecting my model to the demo application, I'm getting errors related to this:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/src/models.js#L771
Basically, when `transforme... | https://github.com/huggingface/transformers.js/issues/917 | closed | [
"question"
] | 2024-09-02T07:30:57Z | 2025-02-26T12:05:05Z | null | abuchnick-aiola |
huggingface/diffusers | 9,339 | SD3 inpatinting | I found the StableDiffusion3InpaintPipeline, where can i found the weight of SD3 inpainting | https://github.com/huggingface/diffusers/issues/9339 | closed | [
"stale"
] | 2024-09-02T05:00:19Z | 2024-10-02T15:43:24Z | 5 | ucasyjz |