| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 973 | I would like to help | ### Question
Hi, I would like to help with the project. Is there anything that needs to be done?
I've currently found an issue, probably in ONNXRuntime; I will look into it next week.
Here is an example of WebGPU Whisper that works on mobile platforms including iPhone and Android: https://github.com/FL33TW00D/whi... | https://github.com/huggingface/transformers.js/issues/973 | open | [
"question"
] | 2024-10-12T20:29:07Z | 2024-10-14T19:37:51Z | null | cyberluke |
huggingface/diffusers | 9,661 | from_pretrained: filename argument removed? | **What API design would you like to have changed or added to the library? Why?**
I do believe there was a `filename` argument in the past to load a specific checkpoint in a huggingface repository. It appears that this has been removed with no replacement.
**What use case would this enable or better enable? Can yo... | https://github.com/huggingface/diffusers/issues/9661 | closed | [
"stale"
] | 2024-10-12T20:02:31Z | 2024-11-13T00:37:52Z | 4 | oxysoft |
pytorch/torchchat | 1,297 | Can torchchat call/use the models already downloaded under Ollama? | ### 🚀 The feature, motivation and pitch
Can torchchat pick up the models that have already been downloaded by Ollama? Is there a way to use them without downloading them again with an HF user id?
`PS C:\Users\siva> ollama list
NAME ID SIZE
qwen2.5-coder:latest 87098ba739... | https://github.com/pytorch/torchchat/issues/1297 | closed | [] | 2024-10-12T16:35:12Z | 2024-10-15T15:22:03Z | 1 | sivaramn |
huggingface/transformers | 34,107 | How to specify customized force_token_ids in Whisper | ```
ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'> with values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f4230cfac50> has been passed to `.generate()`, but it has already been created with the values <trans... | https://github.com/huggingface/transformers/issues/34107 | closed | [
"Generation",
"Audio"
] | 2024-10-12T07:34:38Z | 2024-12-28T08:06:48Z | null | MonolithFoundation |
pytorch/torchtitan | 610 | [Compile] Understand why FSDP2 saves both SDPA out and wo in for bwd | With FSDP2 and transformer block compile, `torch.compile` saves both the SDPA output and the contiguous transposed tensor for backward:
https://github.com/pytorch/torchtitan/blob/7e93822e402c3f470bb7ddb925bbc43701bf8573/torchtitan/models/llama/model.py#L210-L213
However, with simpleFSDP with full model compile, `torc... | https://github.com/pytorch/torchtitan/issues/610 | open | [
"question",
"module: torch.compile"
] | 2024-10-11T15:29:04Z | 2025-12-10T18:30:41Z | null | awgu |
pytorch/ao | 1,057 | How to use float8 with SM89 hardware - i.e. NVIDIA A6000 ADA? | I am running torchao: 0.5 and torch: '2.5.0a0+b465a5843b.nv24.09' on an NVIDIA A6000 ADA card (sm89) which supports FP8.
I ran the generate.py code from the benchmark:
python generate.py --checkpoint_path $CHECKPOINT_PATH --compile --compile_prefill --write_result /root/benchmark_results__baseline.txt
> Av... | https://github.com/pytorch/ao/issues/1057 | closed | [
"question",
"float8"
] | 2024-10-11T14:40:38Z | 2025-01-24T18:24:46Z | null | vgoklani |
pytorch/pytorch | 137,779 | Flex attention with mask depending on queries and keys lengths (or how to implement `causal_lower_right` masking) | ### 🐛 Describe the bug
I tried to implement the `causal_lower_right` masking in flex attention. This requires the masking function to know the difference in lengths of keys and queries:
```python
QL = query.size(2)
KL = key.size(2)
def causal_mask(b, h, q_idx, kv_idx):
    return q_idx - QL >= kv_idx - KL
```... | https://github.com/pytorch/pytorch/issues/137779 | closed | [
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 2024-10-11T13:21:40Z | 2024-11-12T00:12:28Z | null | janchorowski |
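The masking rule quoted above can be sanity-checked without flex attention at all: it anchors the causal diagonal at the bottom-right corner, so the last query attends to the last key. A pure-Python sketch of the resulting mask matrix (`lower_right_causal` is a hypothetical helper; in flex attention the `causal_mask` function would instead be consumed via `create_block_mask`):

```python
def lower_right_causal(q_len: int, kv_len: int) -> list[list[bool]]:
    """Boolean attention mask with the causal diagonal anchored at the
    bottom-right corner: query q may attend key kv iff q - q_len >= kv - kv_len,
    i.e. the last query sees all keys even when kv_len > q_len."""
    return [[(q - q_len) >= (kv - kv_len) for kv in range(kv_len)]
            for q in range(q_len)]

# With 2 queries over 4 keys the diagonal shifts right by 2: the first query
# already sees the first 3 keys, and the last query sees all 4.
mask = lower_right_causal(2, 4)
```

When `q_len == kv_len` the shift vanishes and this reduces to ordinary causal masking, which is why the difference of lengths must be visible inside the mask function.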
huggingface/finetrainers | 25 | How to fix it? training/cogvideox_text_to_video_lora.py FAILED | ### System Info
CUDA 11.8
2x RTX 3090
Linux Ubuntu 22.04 LTS
PyTorch 2.4
### Information
- [X] The official example scripts
- [X] My own modified scripts and tasks
### Reproduction
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/dev_ml/cogvideox-fac... | https://github.com/huggingface/finetrainers/issues/25 | closed | [] | 2024-10-11T08:49:23Z | 2024-12-23T07:40:41Z | null | D-Mad |
huggingface/finetrainers | 22 | What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding? | About Dataset Preparation,
What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?
Example: 1280×720, 5 Mbps or below, H.264 encoder recommended.
Are there any suggestions here? | https://github.com/huggingface/finetrainers/issues/22 | closed | [] | 2024-10-11T05:12:57Z | 2024-10-14T07:20:36Z | null | Erwin11 |
huggingface/accelerate | 3,156 | how to load model with fp8 precision for inference? | ### System Info
```Shell
Is it possible to load the model using the accelerate library with fp8 inference?
I have H100 GPU access.
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported ... | https://github.com/huggingface/accelerate/issues/3156 | closed | [] | 2024-10-11T04:31:47Z | 2024-12-02T15:07:58Z | null | imrankh46 |
huggingface/diffusers | 9,643 | Flux does not support multiple Controlnets? | ### Describe the bug
I'm encountering an issue with the FluxControlNetPipeline. The `controlnet` parameter is supposed to accept a `List[FluxControlNetModel]`. However, when I attempt to execute my code, I run into the following error:
```
Traceback (most recent call last):
File "/opt/tiger/test_1/h.py", line... | https://github.com/huggingface/diffusers/issues/9643 | closed | [
"bug"
] | 2024-10-11T03:47:06Z | 2024-10-11T17:39:20Z | 1 | RimoChan |
huggingface/diffusers | 9,639 | How to use my own trained lora in local computer? | local_model_path = r"D:\downloads\FLUX.1-schnell"
pipe = FluxPipeline.from_pretrained(local_model_path, torch_dtype=torch.bfloat16)
# LoRA is not working this way
pipe.load_lora_weights("XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors")
pipe.load_lora_weights(r"D:\AI\stable-diffusion-webui-forg... | https://github.com/huggingface/diffusers/issues/9639 | closed | [] | 2024-10-10T23:19:47Z | 2024-11-10T08:49:08Z | null | derekcbr |
pytorch/benchmark | 2,499 | How is TorchBench applied to testing new versions of PyTorch? | Hello, may I ask what tasks will be used for end-to-end testing before the release of the new version of PyTorch?
Will the test focus on the consistency of metrics between the previous and subsequent versions, such as training-task loss, iteration speed, etc.? | https://github.com/pytorch/benchmark/issues/2499 | open | [] | 2024-10-10T16:40:53Z | 2024-10-16T20:28:47Z | null | HLH13297997663 |
huggingface/evaluation-guidebook | 14 | [TOPIC] How to design a good benchmark depending on your eval goals | Eval goals can be finding a good model for you vs ranking models vs choosing a good training config.
Request by Luca Soldaini
Cf https://x.com/soldni/status/1844409854712218042 | https://github.com/huggingface/evaluation-guidebook/issues/14 | closed | [] | 2024-10-10T16:20:40Z | 2025-09-18T08:31:15Z | null | clefourrier |
huggingface/diffusers | 9,633 | Confusion about accelerator.num_processes in get_scheduler | In the example code from [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image_sdxl.py#L974):
```python
num_warmup_steps = args.lr_warmup_steps * args.gradient_accumulation_steps
```
But in [train_text_to_image... | https://github.com/huggingface/diffusers/issues/9633 | closed | [
"stale"
] | 2024-10-10T08:39:12Z | 2024-11-09T15:37:33Z | 5 | hj13-mtlab |
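One reading of the discrepancy raised above: a scheduler prepared by accelerate advances once per process on every `.step()` call, and some scripts additionally call `.step()` on every micro-batch under gradient accumulation, so the `num_warmup_steps` argument must be pre-multiplied to land on the intended number of optimizer-step warmups. A hedged sketch of that bookkeeping (conventions differ between scripts; `scheduler_warmup_arg` is a hypothetical helper):

```python
def scheduler_warmup_arg(optimizer_warmup_steps: int,
                         num_processes: int = 1,
                         gradient_accumulation_steps: int = 1,
                         stepped_every_micro_batch: bool = False) -> int:
    """How many raw scheduler ticks correspond to the desired number of
    optimizer-step warmups. An accelerate-prepared scheduler ticks once per
    process per call; if the script also calls .step() on every micro-batch,
    gradient accumulation multiplies the tick count as well."""
    ticks = optimizer_warmup_steps * num_processes
    if stepped_every_micro_batch:
        ticks *= gradient_accumulation_steps
    return ticks

# 500 warmup optimizer steps, 4 GPUs, accumulation of 2, stepped per micro-batch.
warmup_arg = scheduler_warmup_arg(500, num_processes=4,
                                  gradient_accumulation_steps=2,
                                  stepped_every_micro_batch=True)
```

Whether a given script multiplies by `num_processes`, by `gradient_accumulation_steps`, or both therefore depends on where it calls `lr_scheduler.step()`, which is exactly the inconsistency the issue observes.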
huggingface/transformers.js | 968 | It's ready | ### Question
The project I've been working on for the past few months is now ready enough to reveal to the world. Transformers.js is an essential part of it, and I just want to say thank you for your amazing work.
https://www.papeg.ai
As you can see in the source code, there are lots of workers that implement ... | https://github.com/huggingface/transformers.js/issues/968 | closed | [
"question"
] | 2024-10-10T04:39:48Z | 2025-05-29T22:49:24Z | null | flatsiedatsie |
pytorch/torchtitan | 608 | why is xformers not used for attention computation? | Curious why xformers is not used. Is it for simplicity, or is there a performance reason? | https://github.com/pytorch/torchtitan/issues/608 | closed | [
"question"
] | 2024-10-09T23:21:23Z | 2024-11-22T00:15:17Z | null | jason718 |
pytorch/xla | 8,245 | Improve documentation for `get_memory_info` | ## 📚 Documentation
Improve documentation for `get_memory_info`. This feature is lightly defined in [PyTorchXLA documentation page](https://pytorch.org/xla/release/r2.4/index.html#torch_xla.core.xla_model.get_memory_info). Please provide an explanation on what details it pulls and potentially offer examples.
Addi... | https://github.com/pytorch/xla/issues/8245 | open | [
"enhancement",
"usability"
] | 2024-10-09T20:33:18Z | 2025-02-27T13:10:42Z | 0 | miladm |
pytorch/TensorRT | 3,224 | ❓ [Question] How to decide if an Op should support dynamic shape or not | ## ❓ Question
Since only some of the ops support dynamic shapes and others do not, what are the criteria for deciding whether an op supports dynamic shapes?
For some existing ops, which are not marked as `supports_dynamic_shapes=True`, can I write a converter that wraps the existing converter,... | https://github.com/pytorch/TensorRT/issues/3224 | open | [
"question"
] | 2024-10-09T16:46:56Z | 2024-10-30T23:52:26Z | null | sean-xiang-applovin |
huggingface/datasets | 7,211 | Describe only selected fields in README | ### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some f... | https://github.com/huggingface/datasets/issues/7211 | open | [
"enhancement"
] | 2024-10-09T16:25:47Z | 2024-10-09T16:25:47Z | 0 | alozowski |
pytorch/xla | 8,240 | XLA2 does not work with jax 0.4.34 (but did work on jax 0.4.33) | ## 🐛 Bug
A toy example of MNIST using XLA2 does not work on the latest version of jax (0.4.34) on Trillium machine of 64 cores (V6e-64) but downgrading to 0.4.33 fixes the issue
## To Reproduce
1. Download the toy training example from [here](https://gist.githubusercontent.com/Chaosruler972/2461fe9d5a7a558f... | https://github.com/pytorch/xla/issues/8240 | closed | [
"bug",
"torchxla2"
] | 2024-10-09T14:35:32Z | 2025-03-04T18:22:21Z | 3 | zmelumian972 |
huggingface/transformers.js | 965 | Error: cannot release session. invalid session id | ### Question
I'm trying to get ASR + segmentation to run on a mobile phone (Pixel 6A, 6GB ram). This time on Brave mobile ;-)
ASR alone works fine. But I have a question about also getting the speaker recognition to run (segmentation+verification).
In the example implementation a `promiseAll` is used to run bo... | https://github.com/huggingface/transformers.js/issues/965 | open | [
"question"
] | 2024-10-09T13:57:48Z | 2024-10-09T15:51:02Z | null | flatsiedatsie |
huggingface/chat-ui | 1,509 | (BUG) OAuth login splash is BROKEN/does NOT work | On newer versions of chat-ui the login splash screen does not work. Say, for instance, you have OAuth set up and are not logged in. You should get a popup prompting you to log in and not see the interface. This used to work without a problem. I just realized this is no longer working on the newer versions. I have oauth s... | https://github.com/huggingface/chat-ui/issues/1509 | closed | [
"bug"
] | 2024-10-08T18:06:01Z | 2024-11-27T15:02:46Z | 2 | bpawnzZ |
huggingface/trl | 2,196 | How to exit training when the loss is less than a specified value in SFTTrainer? | I asked ChatGPT this question first; it gave the answer below:
```
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported
# Define customized Trainer class
class CustomSFTTrainer(SFTTrainer):
    def __init__(self, *args, min_loss_threshold=0.001, **k... | https://github.com/huggingface/trl/issues/2196 | closed | [
"❓ question",
"🏋 SFT"
] | 2024-10-08T03:13:27Z | 2024-10-08T10:39:51Z | null | fishfree |
huggingface/safetensors | 532 | Documentation about multipart safetensors | ### Feature request
Add examples to documentation about handling with multipart safetensors files (`*-00001.safetensors`, `*-00002.safetensors`, etc). How to load/save them?
### Motivation
This is widespread format but README and Docs don't contain enough information about it.
### Your contribution
Can't help by m... | https://github.com/huggingface/safetensors/issues/532 | closed | [] | 2024-10-07T20:14:48Z | 2025-01-03T17:36:31Z | 6 | attashe |
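On the loading side of the request above, shards on the Hub are commonly named like `model-00001-of-00003.safetensors`, and each shard holds a disjoint slice of the tensors, so loading reduces to "load each shard, update one dict". A sketch of the filename handling (pure Python; `order_shards` and `SHARD_RE` are hypothetical helpers, the exact naming pattern varies, and actually reading a shard would use `safetensors.torch.load_file`):

```python
import re

SHARD_RE = re.compile(r"^(?P<stem>.+)-(?P<idx>\d{5})-of-(?P<total>\d{5})\.safetensors$")

def order_shards(filenames: list[str]) -> list[str]:
    """Validate a multipart safetensors shard set and sort it by shard index."""
    parsed = []
    for name in filenames:
        m = SHARD_RE.match(name)
        if not m:
            raise ValueError(f"not a shard filename: {name}")
        parsed.append((int(m["idx"]), int(m["total"]), name))
    totals = {t for _, t, _ in parsed}
    if len(totals) != 1 or len(parsed) != totals.pop():
        raise ValueError("incomplete or inconsistent shard set")
    return [name for _, _, name in sorted(parsed)]

# state_dict = {}
# for shard in order_shards(files):
#     state_dict.update(safetensors.torch.load_file(shard))
```

Saving is the mirror image: partition the state dict, write each partition with `save_file`, and (for HF-style checkpoints) record the tensor-to-shard mapping in an index JSON.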
pytorch/audio | 3,838 | How to train a real-time av-asr pretrain model | ### 🚀 The feature
There is an example of HuBERT training [here](https://github.com/pytorch/audio/tree/main/examples/self_supervised_learning), but there is no example of real-time AV-ASR for other languages.
### Motivation, pitch
I'm working on lipreading without a pretrained model, to continue training the pretrained mo... | https://github.com/pytorch/audio/issues/3838 | open | [] | 2024-10-07T12:23:32Z | 2024-10-07T12:23:32Z | null | Zhaninh |
huggingface/diffusers | 9,599 | Why is there no LoRA-only finetune example for FLUX.1? | **Is your feature request related to a problem? Please describe.**
The only example of LoRA finetune for FLUX.1 I discovered is here:
https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py
which is a DreamBooth example. DreamBooth is VRAM-intensive and not useful for... | https://github.com/huggingface/diffusers/issues/9599 | closed | [] | 2024-10-07T06:22:54Z | 2024-10-09T12:48:32Z | 3 | eeyrw |
huggingface/chat-ui | 1,506 | Add support for local models | ## Describe your feature request
I was looking for an open-source alternative to PocketPal, which allows to converse with local models on iOS and Android https://apps.apple.com/us/app/pocketpal-ai/id6502579498 and I was wondering if HuggingChat could be this alternative? The idea is to have an e2e open-source soluti... | https://github.com/huggingface/chat-ui/issues/1506 | closed | [
"enhancement"
] | 2024-10-06T20:18:24Z | 2024-10-07T13:45:45Z | 3 | arnaudbreton |
pytorch/torchchat | 1,278 | AOTI Export ignores user --device flag - expected behavior? | ### 🐛 Describe the bug
Hi all,
I ran into some confusion when trying to export llama3 on my system. I have a small graphics card (8GB VRAM on an AMD GPU) but a decent amount of RAM (24GB). Obviously, the model won't fit on my GPU un-quantized but it should fit into my RAM + swap.
I tried running:
```
python3... | https://github.com/pytorch/torchchat/issues/1278 | closed | [
"bug",
"good first issue",
"actionable"
] | 2024-10-06T19:06:51Z | 2024-11-16T01:15:38Z | 5 | vmpuri |
pytorch/torchchat | 1,277 | Android demo app poor model performance | ### 🐛 Describe the bug
I wanted to try the new Llama 3.2 1B parameter model on mobile. I downloaded the model and generated the `pte` like so:
```
python torchchat.py download llama3.2-1b
python torchchat.py export llama3.2-1b --quantize torchchat/quant_config/mobile.json --output-pte-path llama3_2-1b.pte
```
... | https://github.com/pytorch/torchchat/issues/1277 | closed | [
"actionable",
"Mobile - Android",
"ExecuTorch"
] | 2024-10-06T15:10:55Z | 2024-10-25T08:19:10Z | 11 | fran-aubry |
pytorch/xla | 8,223 | how to use torch.float16 in diffusers pipeline with pytorch xla | ## ❓ Questions and Help
```
import diffusers, torch, os
import torch_xla.core.xla_model as xm
pipeline = diffusers.DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", safety_checker=None, use_safetensors=True, torch_dtype=torch.float16)
# Move the model to the first TPU core
pipeline = pipeline.... | https://github.com/pytorch/xla/issues/8223 | open | [
"bug"
] | 2024-10-06T00:02:41Z | 2025-02-27T13:17:50Z | null | ghost |
huggingface/tokenizers | 1,644 | How to build a custom tokenizer on top of an existing Llama 3.2 tokenizer? | Hi,
I was trying to create a custom tokenizer for a different language which is not included in the Llama 3.2 tokenizer.
I could not find exactly which tokenizer from HF is an exact alternative to Llama's tokenizer [link](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py), so that I will be ... | https://github.com/huggingface/tokenizers/issues/1644 | closed | [
"training"
] | 2024-10-05T13:18:55Z | 2025-02-26T12:06:15Z | null | yakhyo |
pytorch/xla | 8,222 | unsupported operand type(s) for %: 'int' and 'NoneType' | ## ❓ Questions and Help
I followed https://github.com/pytorch/xla/blob/master/contrib/kaggle/pytorch-xla-2-0-on-kaggle.ipynb,
but the line `image = pipeline(prompt, callback=lambda *args: xm.mark_step(), generator=generator).images[0]`
gives
```
TypeError Traceback (most recent ca... | https://github.com/pytorch/xla/issues/8222 | closed | [
"question",
"xla:tpu"
] | 2024-10-05T12:11:52Z | 2025-02-27T13:20:08Z | null | ghost |
pytorch/xla | 8,216 | Random OOM and crashes | ## ❓ Questions and Help
I've found that I'm unable to train more than ~20-80K steps without a crash and it's difficult to figure out how to debug this. In a typical PyTorch training run, I would get a clear OOM message at a particular line, or any other error and this would be printed to log/console.
However, abo... | https://github.com/pytorch/xla/issues/8216 | closed | [
"question",
"distributed",
"xla:tpu"
] | 2024-10-04T18:51:52Z | 2025-02-27T13:21:33Z | null | alexanderswerdlow |
pytorch/xla | 8,215 | How to use all TPU cores in PyTorch XLA | ## ❓ Questions and Help
I followed the code in https://github.com/pytorch/xla/blob/master/contrib/kaggle/distributed-pytorch-xla-basics-with-pjrt.ipynb,
but used `xmp.spawn(print_device, args=(lock,), nprocs=8, start_method='fork')`.
The source code:
```
import os
os.environ.pop('TPU_PROCESS_ADDRESSES')
import tor... | https://github.com/pytorch/xla/issues/8215 | closed | [
"question",
"distributed",
"xla:tpu"
] | 2024-10-04T02:54:18Z | 2025-02-27T13:22:25Z | null | ghost |
pytorch/torchchat | 1,262 | Support Granite Code 3B/8B | ### 🚀 The feature, motivation and pitch
The `torchchat` framework provides an excellent platform for embedding models into many different edge-centric platforms.
The [Granite Code models](https://huggingface.co/collections/ibm-granite/granite-code-models-6624c5cec322e4c148c8b330), specifically the [3B-128k](https:... | https://github.com/pytorch/torchchat/issues/1262 | closed | [] | 2024-10-03T16:18:08Z | 2024-12-19T10:13:55Z | 0 | gabe-l-hart |
huggingface/datasets | 7,196 | concatenate_datasets does not preserve shuffling state | ### Describe the bug
After calling concatenate_datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156.
This means concatenation can't be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623.
I also noticed th... | https://github.com/huggingface/datasets/issues/7196 | open | [] | 2024-10-03T14:30:38Z | 2025-03-18T10:56:47Z | 1 | alex-hh |
huggingface/diffusers | 9,575 | diffusers version update to 0.27.0 from 0.20.0, training code seems not work | I have trained an inpainting model using diffusers 0.20.0. The trained model works as expected. However, something seems wrong when I update the diffusers version to 0.27.0, while keeping the training code and other requirements the same. The training code runs successfully, but the inference outputs look like noise. I... | https://github.com/huggingface/diffusers/issues/9575 | closed | [] | 2024-10-03T14:30:21Z | 2024-10-15T08:58:36Z | 4 | huangjun12 |
pytorch/serve | 3,339 | Clarification on minWorkers and maxWorkers parameters | ### 📚 The doc issue
I have some questions related to model parameters:
1. I know there is no autoscaling in TorchServe, and looking at the code, models will scale to `minWorkers` workers on startup. `maxWorkers` seems to be used only when downscaling a model, meaning if `currentWorkers > maxWorkers`, it will kill... | https://github.com/pytorch/serve/issues/3339 | open | [] | 2024-10-03T13:07:00Z | 2024-10-03T13:07:00Z | 0 | krzwaraksa |
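The reading of the parameters described in point 1 can be stated as a small decision rule, sketched below under the issue's own assumptions (scale to `minWorkers` at startup; `maxWorkers` only caps and never triggers scale-up). This is an illustration of that reading, not TorchServe's actual code:

```python
def worker_delta(current: int, min_workers: int, max_workers: int) -> int:
    """Positive: spawn that many workers; negative: kill that many; zero: no change."""
    if max_workers < min_workers:
        raise ValueError("maxWorkers must be >= minWorkers")
    if current < min_workers:
        return min_workers - current   # startup / recovery: scale up to the floor
    if current > max_workers:
        return max_workers - current   # over the cap: scale down
    return 0                           # within [min, max]: no autoscaling happens
```

Under this rule the only way to sit between `minWorkers` and `maxWorkers` is an explicit management-API scale request, which matches the "no autoscaling" statement in the question.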
huggingface/transformers | 33,909 | How to implement weight decay towards the pre-trained model? | Hello, let me one question.
If using HF Trainer for supervised fune-tuning, how do I implement penalizing the distance between starting and current weights? This was shown to be effective in https://arxiv.org/abs/1706.03610 | https://github.com/huggingface/transformers/issues/33909 | open | [
"Usage",
"Feature request"
] | 2024-10-03T11:18:53Z | 2024-10-22T13:16:26Z | null | sedol1339 |
pytorch/serve | 3,338 | throughput increases non-linearly with number of workers | ### 🐛 Describe the bug
I am hosting a bert-like model using below torchserve config.
```
inference_address=http://localhost:8080
management_address=http://localhost:8081
metrics_address=http://localhost:8082
load_models=model_name=weights.mar
async_logging=true
job_queue_size=200
models={ "model_name": { "... | https://github.com/pytorch/serve/issues/3338 | open | [] | 2024-10-03T07:32:22Z | 2024-10-08T10:33:28Z | 2 | vandesa003 |
pytorch/ao | 1,002 | How to calibrate a w8a8 quantized model? | I used the following code to quantize an LLM, employing a w8a8 quantization setting:
```python
model = AutoModelForCausalLM.from_pretrained("./Qwen1.5-0.5B-Chat").to(dtype=torch.bfloat16, device='cpu')
quantize_(model, int8_dynamic_activation_int8_weight())
```
Everything is running smoothly, but the model's ... | https://github.com/pytorch/ao/issues/1002 | closed | [] | 2024-10-03T03:55:31Z | 2024-10-04T01:26:58Z | null | chenghuaWang |
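Part of the answer to the calibration question is in the name: with *dynamic* activation quantization (`int8_dynamic_activation_int8_weight`), activation scales are computed from each incoming tensor at runtime, so there is no calibration pass to run; only the weights are quantized ahead of time. A pure-Python sketch of symmetric per-tensor int8 quantization to make that concrete (illustrative only, not torchao's implementation):

```python
def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor int8 quantization; the scale comes from the
    runtime max, which is why no calibration dataset is needed."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [x * scale for x in q]

# "Dynamic" means this runs per forward pass on the activations themselves.
acts = [0.5, -1.0, 0.25]
q, scale = quantize_int8(acts)
recon = dequantize(q, scale)
```

Calibration only becomes necessary for *static* activation quantization, where scales must be fixed in advance from representative data.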
huggingface/datasets | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | ### Feature request
Huggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, all these guides assume the audio array data to be decoded/inserted into a HF dataset always originates from individual files. The [Audio-dataclass](... | https://github.com/huggingface/datasets/issues/7189 | open | [
"enhancement"
] | 2024-10-02T16:38:38Z | 2024-10-02T17:01:40Z | 0 | Lauler |
huggingface/transformers.js | 958 | Zombies in memory - something is blocking (re)loading of Whisper after a page is closed and re-opened | ### Question
I've been trying to debug this issue all afternoon, but haven't gotten any further. The code runs on desktop, but not on Android Chrome.
This is with V3 Alpha 19.
<img width="571" alt="Screenshot 2024-10-02 at 16 06 16" src="https://github.com/user-attachments/assets/c5fbb2cb-0cdf-431a-8099-021d19a1... | https://github.com/huggingface/transformers.js/issues/958 | closed | [
"question"
] | 2024-10-02T14:10:27Z | 2024-10-18T12:47:17Z | null | flatsiedatsie |
pytorch/vision | 8,669 | performance degradation in to_pil_image after v0.17 | ### 🐛 Describe the bug
`torchvision.transforms.functional.to_pil_image` is much slower when converting torch.float16 image tensors to PIL Images based on my benchmarks (serializing 360 images):
Dependencies:
```
Python 3.11
Pillow 10.4.0
```
Before (torch 2.0.1, torchvision v0.15.2, [Code here](https://git... | https://github.com/pytorch/vision/issues/8669 | open | [] | 2024-10-02T08:25:01Z | 2024-10-25T13:06:15Z | 5 | seymurkafkas |
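A common workaround for slowdowns like the one reported above is to do the float-to-uint8 conversion yourself before calling `to_pil_image`, since the expensive part is the per-element scale/round/clamp on float16 (in torch that pre-conversion would be something like `img.float().mul(255).round().clamp(0, 255).to(torch.uint8)`; hedged as a workaround, not the library's internal path). The conversion math itself, sketched in pure Python:

```python
def float_to_uint8(pixels: list[float]) -> list[int]:
    """Map [0, 1] float pixel values to [0, 255] with rounding and clamping,
    the 8-bit range to_pil_image expects for 'L'/'RGB' images."""
    return [min(255, max(0, round(p * 255))) for p in pixels]
```

Handing `to_pil_image` a uint8 tensor skips the float path entirely, which is where the float16 regression would bite.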
huggingface/diffusers | 9,567 | [community] Improving docstrings and type hints | There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civiliza... | https://github.com/huggingface/diffusers/issues/9567 | closed | [
"documentation",
"good first issue",
"contributions-welcome"
] | 2024-10-02T03:20:44Z | 2025-11-13T22:45:59Z | 16 | a-r-r-o-w |
huggingface/datasets | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9` which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for multiprocess so something like `multi... | https://github.com/huggingface/datasets/issues/7186 | closed | [] | 2024-10-01T22:29:32Z | 2024-10-02T06:08:24Z | 0 | shubhbapna |
pytorch/torchchat | 1,249 | Support Huggingface models from safetensors | ### 🚀 The feature, motivation and pitch
There are many models on Huggingface that are published as `safetensors` rather than `model.pth` checkpoints. The request here is to support converting and loading those checkpoints into a format that is usable with `torchchat`.
There are several places where this limitati... | https://github.com/pytorch/torchchat/issues/1249 | closed | [] | 2024-10-01T22:07:59Z | 2024-10-04T19:18:22Z | 2 | gabe-l-hart |
pytorch/torchtitan | 594 | Support Gemma2 in torchtitan | Are there any plans to support Gemma2 in torchtitan? I tried to use torchtitan to finetune a Gemma2 model, but got stuck on the following problem: how to parallelize the tied layers in the Gemma2 model? Maybe somebody knows the solution to this problem 😄 | https://github.com/pytorch/torchtitan/issues/594 | closed | [
"bug",
"question"
] | 2024-10-01T11:50:15Z | 2025-03-20T18:32:31Z | null | pansershrek |
huggingface/chat-ui | 1,499 | Error 500 "RPError" | OpenID Connect + SafeNet Trusted Access (STA) | Hello,
I would like to deploy OpenID Connect with SafeNet Trusted Access (STA).
From this 3-minute video, I've done all the steps, except for OAuth.tools, which I don't use:
https://www.youtube.com/watch?v=hSWXFSadpQQ
Here's my bash script that deploys the containers | ```deploy.sh``` :
```bash
#!/bin/bas... | https://github.com/huggingface/chat-ui/issues/1499 | open | [
"support"
] | 2024-09-30T12:54:16Z | 2024-09-30T12:57:51Z | 0 | avirgos |
huggingface/diffusers | 9,560 | FP32 training for sd3 controlnet | Hi,
I have been using `examples\controlnet\train_controlnet_sd3.py` for ControlNet training for a while, and I have some confusion I would like your advice on.
1. In the line 1097:
`vae.to(accelerator.device, dtype=torch.float32)`
It seems we should use fp32 for VAE, but as far as I know, SD3 currently has no fp32 ch... | https://github.com/huggingface/diffusers/issues/9560 | closed | [
"stale"
] | 2024-09-30T08:07:04Z | 2024-10-31T15:13:19Z | 11 | xduzhangjiayu |
huggingface/huggingface_hub | 2,578 | What is the highest Python version currently supported? | ### Describe the bug
I used Hugging Face Spaces to build my application with Gradio and zerogpuspace; the link is: https://huggingface.co/spaces/tanbw/CosyVoice
In the readme.md, I specified the Python version as 3.8.9, but the version of Python that the application prints out is still 3.1.... | https://github.com/huggingface/huggingface_hub/issues/2578 | closed | [
"bug"
] | 2024-09-29T14:37:38Z | 2024-09-30T07:05:29Z | null | tanbw |
huggingface/diffusers | 9,555 | [Flux Controlnet] Add control_guidance_start and control_guidance_end | It'd be nice to have `control_guidance_start` and `control_guidance_end` parameters added to the Flux ControlNet and ControlNet Inpainting pipelines.
I'm currently making experiments with Flux Controlnet Inpainting but the results are poor even with a `controlnet_conditioning_scale` set to 0.6.
I have to set `cont... | https://github.com/huggingface/diffusers/issues/9555 | closed | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-09-29T12:37:39Z | 2024-10-10T12:29:03Z | 8 | simbrams |
huggingface/hub-docs | 1,435 | How to check if a space is duplicated from another one using HF API? | I cannot find any related specifications in the documentation... Thanks! | https://github.com/huggingface/hub-docs/issues/1435 | open | [] | 2024-09-28T23:52:08Z | 2025-01-16T17:08:34Z | null | zhimin-z |
huggingface/diffusers | 9,551 | How to use x-labs flux controlnet models in diffusers? | ### Model/Pipeline/Scheduler description
The following ControlNets are supported in ComfyUI, but I was wondering how we can use these in diffusers as well. AFAIK, there is no from_single_file method for FluxControlNet to load the safetensors?
### Open source status
- [x] The model implementation ... | https://github.com/huggingface/diffusers/issues/9551 | closed | [] | 2024-09-28T20:01:15Z | 2024-09-29T06:59:46Z | null | neuron-party |
huggingface/text-generation-inference | 2,583 | How to turn on the KV cache when serve a model? | ### System Info
TGI 2.3.0
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
The TTFT is much slower than vLLM's. Can it be improved? If so, how do I turn on the KV cache when launching a model?
```
model=HuggingFaceH4/zeph... | https://github.com/huggingface/text-generation-inference/issues/2583 | open | [] | 2024-09-28T19:32:15Z | 2024-10-25T12:47:02Z | null | hahmad2008 |
pytorch/torchchat | 1,222 | Clear model download documents | ### 🐛 Describe the bug
From the README, it's not very clear how to download different flavors/sizes of the models from HF, unless someone goes to the next section and finds the inventory list https://github.com/pytorch/torchchat#download-weights
It might be helpful to add the inventory list command higher up, before the d... | https://github.com/pytorch/torchchat/issues/1222 | closed | [
"documentation",
"actionable"
] | 2024-09-27T22:16:38Z | 2024-09-30T16:02:55Z | 4 | HamidShojanazeri |
pytorch/xla | 8,088 | Is this content still relevant? | ## 📚 Documentation
xla/docs/README contains the following text. Is this text still relevant? The link to CircleCI is broken and I'm not sure if this information is useful:
-------------------------------
## Publish documentation for a new release.
CI job `pytorch_xla_linux_debian11_and_push_doc` is specified t... | https://github.com/pytorch/xla/issues/8088 | closed | [
"question",
"documentation"
] | 2024-09-27T22:02:37Z | 2025-03-06T13:05:38Z | null | mikegre-google |
pytorch/TensorRT | 3,192 | ❓ [Question] When should I use Torch-TensorRT instead of TensorRT ? | I generally use NVIDIA's TensorRT as the inference framework. I want to know the advantages and disadvantages of Torch-TensorRT compared to TensorRT, so that I can decide when to use Torch-TensorRT. I guess Torch-TensorRT might be simpler and more user-friendly. Also, have you tested and compared their inference speed ... | https://github.com/pytorch/TensorRT/issues/3192 | closed | [
"question"
] | 2024-09-27T15:51:32Z | 2024-10-02T16:22:54Z | null | EmmaThompson123 |
huggingface/transformers.js | 948 | Getting Local models/wasm working with Create React App | ### Question
I realize there's been a lot of talk about this in other issues, but I'm trying to figure out whether local-only model and wasm files will work with Create React App. I'm using `WhisperForConditionalGeneration` from `@huggingface/transformers` version `3.0.0-alpha.9`.
My setup:
```
env.allowRemoteMod... | https://github.com/huggingface/transformers.js/issues/948 | closed | [
"question"
] | 2024-09-26T20:42:33Z | 2024-09-26T21:26:30Z | null | stinoga |
huggingface/blog | 2,369 | How to finetune jina-embeddings-v3 by lora? | https://github.com/huggingface/blog/issues/2369 | open | [] | 2024-09-26T07:25:16Z | 2024-09-26T07:25:16Z | null | LIUKAI0815 | |
pytorch/vision | 8,661 | references/segmentation/coco_utils might require merging rles? | https://github.com/pytorch/vision/blob/6d7851bd5e2bedc294e40e90532f0e375fcfee04/references/segmentation/coco_utils.py#L27-L41 Above seems to assume that objects are not occluded, not merging rles from `frPyObjects`. In that case, I think it must be changed to
```python
rles = coco_mask.frPyObjects(polygons, height, ... | https://github.com/pytorch/vision/issues/8661 | open | [] | 2024-09-26T02:53:47Z | 2024-10-11T13:36:25Z | 1 | davidgill97 |
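As a sketch of the semantics behind the suggestion above: `pycocotools`' `mask.merge(rles)` combines the per-polygon run-length encodings into one RLE, which is equivalent to an elementwise OR of the decoded binary masks. A minimal stdlib-only illustration (`merge_masks` is a hypothetical helper, not the pycocotools API):

```python
def merge_masks(masks):
    """Elementwise OR of binary masks, mimicking the effect of
    pycocotools' mask.merge on decoded RLEs (illustration only)."""
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for m in masks:
        for i in range(h):
            for j in range(w):
                out[i][j] |= m[i][j]
    return out

# Two single-pixel masks for the same object merge into one mask:
merged = merge_masks([[[1, 0], [0, 0]], [[0, 0], [0, 1]]])
print(merged)  # [[1, 0], [0, 1]]
```

Without the merge, each polygon of an occluded object decodes to its own mask, which is presumably the behavior the issue is flagging.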
huggingface/text-generation-inference | 2,569 | Question: What is preferred way to cite TGI/repo? Didnt see a citation file. | https://github.com/huggingface/text-generation-inference/issues/2569 | open | [] | 2024-09-26T02:07:42Z | 2024-09-26T02:07:42Z | null | mkultraWasHere | |
huggingface/lerobot | 454 | Venv isn't needed in docker | I noticed in your docker files you are using a virtual environment. Docker is already a virtual environment at the system level. Is there a reason for using a python virtual environment as well? Typically, this is redundant/unnecessary and you'd only use venv or similar on your local machine.
If there isn't a good r... | https://github.com/huggingface/lerobot/issues/454 | closed | [
"enhancement",
"question",
"stale"
] | 2024-09-25T16:33:17Z | 2025-10-23T02:29:11Z | null | MichaelrMentele |
pytorch/xla | 8,071 | Optimizer Memory in AdamW/Adam vs SGD | ## ❓ Questions and Help
It is my understanding that Adam should use more memory than SGD because it keeps track of more parameters. However, when I look at my profiles for the Adam and SGD optimizers, I see that they use roughly the same amount of memory.
Does torch XLA somehow do optimizations on the optimi... | https://github.com/pytorch/xla/issues/8071 | closed | [] | 2024-09-25T16:01:53Z | 2024-11-16T20:30:20Z | 1 | dangthatsright |
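For context, a back-of-envelope sketch of why Adam/AdamW is expected to hold more optimizer-state memory than plain SGD (Adam keeps two extra tensors per parameter, `exp_avg` and `exp_avg_sq`). If a profile shows no difference, one possible explanation is that it was captured before the first `optimizer.step()`, since these states are typically allocated lazily. The numbers below are an assumption-laden estimate, not a measurement:

```python
def optimizer_state_bytes(n_params: int, optimizer: str, bytes_per_elem: int = 4) -> int:
    """Extra per-parameter state tensors held beyond weights and gradients."""
    extra_tensors = {"sgd": 0, "sgd_momentum": 1, "adam": 2, "adamw": 2}
    return extra_tensors[optimizer] * n_params * bytes_per_elem

n = 1_000_000_000  # a hypothetical 1B-parameter model with fp32 optimizer state
print(optimizer_state_bytes(n, "adamw") / 2**30)  # roughly 7.45 GiB vs 0 for plain SGD
```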
pytorch/audio | 3,835 | Not building CUDA 12.6 | ### 🐛 Describe the bug
It's not building with the latest version of CUDA (12.6.1) on Jetson AGX Orin
```bash
#!/usr/bin/env bash
set -ex
echo "Building torchaudio ${TORCHAUDIO_VERSION}"
apt-get update
apt-get install -y --no-install-recommends \
git \
pkg-config \
libffi-dev \
libsndfile1
rm -rf /... | https://github.com/pytorch/audio/issues/3835 | closed | [] | 2024-09-25T10:10:21Z | 2025-01-08T12:54:20Z | 2 | johnnynunez |
huggingface/diffusers | 9,528 | load_ip_adapter for distilled sd models | Is it possible to load IP-Adapter for distilled SD v1 or v2 based models such as nota-ai/bk-sdm-tiny or nota-ai/bk-sdm-v2-tiny?
When I tried to load ip adapter using bk-sdm-tiny
```python
pipe.load_ip_adapter(
"h94/IP-Adapter",
subfolder="models",
weight_name="ip-adapter-plus_sd15.bin",
low_c... | https://github.com/huggingface/diffusers/issues/9528 | closed | [
"stale"
] | 2024-09-25T04:31:00Z | 2025-01-12T06:01:40Z | 7 | kmpartner |
pytorch/examples | 1,289 | Does torchrun + FSDP create multiple copies of the same dataset and model? | In the [example T5 training code](https://github.com/pytorch/examples/blob/cdef4d43fb1a2c6c4349daa5080e4e8731c34569/distributed/FSDP/T5_training.py#L77C24-L77C35), the main function creates a copy of the model and dataset regardless of the worker rank before passing it to FSDP. Does this mean that there are n copies of... | https://github.com/pytorch/examples/issues/1289 | open | [] | 2024-09-25T03:59:24Z | 2024-09-25T04:25:55Z | 1 | tsengalb99 |
huggingface/chat-ui | 1,486 | Getting 403 on chat ui config for aws sagemaker endpoint |
Hi All,
I'm looking into configuring Chat UI with an AWS SageMaker endpoint and getting the following error:

```
DOTENV_LOCAL was found in the ENV variables. Creating .env.local file.
{"level":30,"time":1727231014113,"pid":23,"ho... | https://github.com/huggingface/chat-ui/issues/1486 | open | [
"support"
] | 2024-09-25T02:41:08Z | 2024-09-25T02:41:08Z | 0 | nauts |
huggingface/chat-macOS | 7 | Asking "what time is it?" will always return the local time of Paris, regardless of your location (⌘R+) | <img width="487" alt="Screenshot 2024-09-24 at 11 54 17 AM" src="https://github.com/user-attachments/assets/02d26c05-ae37-4caf-a3ff-5bc6aec42068">
I wonder how can we localize questions like this. I've tried ⌘R+ which always gives me the local time of Paris. Qwen2.5-72B and Llama 3.1 make up another non-specific tim... | https://github.com/huggingface/chat-macOS/issues/7 | open | [
"good first issue"
] | 2024-09-24T23:09:31Z | 2024-10-23T20:08:57Z | null | Reza2kn |
huggingface/diffusers | 9,520 | UNetMotionModel.dtype is really expensive to call, is it possible to cache it during inference? | **What API design would you like to have changed or added to the library? Why?**
we are using class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin)
and its forward() implementation is calling self.dtype, which is very expensive... | https://github.com/huggingface/diffusers/issues/9520 |
pytorch/xla | 8,057 | ... | ..., and then the core dump.
https://github.com/pytorch/xla/blob/master/torch_xla/csrc/runtime/pjrt_computation_client.cc#L806
I wrote a test function earlier that tried to transform all arguments... | https://github.com/pytorch/xla/issues/8057 | open | [
"question",
"distributed"
] | 2024-09-24T10:35:31Z | 2025-03-31T21:30:22Z | null | mars1248 |
pytorch/audio | 3,834 | Ability to build manylinux2014 compliant wheels for other archs (ppc64le) | ### 🚀 The feature
I'd like the ability to create manylinux2014-compliant wheels for ppc64le. Is there documentation for this?
### Motivation, pitch
PowerPC has in-core accelerator engines (MMA, Matrix-mulitply assist) which focused on AI inferencing and packages such as torch/audio/vision are preferre... | https://github.com/pytorch/audio/issues/3834 | open | [] | 2024-09-23T21:59:39Z | 2024-09-23T21:59:39Z | 0 | mgiessing |
huggingface/diffusers | 9,508 | AnimateDiff SparseCtrl RGB does not work as expected | Relevant comments are [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255416318) and [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255478105).
AnimateDiff SparseCtrl RGB does not work similar to other implementations and cannot replicate their outputs. This makes me ... | https://github.com/huggingface/diffusers/issues/9508 | open | [
"bug",
"help wanted",
"stale",
"contributions-welcome",
"advanced"
] | 2024-09-23T21:42:54Z | 2025-08-10T16:47:50Z | 9 | a-r-r-o-w |
pytorch/xla | 8,049 | How to run XLA with CPU offloaded models | ## ❓ Questions and Help
How do you run models that are offloaded to the CPU? When working with ```enable_sequential_cpu_offload``` or ```enable_model_cpu_offload``` and running ```torch_xla.sync()/xm.mark_step()```, the graph does not seem to account for the offloading, and in turn takes much more memory than when only r...
"enhancement",
"performance"
] | 2024-09-23T10:59:06Z | 2025-03-31T15:42:09Z | null | radna0 |
huggingface/lerobot | 451 | Inquiry about Implementation of "Aloha Unleashed" | First and foremost, I would like to extend my heartfelt gratitude for your incredible work on the LeRobot project.
I recently came across the paper "Aloha Unleashed" published by the Aloha team a few months ago, and I am curious to know if there are any plans to implement the methodologies and findings from this pap... | https://github.com/huggingface/lerobot/issues/451 | open | [
"question",
"robots"
] | 2024-09-23T09:14:56Z | 2025-08-20T19:42:37Z | null | lightfate |
pytorch/TensorRT | 3,173 | ❓ [Question] torchscript int8 quantization degradation in recent versions | TS INT8 degradation in later versions
Hi all, I get a degradation in results after INT8 quantization with TorchScript after updating my torch_tensorrt, torch, and tensorrt versions. I have listed the dependencies for both cases below; is this expected?
Earlier Version (Works Well):
Torch: 2.0.1
CUDA: 11.8
torc... | https://github.com/pytorch/TensorRT/issues/3173 | open | [
"question"
] | 2024-09-22T14:46:00Z | 2024-09-23T16:44:03Z | null | seymurkafkas |
huggingface/text-generation-inference | 2,541 | How to serve local models with python package (not docker) | ### System Info
`pip install text-generation` with version `0.6.0`
I need to use the Python package, not Docker
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
```
from text_generation import Client
# Initialize the c... | https://github.com/huggingface/text-generation-inference/issues/2541 | open | [] | 2024-09-20T21:10:09Z | 2024-09-26T06:55:50Z | null | hahmad2008 |
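Worth noting for the record above: the `text_generation` PyPI package is only a client; serving a local model still needs a running TGI server (e.g. started with `text-generation-launcher --model-id /path/to/local/model`). Once a server is up, you can also skip the client package and hit the REST `/generate` endpoint with the stdlib. A sketch that only builds the request (the host/port are assumptions; send it with `urllib.request.urlopen` when a server is actually running):

```python
import json
from urllib import request

def build_generate_request(base_url: str, prompt: str, max_new_tokens: int = 64) -> request.Request:
    """Build a POST to TGI's /generate endpoint; send with urllib.request.urlopen."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("http://127.0.0.1:8080", "What is Deep Learning?")
print(req.full_url)  # http://127.0.0.1:8080/generate
```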
huggingface/competitions | 41 | how to debug a script submission | Is there a way to see logs or errors of a script-based submission? | https://github.com/huggingface/competitions/issues/41 | closed | [] | 2024-09-20T18:04:44Z | 2024-09-30T16:08:42Z | null | ktrapeznikov |
huggingface/diffusers | 9,485 | Can we allow making everything on gpu/cuda for scheduler? | **What API design would you like to have changed or added to the library? Why?**
Is it possible to allow setting every tensor attribute of scheduler to cuda device?
In https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py
It looks like that attributes like `scheduler.alphas_cu... | https://github.com/huggingface/diffusers/issues/9485 | open | [
"stale",
"scheduler",
"performance"
] | 2024-09-20T12:38:16Z | 2024-12-17T15:04:46Z | 14 | xiang9156 |
pytorch/serve | 3,325 | Kserve management api for registering new models | I have a setup where the Kserve endpoint is mounted to PVC, which reads model files on startup and loads them.
Is it possible to register a new version of the model (after I add it to the PVC) without restarting the whole KServe endpoint with other models and expanding config.properties?
Torchserve supports this use c... | https://github.com/pytorch/serve/issues/3325 | open | [
"question"
] | 2024-09-20T10:47:44Z | 2024-09-20T19:28:03Z | null | matej14086 |
huggingface/optimum | 2,032 | ONNX support for decision transformers | ### Feature request
I am training offline RL using a decision transformer and trying to convert it to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "sequence-classification"
# load config
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(m... | https://github.com/huggingface/optimum/issues/2032 | closed | [
"onnx"
] | 2024-09-20T08:45:28Z | 2024-11-25T13:00:02Z | 1 | ra9hur |
huggingface/setfit | 558 | How to improve the accuracy while classifying short text with less context | Hi, my use case is to classify Job Titles into Functional Areas. I fine-tuned `all-mpnet-base-v2` with the help of setfit by providing 10+ examples for each class (Functional Area).
I got `82%` accuracy on running the evaluation on my test set. I observed some of the simple & straightforward job titles are clas... | https://github.com/huggingface/setfit/issues/558 | open | [] | 2024-09-20T06:09:07Z | 2024-11-11T11:23:31Z | null | 29swastik |
huggingface/safetensors | 527 | [Question] Comparison with the zarr format? | Hi,
I know that safetensors are widely used nowadays in HF, and the comparisons made in this repo's README file make a lot of sense.
However, I am now surprised to see that there is no comparison with zarr, which is probably the most widely used format to store tensors in an universal, compressed and scalable way... | https://github.com/huggingface/safetensors/issues/527 | open | [] | 2024-09-19T13:32:17Z | 2025-01-13T17:56:46Z | 13 | julioasotodv |
huggingface/transformers | 33,584 | How to fine-tune QLoRA with a custom Trainer. | The full model fine-tuning code is given below. How can I modify the code to train a QLoRA-based model?
```python
import sys
import os
current_directory = os.path.dirname(os.path.abspath(__file__))
sys.path.append(current_directory)
from src.custom_dataset import RawFileDataset
import copy
import random
from dataclasse... | https://github.com/huggingface/transformers/issues/33584 | closed | [
"trainer",
"Quantization"
] | 2024-09-19T09:40:00Z | 2024-10-28T08:05:06Z | null | ankitprezent |
huggingface/diffusers | 9,470 | Prompt scheduling in Diffusers like A1111 | Hi everyone, I have a question about how to implement A1111's [prompt scheduling feature](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) using the diffusers library.
**Example prompt:** Official portrait of a smiling world war ii general, `[male:female:0.99]`, cheerful, happy, det... | https://github.com/huggingface/diffusers/issues/9470 | closed | [] | 2024-09-19T09:07:30Z | 2024-10-19T17:22:23Z | 5 | linhbeige |
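One way to approximate A1111's `[before:after:when]` editing in diffusers is to resolve the prompt for each denoising step and re-encode the embeddings whenever it changes (e.g. from a per-step callback). A minimal parser sketch, assuming `when` is a fraction of total steps (A1111 also accepts absolute step counts, which this does not handle):

```python
import re

_SWITCH = re.compile(r"\[([^:\[\]]*):([^:\[\]]*):([0-9.]+)\]")

def resolve_prompt(template: str, step: int, total_steps: int) -> str:
    """Return the concrete prompt for a given denoising step."""
    frac = step / total_steps

    def pick(match: re.Match) -> str:
        before, after, when = match.group(1), match.group(2), float(match.group(3))
        return before if frac < when else after

    return _SWITCH.sub(pick, template)

template = "portrait of a smiling general, [male:female:0.5], cheerful"
print(resolve_prompt(template, 10, 100))  # portrait of a smiling general, male, cheerful
print(resolve_prompt(template, 60, 100))  # portrait of a smiling general, female, cheerful
```

With `[male:female:0.99]` as in the example prompt, the switch would only happen on the very last steps of sampling.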
huggingface/chat-ui | 1,476 | Update docs to explain how to use `tokenizer` field for chat prompt formats | ## Bug description
In README.md, it's stated that the prompts used in production for HuggingChat can be found in PROMPTS.md.
However, PROMPTS.md has not been updated for 7 months and there are several prompts missing for newer models.
| https://github.com/huggingface/chat-ui/issues/1476 | open | [
"bug",
"documentation"
] | 2024-09-18T22:49:53Z | 2024-09-20T18:05:05Z | null | horsten |
huggingface/transformers.js | 935 | Is converting a Gemma 2B quantized compatible with transformers.js/onnx? | ### Question
I'm new to development and wanted to know: would converting a Gemma 2B using the Optimum converter work for this model?
"question"
] | 2024-09-18T15:57:55Z | 2024-09-24T20:26:53Z | null | iamhenry |
huggingface/dataset-viewer | 3,063 | Simplify test code where a dataset is set as gated | [huggingface_hub@0.25.0](https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0) provides an API to set a repository as gated.
We had included a custom version of `update_repo_settings` because it lacked a `gated` parameter. Now we can switch back to the `huggingface_hub` method
https://github.com/hu... | https://github.com/huggingface/dataset-viewer/issues/3063 | closed | [
"good first issue",
"tests",
"refactoring / architecture",
"dependencies"
] | 2024-09-18T09:08:14Z | 2025-07-17T15:00:40Z | null | severo |
huggingface/transformers.js | 934 | Repeating tokens in TextStreamer | ### Question
```
import {
AutoTokenizer,
AutoModelForCausalLM,
TextStreamer,
InterruptableStoppingCriteria,
} from "@huggingface/transformers";
class TextGenerationPipeline {
static model = null;
static tokenizer = null;
static streamer = null;
static async getInstance(
progress_cal... | https://github.com/huggingface/transformers.js/issues/934 | closed | [
"question"
] | 2024-09-18T02:53:36Z | 2025-10-13T04:50:11Z | null | chandeldivyam |
huggingface/transformers.js | 933 | Uncaught (in promise) TypeError: r.logits is not iterable | ### Question
Hey guys,
I have been trying to train a model for text classification and then convert it to an ONNX file for use in Transformers.js, following this video
https://www.youtube.com/watch?v=W_lUGPMW_Eg
I keep getting the error Uncaught (in promise) TypeError: r.logits is not iterable
Any ideas on wher... | https://github.com/huggingface/transformers.js/issues/933 | open | [
"question"
] | 2024-09-16T20:26:02Z | 2024-09-17T19:35:26Z | null | Joseff-Evans |
huggingface/chat-ui | 1,472 | Mistral api configuration without Cloudflare | I'd like to set up a local deployment using **only the Mistral API**: https://docs.mistral.ai/api.
Can I use ChatUI without an HF deployment and a Cloudflare account?
I leave the .env unchanged and overwrite env.local with the following code:
```yml
AGENT_ID=<my_agent_id_from_mistral>
MISTRAL_API_KEY=<mytok...
"support"
] | 2024-09-16T18:51:09Z | 2024-09-17T08:43:40Z | 0 | JonasMedu |
huggingface/transformers.js | 932 | Best small model for text generation? | ### Question
I'm looking to build an AI journaling app that helps you reflect on your journal entries.
I'm looking for a model (like GPT or Claude) that will take the selected text and provide insights based on a prompt I provide.
In this case the prompt will provide suggestions based on psychology techniques lik... | https://github.com/huggingface/transformers.js/issues/932 | open | [
"question"
] | 2024-09-16T18:06:23Z | 2024-09-26T08:06:35Z | null | iamhenry |
pytorch/xla | 8,022 | Add documentation for `pip install[pallas]` | ## 📚 Documentation
Please add installation documentation for `pip install[pallas]` to the landing page README instructions: https://github.com/pytorch/xla/blob/master/setup.py#L318
Accordingly, this documentation should clearly explain how users choose between the two: https://pypi.org/project/torch-xla/
cc @... | https://github.com/pytorch/xla/issues/8022 | open | [
"documentation"
] | 2024-09-16T15:50:14Z | 2024-09-16T15:50:15Z | 0 | miladm |
huggingface/distil-whisper | 149 | How to load the model using the openai-whisper package? | How to load the model using the openai-whisper package? | https://github.com/huggingface/distil-whisper/issues/149 | open | [] | 2024-09-15T15:08:46Z | 2024-09-15T15:08:46Z | null | lucasjinreal |
huggingface/competitions | 40 | How to modify the competition | Hi! I created a new competition using the [tool given here](https://huggingface.co/spaces/competitions/create). All good up till here.
Then the space was automatically running. To modify the competition, I cloned the repository of the space locally with the command given on the UI
```
git clone https://huggingface... | https://github.com/huggingface/competitions/issues/40 | closed | [
"stale"
] | 2024-09-15T13:45:26Z | 2024-10-08T15:06:28Z | null | dakshvar22 |
huggingface/speech-to-speech | 101 | I am really really curious about how to set up this project on a server to serve multiple users. I have been trying for a long time but haven't come up with a very good solution. | https://github.com/huggingface/speech-to-speech/issues/101 | open | [] | 2024-09-15T13:42:18Z | 2025-02-04T15:44:31Z | null | demoBBB | |
pytorch/torchchat | 1,147 | [distributed][perf] ensure that all decoding ops are happening on gpu with no cpu sync | ### 🐛 Describe the bug
per @kwen2501 - when we are doing the decoding step:
~~~
next_token = torch.tensor([decode_results[0][0]], device=device)
~~~
"nit: I am not sure if the use of torch.tensor here would cause a sync from GPU to CPU (to get the scalar) then move to the GPU again (to create the tensor).
If there ... | https://github.com/pytorch/torchchat/issues/1147 | open | [
"performance",
"Distributed"
] | 2024-09-15T00:09:56Z | 2024-09-17T22:57:11Z | 0 | lessw2020 |