| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 841 | Support opus-mt-mul-en translation in WebGPU | ### Question
I've been having some trouble where translation sometimes wasn't working. For example, I just tried translating Polish into English using `opus-mt-mul-en`, but it outputs empty strings.
So I started looking for what could be wrong, and in the Transformers.js source code I found this `marian.py` file:
... | https://github.com/huggingface/transformers.js/issues/841 | closed | [
"question"
] | 2024-07-09T11:52:12Z | 2024-10-07T15:34:54Z | null | flatsiedatsie |
huggingface/parler-tts | 83 | How big a dataset is needed to train the model? | I used 560+ hours of libritts_R data to train the model (187M) from scratch, but the audio synthesized by the model is not correct.
Is this because the size of the dataset is not large enough? | https://github.com/huggingface/parler-tts/issues/83 | open | [] | 2024-07-09T03:56:42Z | 2024-09-21T10:46:39Z | null | zyy-fc |
huggingface/datatrove | 242 | how to postpone filter init till it's running | So it appears that currently I can't instantiate a model on a gpu because the filter object is created by the launcher, which either doesn't have a gpu, or it is most likely the wrong gpu even if it has one, since we would need a dedicated gpu(s) for each task.
Is it possible to add a 2nd init which would be the use... | https://github.com/huggingface/datatrove/issues/242 | open | [] | 2024-07-09T01:11:13Z | 2024-07-10T01:36:02Z | null | stas00 |
huggingface/hub-docs | 1,328 | Document how to filter and save searches on the hub (e.g. by model format, only LoRAs, by date range etc...) | **Doc request**
I'd really like to see documentation that clarifies how users can filter searches when browsing models on the Hub.
Things I can't seem to find that I would expect / would make our lives better:
- A selection list or drop down to filter by popular model formats (GGUF, EXL2 etc...)
- A filte... | https://github.com/huggingface/hub-docs/issues/1328 | open | [] | 2024-07-08T22:51:55Z | 2024-07-10T19:17:42Z | null | sammcj |
huggingface/candle | 2,323 | How to do freeze VarMap Vars? | Hello everybody,
Is there a way to freeze all Var tensors in the VarMap, as in the snippet below?
I mean something like implementing the `Iterator` trait, detaching the contained tensors from the graph, and adding a Var which can be trained.
```
# Freeze all the pre-trained layers
for param in model.par... | https://github.com/huggingface/candle/issues/2323 | open | [] | 2024-07-08T15:14:54Z | 2024-07-08T15:14:54Z | null | mohamed-180 |
huggingface/trl | 1,815 | How to use DoRA with ORPO | Hi! I'm running experiments where I'm comparing SFT to ORPO.
For SFT I currently initialize a `trl.SFTTrainer`, and pass `args=transformers.TrainingArguments(..., use_dora=True, ...)`.
For ORPO I'm supposed to pass `args=trl.ORPOConfig`, but according to the documentation this doesn't seem to support passing `use... | https://github.com/huggingface/trl/issues/1815 | closed | [] | 2024-07-08T11:12:48Z | 2024-07-08T15:39:42Z | null | julianstastny |
pytorch/pytorch | 130,238 | how to simplify torch.fx like using onnxsim? | ### 🚀 The feature, motivation and pitch
There is a lack of corresponding tools to simplify the exported FX model and count the FLOPs, memory usage, etc.
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/pytorch/issues/130238 | open | [
"triaged"
] | 2024-07-08T08:30:28Z | 2024-08-16T13:40:42Z | null | MaltoseFlower |
huggingface/text-generation-inference | 2,200 | How to clean the TGI guidance cache? | I use TGI guidance to enforce LLM choose a tool.
However, when I change the description of the tool, I find TGI does not re-compile the new grammar.
Therefore, I want to know how to clean the compiled grammar. | https://github.com/huggingface/text-generation-inference/issues/2200 | closed | [] | 2024-07-08T05:37:55Z | 2024-07-18T15:01:07Z | null | EdisonE3 |
pytorch/data | 1,283 | best practice for `snapshot_every_n_steps` | Hello,
Thank you for your awesome implementation of StatefulDataloader.
I have a question about `snapshot_every_n_steps`. It seems there is not much detailed explanation about this argument.
* Will frequent snapshots (i.e., `snapshot_every_n_steps=1`) cause a data loading burden?
* What is the best practice f... | https://github.com/meta-pytorch/data/issues/1283 | open | [
"documentation"
] | 2024-07-07T03:56:03Z | 2024-11-17T19:41:33Z | 5 | ShoufaChen |
huggingface/transformers.js | 837 | Model downloads or running on server? | ### Question
Hey there,
I am using simple hosting with cPanel view as the admin. If I upload the ONNX model files to the file manager as well as the JS script to run the model, will it still need to download the model or will it not, since the file is uploaded there, along with the script. Provided of course that I d... | https://github.com/huggingface/transformers.js/issues/837 | closed | [
"question"
] | 2024-07-06T23:07:15Z | 2025-01-20T19:50:12Z | null | moses-mbaga |
pytorch/vision | 8,515 | How to write your own v2 transforms example does not work | ### 🐛 Describe the bug
I copy pasted the custom transform from your [tutorial page](https://pytorch.org/vision/stable/auto_examples/transforms/plot_custom_transforms.html#:~:text=How%20to%20write%20your%20own%20v2%20transforms%20Note,from%20torchvision%20import%20tv_tensors%20from%20torchvision.transforms%20import%20... | https://github.com/pytorch/vision/issues/8515 | open | [] | 2024-07-06T23:04:22Z | 2024-07-10T21:59:25Z | null | TonyCongqianWang |
pytorch/xla | 7,635 | Inconsistency between xla/examples/train_resnet_base.py and docs | ## 📚 Documentation
This isn't necessarily an issue with the documentation, but an inconsistency between the documentation and the simplest [Pytorch XLA example](https://github.com/pytorch/xla/blob/master/examples/train_resnet_base.py). The [docs](https://pytorch.org/xla/release/2.3/index.html) say that the one key ... | https://github.com/pytorch/xla/issues/7635 | closed | [
"question"
] | 2024-07-06T19:52:50Z | 2025-04-03T14:51:15Z | null | davidaknowles |
pytorch/pytorch | 130,137 | How to get stream operators in custom backend compiler ? | ### 🐛 Describe the bug
Hi, when I use a custom backend, I find that the fx graph that custom compiler gets does not have the stream related operations.
Then I found that the fx graph dropped those stream operations after aot_module_simplified.
So, I want to know how can we get a fx graph that contains stream-rela... | https://github.com/pytorch/pytorch/issues/130137 | open | [
"oncall: distributed",
"triaged",
"oncall: pt2"
] | 2024-07-05T03:41:24Z | 2024-11-27T05:20:33Z | null | wbigat |
huggingface/lerobot | 305 | how to eval the policy trained by lerobot in real env? | ### System Info
```Shell
how to eval the policy trained by lerobot in real env?
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
in the code, i have not found any solution to transfer policy rollout to ... | https://github.com/huggingface/lerobot/issues/305 | closed | [] | 2024-07-05T03:23:01Z | 2024-07-23T09:08:27Z | null | cong1024 |
pytorch/xla | 7,634 | Failed to install xla gpu | ## ❓ Questions and Help
pip install torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl
But got the error:
ERROR: torch_xla-2.2.0-cp310-cp310-manylinux_2_28_x86_64.whl is not a supported wheel on this platform.
How can I install torch_xla on GPU? | https://github.com/pytorch/xla/issues/7634 | closed | [
"xla:gpu"
] | 2024-07-05T02:37:12Z | 2024-08-05T21:40:28Z | 1 | Beakboomboom |
huggingface/transformers.js | 836 | How do I free up memory after transliteration | ### Question
After I executed the translation in the worker, it seems the memory could not be reclaimed when I called `pipeline.dispose()`; the memory would be reclaimed only when the worker was closed. Can you help me with this question? | https://github.com/huggingface/transformers.js/issues/836 | closed | [
"question"
] | 2024-07-04T15:16:33Z | 2024-07-05T07:19:31Z | null | raodaqi |
huggingface/transformers | 31,790 | How to implement bind_tools to custom LLM from huggingface pipeline(Llama-3) for a custom agent |
Example Code
```
from transformers import AutoConfig, AutoTokenizer, BitsAndBytesConfig

name = "meta-llama/Meta-Llama-3-8B-Instruct"
auth_token = ""
tokenizer = AutoTokenizer.from_pretrained(name,use_auth_token=auth_token)
bnb_config = BitsAndBytesConfig(
load_in_8bit=True,
)
model_config = AutoConfig.from_pretrained(
name,
use_auth_token=auth_token,
... | https://github.com/huggingface/transformers/issues/31790 | closed | [] | 2024-07-04T08:59:38Z | 2024-08-13T08:04:24Z | null | talhaty |
pytorch/xla | 7,633 | Multiprocess inference warning: ignoring nprocs | ## ❓ Questions and Help
When I ran multiprocess inference with the Hugging Face Transformers framework, I used xmp.spawn(perform_inference, args=(args,), nprocs=4), wanting to run 4 processes at once. However, it reported a warning: WARNING:root:Unsupported nprocs (4), ignoring... I wonder if it is a bug or it has any mista... | https://github.com/pytorch/xla/issues/7633 | closed | [
"question",
"distributed"
] | 2024-07-04T08:28:47Z | 2025-04-03T14:52:10Z | null | SileonQuinn |
huggingface/diffusers | 8,788 | VAE Tiling not supported with SD3 for non power of 2 images? | ### Describe the bug
VAE tiling works for SD3 with power-of-2 images, but not with other alignments.
The mentioned issues with VAE tiling are due to: [vae/config.json](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/blob/main/vae/config.json)
Having:
```
"use_post_quant_conv": false,
"... | https://github.com/huggingface/diffusers/issues/8788 | closed | [
"bug"
] | 2024-07-04T03:52:54Z | 2024-07-11T20:41:37Z | 2 | Teriks |
huggingface/diffusers | 8,785 | adding PAG Support for Hunyuan-DIT and Pixart-Sigma | We recently added PAG support for SDXL. Is anyone interested in extending PAG support to Hunyuan-DIT and Pixart-Sigma?
There is no implementation available, so it is a bit of a research-oriented project (= fun!!), and you can get direct feedback from the authors @sunovivid @HyoungwonCho
to add PAG support to n... | https://github.com/huggingface/diffusers/issues/8785 | closed | [
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-07-03T18:17:32Z | 2024-08-30T11:09:04Z | 4 | yiyixuxu |
huggingface/diffusers | 8,780 | Model and input data type is not same | **Is your feature request related to a problem? Please describe.**
Hi, when I trained the SD v1.5 model in fp16 mode using the `examples/text_to_image/train_text_to_image.py` file, I found a mismatch between the unet model and the input data. Specifically, in this [line](https://github.com/huggingface/diffusers/blob/... | https://github.com/huggingface/diffusers/issues/8780 | open | [
"stale"
] | 2024-07-03T06:57:44Z | 2024-09-14T15:07:36Z | 1 | andyjiang1116 |
huggingface/peft | 1,903 | How to use multiple GPUs | ### System Info
peft=0.11.1
python=3.10
### Who can help?
When I run this script, there is no problem with a single GPU. When I try to run 2 GPUs, the system resources show that the utilization rate of each GPU is only half. When I try to increase `per_device_train_batch_size` and `gradient_accumulation_steps`, t... | https://github.com/huggingface/peft/issues/1903 | closed | [] | 2024-07-03T02:25:36Z | 2024-08-11T15:03:29Z | null | Lihwnlp |
pytorch/xla | 7,622 | How to avoid compilation in a section of code? | ## ❓ Questions and Help
We are using PyTorch XLA with TPUs to train multi-modal language models.
We can make most of the code, such as image encoding and the forward pass in the LLM backbone, in a static shape, which XLA handles well. However, making the part that fuses image and text embeddings into the input embed... | https://github.com/pytorch/xla/issues/7622 | closed | [
"question"
] | 2024-07-03T00:15:12Z | 2025-04-03T14:54:28Z | null | Jiayi-Pan |
pytorch/xla | 7,614 | Dynamo persistent cache real-time look-up | ## 🚀 Feature
As described in https://github.com/pytorch/pytorch/issues/125958, we are integrating with vLLM on TPUs. We see that in the warm up phase of the vLLM, it needs to pre-compile ~30 different input shape combinations. PyTorch/XLA does not support dynamic shapes today so torch.compile will keep compiling the ... | https://github.com/pytorch/xla/issues/7614 | closed | [] | 2024-07-02T21:01:36Z | 2024-07-23T01:18:34Z | 2 | wonjoo-wj |
pytorch/vision | 8,510 | Obscure error messages using VideoReader when PyAV version too old/not installed | ### 🐛 Describe the bug
When a sufficiently recent version of PyAV is not installed, the script `vision/torchvision/io/video_reader.py` initialises the variable `av` to an `ImportError` object that contains a description of the issue, either at line 38:
```python
av = ImportError(
"""\
PyAV is not installe... | https://github.com/pytorch/vision/issues/8510 | open | [] | 2024-07-02T18:51:34Z | 2024-07-04T10:43:51Z | 1 | occipita |
huggingface/text-embeddings-inference | 320 | how to deploy bge-reranker-v2-m3 on Text-embeddings-inference | https://github.com/huggingface/text-embeddings-inference/issues/320 | closed | [] | 2024-07-02T15:18:48Z | 2024-07-08T10:20:05Z | null | kennard520 | |
huggingface/text-embeddings-inference | 318 | How to deploy bge-reranker-v2-m3 for multiple threads? | https://github.com/huggingface/text-embeddings-inference/issues/318 | closed | [] | 2024-07-02T14:56:33Z | 2024-07-08T10:20:01Z | null | kennard520 | |
huggingface/diffusers | 8,771 | Removing LoRAAttnProcessor causes many dependencies to fail | ### Describe the bug
https://github.com/huggingface/diffusers/pull/8623 removed the obsolete `LoRAAttnProcessor`, which in principle is a good thing, but it was done without considering where that feature is currently in use, so it breaks many (and I mean many) community pipelines
it also breaks some core libraries s... | https://github.com/huggingface/diffusers/issues/8771 | closed | [
"bug"
] | 2024-07-02T13:11:33Z | 2024-07-03T16:37:08Z | 1 | vladmandic |
pytorch/pytorch | 129,949 | How to get stream operators in custom backend compiler ? | ### 🐛 Describe the bug
Hi, when I use a custom backend, I find that the fx graph that custom compiler gets does not have the stream related operations.
Then I found that the fx graph dropped those stream operations after aot_module_simplified.
So, I want to know how can we get a fx graph that contains stream-rela... | https://github.com/pytorch/pytorch/issues/129949 | closed | [
"oncall: pt2"
] | 2024-07-02T09:05:54Z | 2024-07-05T06:31:36Z | null | wbigat2 |
pytorch/xla | 7,607 | How to use spmd to support hybrid shard data parallelism? | ## ❓ Questions and Help
FSDP can be expressed well in SPMD, but HSDP seems impossible to express. Is there any way to express HSDP in SPMD? | https://github.com/pytorch/xla/issues/7607 | closed | [
"question"
] | 2024-07-02T08:05:47Z | 2025-04-03T14:54:52Z | null | mars1248 |
huggingface/candle | 2,307 | How to get all layers attentions? | I only see that candle returns last_hidden_state, but not all_hidden_states and attentions. I want to get attentions. Can I submit a PR to do this? I originally wanted to define the Model myself, but I found that all its methods are private | https://github.com/huggingface/candle/issues/2307 | open | [] | 2024-07-02T02:16:52Z | 2024-07-02T02:16:52Z | null | kitty-eu-org |
huggingface/diffusers | 8,760 | Clarification Needed on Hardcoded Value in Conditional Statement in LeditPP | Hello @manuelbrack,
I was reviewing the source code and came across a line that seems to have a hardcoded value in a conditional statement. The line in question is:
https://github.com/huggingface/diffusers/blob/0bae6e447cba0459456c4f7e7e87d7db141d3235/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_dif... | https://github.com/huggingface/diffusers/issues/8760 | open | [
"stale"
] | 2024-07-01T20:12:20Z | 2024-12-13T15:05:35Z | 3 | ardofski |
pytorch/pytorch | 129,877 | Eager and PT2 inconsistent on whether or not scalar tensor is allowed as input where int is expected | ### 🐛 Describe the bug
Internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1454391288532411/
The error looks like this:
```
TorchRuntimeError: Failed running call_function fbgemm.jagged_1d_to_dense(*(), **{'values': FakeTensor(..., device='cuda:7', size=(260039,), dtype=torch.int64), 'offsets... | https://github.com/pytorch/pytorch/issues/129877 | closed | [
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2024-07-01T14:11:55Z | 2025-07-30T17:43:13Z | null | ezyang |
huggingface/diffusers | 8,748 | SD3 cannot finetunes a better model (hand and face deformation)? | ### Describe the bug
I want to finetune SD3 to improve its human generation quality with a 3-million-sample high-quality human dataset (which has proven useful on SDXL and other models), but hand and face deformation doesn't improve much after two days of training.
I am using [train](https://github.com/huggingface/di... | https://github.com/huggingface/diffusers/issues/8748 | closed | [
"bug"
] | 2024-07-01T07:21:19Z | 2024-07-17T06:01:31Z | 4 | KaiWU5 |
huggingface/transformers.js | 833 | convert.py has errors when i use yolov9 | ### Question
Your repo
https://huggingface.co/Xenova/gelan-c
is really good and helpful for me,
but I need to use the gelan-t / gelan-s editions because of mobile-phone deployment.
When I use convert.py to convert to the ONNX edition, errors happen:
The checkpoint you are trying to load has model type `yolov9` but Tra... | https://github.com/huggingface/transformers.js/issues/833 | open | [
"question"
] | 2024-07-01T03:51:53Z | 2024-07-18T07:04:10Z | null | jifeng632 |
huggingface/transformers | 31,722 | how to generate router_logits in moe models using model.generate()? | ### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu121 (True)
- Tensor... | https://github.com/huggingface/transformers/issues/31722 | closed | [
"Generation"
] | 2024-07-01T03:48:09Z | 2024-09-13T08:07:40Z | null | Jimmy-Lu |
huggingface/transformers.js | 832 | How to load version 3 from CDN? | ### Question
The [README.md file on v3 branch](https://github.com/xenova/transformers.js/tree/v3?tab=readme-ov-file#installation) has a html snippet to import transformers version 3 from a CDN.
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alp... | https://github.com/huggingface/transformers.js/issues/832 | closed | [
"question"
] | 2024-06-30T23:39:08Z | 2024-10-10T12:23:41Z | null | geoffroy-noel-ddh |
huggingface/transformers | 31,717 | how to remove kv cache? | ### Feature request
When I use the generate() function of a language model for inference, the kv-cache is also stored in the GPU memory. Is there any way to clear this kv-cache before continuing to call generate()?
### Motivation
I have a lot of text to process, so I use a for loop to call generate(). To avoid OOM, ... | https://github.com/huggingface/transformers/issues/31717 | closed | [
"Feature request",
"Generation",
"Cache"
] | 2024-06-30T12:09:48Z | 2024-11-05T01:34:42Z | null | TuuSiwei |
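One common pattern for this situation — a sketch, not an official API; `model` and `tokenizer` are assumed to be an already-loaded causal LM and its tokenizer — is to drop every reference that keeps the cache alive after each call, then hand the freed blocks back to the driver:

```python
import gc

import torch


def generate_batchwise(model, tokenizer, texts, **gen_kwargs):
    """Generate for many prompts without letting old KV caches pile up."""
    results = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model.generate(**inputs, **gen_kwargs)
        results.append(tokenizer.decode(out[0], skip_special_tokens=True))
        # The KV cache lives in tensors reachable from generate()'s outputs;
        # dropping the references lets PyTorch's caching allocator reuse them.
        del inputs, out
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver
    return results
```

Note that `empty_cache()` only releases blocks the allocator has already reclaimed, so the `del` and `gc.collect()` steps matter.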
huggingface/accelerate | 2,904 | How to merge Qlora FSDP weights with an LLM and save model. | https://github.com/huggingface/accelerate/issues/2904 | closed | [] | 2024-06-30T07:00:50Z | 2024-07-01T14:20:53Z | null | Minami-su | |
huggingface/transformers.js | 830 | Error while using the library in nextjs (app based route) | ### Question
Hello
I was going through the issues section to find a solution for the issue I am facing. I tried some of the solutions provided by xenova, but I am getting a WASM fallback error and I have no idea what's happening. I suspect it's webpack, but I wanted clarity.
Th... | https://github.com/huggingface/transformers.js/issues/830 | closed | [
"question"
] | 2024-06-29T15:00:09Z | 2025-02-10T02:00:25Z | null | rr-jino-jose |
pytorch/data | 1,280 | Importing `torchdata.stateful_dataloader` hides `torch` RandomSampler and BatchSampler | ### 🐛 Describe the bug
### Description
In `torchdata.stateful_dataloader.sampler.py`, several Sampler classes in `torch.utils.data` are overwritten:
1. https://github.com/pytorch/data/blob/main/torchdata/stateful_dataloader/sampler.py#L61-L62
2. https://github.com/pytorch/data/blob/main/torchdata/stateful_datalo... | https://github.com/meta-pytorch/data/issues/1280 | closed | [] | 2024-06-28T23:28:50Z | 2024-07-03T18:23:06Z | 8 | byi8220 |
huggingface/candle | 2,294 | How to get raw tensor data? | I am trying to implement an adaptive avg pool in candle. However, I guess my implementation will require an API to get the raw data/storage (storaged in plain/flatten array format).
Wondering if there is such an API for that?
Thanks! | https://github.com/huggingface/candle/issues/2294 | open | [] | 2024-06-28T19:19:45Z | 2024-06-28T21:51:57Z | null | WenheLI |
huggingface/diffusers | 8,730 | Implementation of DDIM, why taking Xt and (t-1) as input? | ### Describe the bug
I have tried to run inference on a diffusion model with DDIM, with the number of inference timesteps = 10 and a maximum of 1000 timesteps.
I have printed the t in the for-loop, and the result is 901, 801, 801, 701, 601, 501, 401, 301, 201, 101, 1. It's really weird to me why 801 appears two times, and why we start f... | https://github.com/huggingface/diffusers/issues/8730 | closed | [
"bug"
] | 2024-06-28T18:45:55Z | 2024-07-01T17:24:49Z | 1 | EPIC-Lab-sjtu |
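For reference, the printed sequence matches what diffusers' `DDIMScheduler` produces with its "leading" timestep spacing and `steps_offset=1` (the Stable Diffusion defaults); a minimal sketch of that computation, assuming those defaults:

```python
# Sketch of the "leading" timestep spacing rule with steps_offset=1;
# illustrative, not the library code itself.
num_train_timesteps = 1000
num_inference_steps = 10
step_ratio = num_train_timesteps // num_inference_steps  # 100

# Evenly spaced from the start, reversed, then shifted by steps_offset.
timesteps = [t * step_ratio + 1 for t in range(num_inference_steps)][::-1]
print(timesteps)  # [901, 801, 701, 601, 501, 401, 301, 201, 101, 1]
```

Under this rule every value appears exactly once, which suggests a repeated 801 would come from the sampling loop or the printout rather than from the spacing itself.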
pytorch/torchtitan | 434 | Question about custom cuda operators for tensor parallelism | We are currently trying to apply torchtitan to MoE models. MoE models require using grouped_gemm https://github.com/fanshiqing/grouped_gemm. GroupedGemm ops basically follow the same rule as in ColumnLinear and RowLinear. Is there any way to make custom ops dtensor compatible? Great thanks for help! | https://github.com/pytorch/torchtitan/issues/434 | open | [
"question"
] | 2024-06-28T12:29:43Z | 2024-11-22T00:04:50Z | null | vermouth1992 |
huggingface/safetensors | 490 | How to save model checkpoint from a distributed training from multiple nodes? | Hello,
When I use accelerator and deepspeed Zero3 to train the model in one node with 8 GPUs, the following code smoothly saves the model checkpoint
```
ds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded
if self.accelerator.is_main_process:
save_file(ds_state_dict, f"{ou... | https://github.com/huggingface/safetensors/issues/490 | closed | [
"Stale"
] | 2024-06-28T04:59:45Z | 2024-07-31T11:46:06Z | null | Emerald01 |
huggingface/diffusers | 8,728 | Using `torchsde.BrownianInterval` instead of `torchsde.BrownianTree` in class `BatchedBrownianTree` | **Is your feature request related to a problem? Please describe.**
When I was doing some optimization for my pipeline, I found that the BrownianTree somehow took a bit more time.
**Describe the solution you'd like.**
I dug further into the torchsde documentation and found that they encourage using `BrownianInterval` to ... | https://github.com/huggingface/diffusers/issues/8728 | closed | [] | 2024-06-28T04:33:55Z | 2024-09-12T08:46:54Z | 5 | dianyo |
huggingface/transformers.js | 826 | Support for GLiNER models? | ### Question
Is there a reason why models from the GLiNER family can't be supported?
I see they use a specialized library; does it take a lot of code to make them work? | https://github.com/huggingface/transformers.js/issues/826 | open | [
"question"
] | 2024-06-28T01:54:37Z | 2024-10-04T07:59:16Z | null | Madd0g |
pytorch/torchtitan | 431 | Question about Pipeline parallelism | Just wonder does the current PipelineStage API supports variable length input shapes like in Megatron? https://github.com/NVIDIA/Megatron-LM/blob/e33c8f78a35765d5aa37475a144da60e8a2349d1/megatron/core/model_parallel_config.py#L212 This is particular useful for packed inputs where all the paddings are removed. | https://github.com/pytorch/torchtitan/issues/431 | open | [
"enhancement",
"question",
"post training"
] | 2024-06-27T15:31:52Z | 2025-10-02T02:32:07Z | null | vermouth1992 |
huggingface/diffusers | 8,721 | how to unload a pipeline | how to unload a pipeline and release the gpu memory | https://github.com/huggingface/diffusers/issues/8721 | closed | [] | 2024-06-27T10:04:39Z | 2024-07-02T14:40:39Z | null | nono909090 |
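A common recipe for this — a sketch under the assumption that `pipe` is a diffusers `DiffusionPipeline`, not an official teardown API:

```python
import gc

import torch


def unload_pipeline(pipe):
    """Drop a pipeline and hand its GPU memory back to the driver."""
    pipe.to("cpu")  # release the CUDA copies of the weights first
    del pipe        # drop the last Python reference
    gc.collect()    # collect any reference cycles still holding tensors
    if torch.cuda.is_available():
        # Return blocks held by PyTorch's caching allocator to the driver.
        torch.cuda.empty_cache()
```

Any other live reference to the pipeline (or to its components) will keep the memory allocated, so make sure nothing else still points at it.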
huggingface/transformers.js | 825 | Are there any examples on how to use paligemma model with transformer.js | ### Question
First of all, thanks for this amazing library!
So my question is: I happened to see this model available on transformers.js:
https://huggingface.co/Xenova/paligemma-3b-mix-224
But unfortunately I can't find any example on how to run the `image-text-to-text` pipeline. Are there are resources you c... | https://github.com/huggingface/transformers.js/issues/825 | open | [
"question"
] | 2024-06-27T09:49:22Z | 2024-06-29T02:39:27Z | null | alextanhongpin |
huggingface/lerobot | 294 | after training using lerobot framework, how to infer the trained policy directly in a real environment (e.g. aloha code)? I have not found a solution yet | ### System Info
```Shell
os ubuntu20.04,
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
not yet
### Expected behavior
how to directly eval the policy trained by lerobot in aloha ? | https://github.com/huggingface/lerobot/issues/294 | closed | [
"question",
"policies",
"robots",
"stale"
] | 2024-06-27T03:16:19Z | 2025-10-23T02:29:25Z | null | cong1024 |
pytorch/serve | 3,206 | Docker swarm with TorchServe workflow | I want to scale the workflows through "Docker Swarm". (I hope it is possible, if not please tell me how one can achieve this? I know it is not supported yet through TorchServe directly, that is why I'm using docker to scale the workflow.)
I have few questions related to using TorchServe as a docker service in swarm mo... | https://github.com/pytorch/serve/issues/3206 | closed | [
"triaged",
"workflowx"
] | 2024-06-26T16:20:40Z | 2024-07-25T14:54:32Z | 6 | KD1994 |
huggingface/chat-ui | 1,312 | [v0.9.1] Error: "Cannot resolve directory $env" | ## Issue
For all client-side components, I get this:
```
"Cannot resolve directory $env"
```
<img width="589" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/26fa2eef-dbff-44f6-bb86-7700387abdf2">
<img width="837" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769... | https://github.com/huggingface/chat-ui/issues/1312 | open | [
"support"
] | 2024-06-26T13:24:42Z | 2024-06-26T15:14:48Z | 2 | adhishthite |
huggingface/chat-ui | 1,311 | 400 (no body) trying to reach openai compatible server | Hi everyone,
I have the following setup (containers are on the same device):
- Container 1: Nvidia NIM (openai-compatible) with Llama3 8B Instruct, port 8000;
- Container 2: chat-ui, port 3000.
This is the content of the `.env` file:
```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MODELS=`... | https://github.com/huggingface/chat-ui/issues/1311 | open | [
"support"
] | 2024-06-26T12:34:44Z | 2024-07-22T13:03:18Z | 2 | edesalve |
huggingface/diffusers | 8,710 | Add PAG support to SD1.5 | We recently integrated PAG into diffusers! See this PR [here] (https://github.com/huggingface/diffusers/pull/7944) we added PAG to SDXL
we also want to add PAG support to SD1.5 pipelines! we will need:
- [x] StableDiffusionPAGPipeline (assigned to @shauray8, PR https://github.com/huggingface/diffusers/pull/8725)
... | https://github.com/huggingface/diffusers/issues/8710 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-26T08:23:17Z | 2024-10-09T20:40:59Z | 17 | yiyixuxu |
huggingface/chat-ui | 1,309 | "404 Resource Not Found" when using Azure OpenAI model endpoint | I run `chat-ui` with the `chat-ui-db` docker image. I would like to connect it to my Azure OpenAI API endpoint.
I have set up the `env.local` file as stated in your docs and bound it to the docker container:
```bash
MODELS=`[{
"id": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview",
"displayName": "gpt... | https://github.com/huggingface/chat-ui/issues/1309 | open | [
"support"
] | 2024-06-26T07:16:54Z | 2024-06-26T18:53:51Z | 2 | gqoew |
huggingface/chat-ui | 1,308 | Warning: To load an ES module in Azure environment | Hi Team,
We are currently facing issues deploying our Chat UI solution in Azure Web App. The error encountered in the console log is as follows:
```
npm http fetch GET 200 https://registry.npmjs.org/npm 141ms
(node:124) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs exte... | https://github.com/huggingface/chat-ui/issues/1308 | open | [
"support"
] | 2024-06-26T06:04:45Z | 2024-06-27T09:07:35Z | 3 | pronitagrawalvera |
huggingface/transformers.js | 823 | How to export q4f16.onnx | ### Question
Thanks for providing such a great project, but I have a problem converting the model.
```
For example:
model_q4f16.onnx
```
What command is used to create and export such a q4/f16.onnx model?
Can you give me more tips or help? Thank you | https://github.com/huggingface/transformers.js/issues/823 | closed | [
"question"
] | 2024-06-26T05:36:47Z | 2024-06-26T07:46:57Z | null | juntaosun |
pytorch/pytorch | 129,542 | How to Convert pytorch qat model to tensorrt |
I find that the converted QAT model in PyTorch can't use GPU kernels, but I can't find a function or way to convert it to TensorRT. How can I convert a PyTorch QAT model to TensorRT?
| https://github.com/pytorch/pytorch/issues/129542 | closed | [] | 2024-06-26T02:46:20Z | 2024-06-26T15:42:38Z | null | AnnaTrainingG |
pytorch/xla | 7,466 | Register python implementation for the aten ops | ## ❓ Questions and Help
Currently `F.interpolate(mode='tilinear)'` will be dispatched to `aten::upsample_trilinear3d` which we don't have c++ lowering. There is a python decomp for this op in https://github.com/pytorch/pytorch/blob/ad76da6c16c5dc465e8aac8d913532251db7b400/torch/_decomp/decompositions.py#L3591-L3602 so... | https://github.com/pytorch/xla/issues/7466 | closed | [
"question",
"lowering"
] | 2024-06-25T21:00:50Z | 2025-04-07T12:46:14Z | null | JackCaoG |
pytorch/TensorRT | 2,955 | ❓ [Question] How do you compile a chunk operator with TensorRT? | ## ❓ Question
How do you compile a chunk operator with TensorRT? I have been trying a basic example in a Jupyter Notebook but get an unbroadcastable dimension error. The below code executes in PyTorch inference and torchscript, but cannot be compiled with TensorRT.
## What you have already tried
```import ... | https://github.com/pytorch/TensorRT/issues/2955 | open | [
"question"
] | 2024-06-25T20:37:51Z | 2024-06-25T21:45:45Z | null | joshuageddes |
huggingface/diffusers | 8,700 | [PAG] add `StableDiffusionXLControlNetPAGImg2ImgPipeline` | We recently integrated PAG into diffusers! See the PR here: https://github.com/huggingface/diffusers/pull/7944
Does anyone want to add a `StableDiffusionXLControlNetPAGImg2ImgPipeline`?
1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
2. yo... | https://github.com/huggingface/diffusers/issues/8700 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-25T18:52:18Z | 2024-08-21T17:24:23Z | 6 | yiyixuxu |
huggingface/sentence-transformers | 2,779 | what is the default tokenizer when "No sentence-transformers model found with name"? | I'm trying to use the sentence-transformer dangvantuan/sentence-camembert-large model and I'm getting a "no model found" error. This error is probably because some Sentence-Transformers-specific files are missing from their Hugging Face repo (modules.json and config_sentence_transformers.json).
But then, Sentence Transformer... | https://github.com/huggingface/sentence-transformers/issues/2779 | closed | [] | 2024-06-25T15:17:58Z | 2024-07-05T10:42:27Z | null | Hortatori |
huggingface/accelerate | 2,891 | How to set a custom Config in python code using Accelerate? | Hello everyone!
Could you please advise how to replace the console command for setting a config
```
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2}
```
with code in the Python file script_name.py?
I am expecting something like the following functionality... | https://github.com/huggingface/accelerate/issues/2891 | closed | [] | 2024-06-25T11:56:10Z | 2024-10-07T15:08:01Z | null | konstantinator |
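One way to answer the question above is to generate the config file from Python and invoke the launcher programmatically, which mirrors the console command exactly. A hedged sketch (the config keys, script name, and arguments are illustrative):

```python
# Write an accelerate config from Python, then build the same launch command
# the console invocation would use. The actual launch line stays commented out.
import tempfile
import textwrap

config = textwrap.dedent("""\
    compute_environment: LOCAL_MACHINE
    distributed_type: MULTI_GPU
    num_processes: 2
    mixed_precision: fp16
""")
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(config)
    config_path = f.name

cmd = ["accelerate", "launch", "--config_file", config_path,
       "script_name.py", "--arg1", "value1"]
print(" ".join(cmd))
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to launch
```

Alternatively, many of the same options (e.g. `mixed_precision`, `gradient_accumulation_steps`) can be passed directly to `Accelerator(...)` inside the script, avoiding the YAML file entirely.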
pytorch/ao | 436 | what if below condition? about OCP Microscaling | Assume we have an fp32 tensor like [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 127.99999], and set k to 32 (the default), converting to an fp8 E5M2 MX block.
btw asfloat(0x42FFFFFF) = 127.9999f
from current code, the max absolute value is 127.9999, the u... | https://github.com/pytorch/ao/issues/436 | closed | [
"question",
"mx"
] | 2024-06-25T08:37:17Z | 2024-07-05T16:31:23Z | null | avater210 |
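For the boundary case raised above, here is a sketch of the shared-scale computation. It follows one reading of the OCP MX spec: the block scale exponent is floor(log2(amax)) minus the element format's emax (15 for fp8 E5M2). Treat the exact clamping and rounding rules as an assumption; the spec is authoritative.

```python
import math

EMAX_E5M2 = 15  # largest biased-unbiased exponent of an fp8 E5M2 element

def shared_scale_exponent(block, emax_elem=EMAX_E5M2):
    # Shared scale exponent for one MX block: floor(log2(max|x|)) - emax_elem.
    amax = max(abs(v) for v in block)
    return math.floor(math.log2(amax)) - emax_elem

# Block from the question: 1..31 plus a value just below 128.
block = list(range(1, 32)) + [127.99999]
e = shared_scale_exponent(block)
print(e)  # floor(log2(127.99999)) = 6, so 6 - 15 = -9
```

The interesting point the question circles around: an amax just under 128 yields exponent 6 (scale 2^-9), while exactly 128 would yield 7 (scale 2^-8), so values near that boundary can saturate the element format after scaling.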
huggingface/diffusers | 8,693 | SD3 + SDXL refine fix lying on grass. How to do in diffusers colab workflow? | This is the ComfyUI workflow:

How can I do this in a Diffusers Colab workflow? | https://github.com/huggingface/diffusers/issues/8693 | closed | [
"stale"
] | 2024-06-25T07:30:55Z | 2024-09-23T11:37:25Z | null | s9anus98a |
huggingface/text-generation-inference | 2,113 | how to launch a service using downloaded model weights? | ### System Info
I have downloaded model weights of bge-models, and I want to launch a model service using TGI, the command is :
```
model=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
revision=refs/pr/5
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
... | https://github.com/huggingface/text-generation-inference/issues/2113 | closed | [] | 2024-06-25T03:18:14Z | 2024-06-28T03:50:10Z | null | chenchunhui97 |
huggingface/chat-ui | 1,302 | Assistant feature: Send user query as part of template variable GET request | Trying to integrate RAG as an assistant. Thinking of using a template variable that makes a GET request (with the prompt as the request body), to get the relevant documents as context. Is this possible (i.e., is there a special variable in the system prompt page for the user query), or is there a better way of doing thi... | https://github.com/huggingface/chat-ui/issues/1302 | closed | [] | 2024-06-24T22:27:02Z | 2025-01-02T12:09:23Z | 2 | ethayu |
huggingface/diffusers | 8,683 | Why do Diffusers schedulers produce lower quality outputs compared to ComfyUI? | ### Discussed in https://github.com/huggingface/diffusers/discussions/8682
<sup>Originally posted by **nducthang** June 24, 2024</sup>
Hi,
I'm encountering an issue when comparing the quality of ComfyUI and Diffusers. I've noticed that the output of Diffusers is consistently lower than ComfyUI in many cases, des... | https://github.com/huggingface/diffusers/issues/8683 | closed | [] | 2024-06-24T14:37:19Z | 2024-06-25T06:06:12Z | 20 | nducthang |
pytorch/serve | 3,204 | WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. | Hi, I've been running models with Torchserve 0.11.0 on Sagemaker and noticed following warning:
`WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.` when starting the Torchserve.
I read that this method was removed in Java8 (https://stackoverflow.com/questions/23808803/s... | https://github.com/pytorch/serve/issues/3204 | open | [
"java"
] | 2024-06-24T13:58:02Z | 2024-06-26T21:20:17Z | 1 | aalbersk |
pytorch/ao | 430 | Understanding 8da4w | Hi there,
I'm new to quantization. From my understanding, "8da4w" means that the weights are pre-quantized to 4 bits, and the activations are quantized to 8 bits at runtime. Following this, the GEMM (General Matrix Multiply) operation between weights and activations is computed in the `int8` data type. Do I have thi... | https://github.com/pytorch/ao/issues/430 | closed | [
"question"
] | 2024-06-24T08:43:44Z | 2024-07-23T17:32:41Z | null | DzAvril |
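The reading in the question above is essentially right. A toy illustration of the "8da4w" idea, using symmetric per-tensor quantization as a simplification of what torchao's kernels actually do (they use per-group weight scales and fused int kernels):

```python
# Toy 8da4w: weights quantized to int4 offline, activations to int8 at
# runtime; the matmul accumulates in integers, then dequantizes with the
# product of the two scales. Scheme is illustrative, not torchao's kernel.

def quantize(values, n_bits):
    """Symmetric quantization: returns (int codes, scale)."""
    qmax = 2 ** (n_bits - 1) - 1            # 7 for int4, 127 for int8
    scale = max(abs(v) for v in values) / qmax
    codes = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return codes, scale

w = [0.5, -1.0, 0.25, 0.75]                 # weights: quantized ahead of time
x = [2.0, -3.0, 1.5, 0.5]                   # activations: quantized per call
wq, ws = quantize(w, 4)
xq, xs = quantize(x, 8)

acc = sum(a * b for a, b in zip(wq, xq))    # integer dot product (wide accumulator)
y = acc * ws * xs                           # dequantize the accumulator
print(y)                                    # close to the float result 4.75
```

Note the accumulation happens in a wider integer type (int32 in real kernels), not int8; only the inputs to the multiply are int8/int4.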
pytorch/vision | 8,503 | Can we add datatype support for examples under references | ### 🚀 The feature
Currently the examples under references only support the default datatype (float32); can we support an argument like --data-type to allow the user to specify the datatype for the model?
### Motivation, pitch
Many users like us always need to run different datatypes for the model, like float16 and bfloat16.... | https://github.com/pytorch/vision/issues/8503 | open | [] | 2024-06-24T03:29:04Z | 2024-07-12T15:09:10Z | 2 | wincent8 |
huggingface/alignment-handbook | 174 | Question about torch_dtype when running run_orpo.py | I have been using `run_orpo.py` with my personal data successfully. However, as I use it, I have a question.
When I look at the code for `run_orpo.py`, I see that there is code to match torch_dtype to the dtype of the pretrained model. However, when I actually train and save the model, even if the pretrained model... | https://github.com/huggingface/alignment-handbook/issues/174 | closed | [] | 2024-06-23T08:28:02Z | 2024-07-30T05:05:03Z | 6 | sylee96 |
huggingface/diffusers | 8,666 | Attention api changes no documentation ? | How can I see your previous changes to attention?
You have renamed the `_slice_size`, `_sliced_attention`, and `_attention` attributes of Attention.
I need to know what the alternatives to them are. | https://github.com/huggingface/diffusers/issues/8666 | closed | [] | 2024-06-23T07:08:58Z | 2024-06-23T11:31:47Z | 4 | xalteropsx |
huggingface/transformers.js | 819 | Blog on walkthrough with transformers js | ### Question
Hey, So I am writing this blog part of sharing knowledge in a blog series called Running AI/ML in the client. I am using transformer js example walkthrough in this part to validate some concepts. Can I get some feedback before it goes live? How do we connect? | https://github.com/huggingface/transformers.js/issues/819 | closed | [
"question"
] | 2024-06-23T06:06:42Z | 2024-06-27T19:10:05Z | null | ArijitCloud |
huggingface/trl | 1,763 | What is the difference between PPOv2Trainer and PPOTrainer? | What is the difference between PPOv2Trainer and PPOTrainer? And in trl\examples\scripts\ppo\ppo.py and trl\examples\scripts\ppo.py, there are two ppo.py files; can you tell me what is different between them? | https://github.com/huggingface/trl/issues/1763 | closed | [] | 2024-06-22T14:48:38Z | 2024-08-24T09:25:52Z | null | mst272 |
pytorch/xla | 7,326 | Dear teachers, I can connect to the internet, but I cannot download torch_xla | pip install torch_xla[tpu]~=2.3.0 -f https://storage.googleapis.com/libtpu-releases/index.html
ERROR: Could not find a version that satisfies the requirement torch_xla~=2.3.0 (from versions: none)
ERROR: No matching distribution found for torch_xla~=2.3.0
| https://github.com/pytorch/xla/issues/7326 | closed | [
"question"
] | 2024-06-21T07:42:57Z | 2025-04-07T12:58:54Z | null | zhangwaer |
huggingface/diffusers | 8,649 | SD3 - num_images_per_prompt no longer honoured (throws error) | ### Describe the bug
With models prior to SD3, the parameter num_images_per_prompt is honoured, enabling generation of several images per prompt. With sd3-medium an error is generated.
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
Not... | https://github.com/huggingface/diffusers/issues/8649 | closed | [
"bug"
] | 2024-06-20T11:28:22Z | 2024-06-29T13:05:28Z | 4 | zagglez |
huggingface/transformers.js | 814 | Consultation on the use of the library with chatbot models | ### Question
Hello, greetings. I am Vladimir, a programmer in a web environment with PHP, JS, and AJAX. First, I apologize for my English; my native language is Latin American Spanish and I am not good at writing English, so I have used a translator. I wanted to ask how I can use this interesting and useful tool to create a chat... | https://github.com/huggingface/transformers.js/issues/814 | open | [
"question"
] | 2024-06-20T03:24:34Z | 2024-07-29T10:47:24Z | null | mate07 |
pytorch/torchtitan | 412 | ImportError in LLaMA Training Script | When attempting to run the training script for LLaMA with the following command:
`CONFIG_FILE="./train_configs/llama3_8b.toml" ./run_llama_train.sh`
an ImportError is encountered. The specific error message is:
`ImportError: cannot import name 'Partial' from 'torch.distributed._tensor' (/apps/torchtitan/torchtitan/l... | https://github.com/pytorch/torchtitan/issues/412 | closed | [
"question"
] | 2024-06-19T17:45:48Z | 2024-07-12T16:06:10Z | null | viai957 |
huggingface/optimum | 1,912 | Could you provide the official onnx model of Qwen-VL-Chat(-Int4)? | ### Feature request
Qwen-VL-Chat(-Int4) is a useful image-to-text model.
### Motivation
Image-to-text LMM models like Qwen-VL-Chat(-Int4) are very useful.
### Your contribution
Not yet. | https://github.com/huggingface/optimum/issues/1912 | open | [
"feature-request",
"quantization"
] | 2024-06-19T08:43:58Z | 2024-10-09T07:52:54Z | 0 | yzq1990 |
pytorch/TensorRT | 2,940 | ❓ [Question] Is there any plan to support bfloat16 compile | ## What you have already tried
NVIDIA TensorRT has already supported `bf16` precision since tensorrt>=9.2:
- https://github.com/NVIDIA/TensorRT/issues/1883
- https://github.com/AmusementClub/vs-mlrt/issues/64
However, the latest torch_tensorrt (`torch_tensorrt==2.3.0 w/ tensorrt==10.0.1`) has not suppor... | https://github.com/pytorch/TensorRT/issues/2940 | closed | [
"question"
] | 2024-06-19T06:05:30Z | 2024-06-25T04:39:59Z | null | leeeizhang |
pytorch/serve | 3,195 | How to send a torch array via request | I want to send a torch (cuda) array via python request to the inference API. Is that possible? | https://github.com/pytorch/serve/issues/3195 | closed | [] | 2024-06-18T21:05:58Z | 2024-06-19T19:21:59Z | null | lschaupp |
huggingface/diffusers | 8,626 | More thorough guidance for multiple IP adapter images/masks and a single IP Adapter | ### Describe the bug
I'm trying to use a single IP adapter with multiple IP adapter images and masks. This section of the docs gives an example of how I could do that: https://huggingface.co/docs/diffusers/v0.29.0/en/using-diffusers/ip_adapter#ip-adapter-masking
The docs provide the following code:
```python
fr... | https://github.com/huggingface/diffusers/issues/8626 | closed | [
"bug",
"stale"
] | 2024-06-18T18:06:37Z | 2024-09-23T11:36:10Z | 11 | chrismaltais |
pytorch/tutorials | 2,939 | [BUG] - is torch.compile necessary to use user defined triton kernel | ### Add Link
https://pytorch.org/tutorials/recipes/torch_compile_user_defined_triton_kernel_tutorial.html
### Describe the bug
I think we can call a Triton kernel with torch.compile.
What do we get when calling a Triton kernel through torch.compile?
### Describe your environment
none
cc @williamwen42 @msaroufim | https://github.com/pytorch/tutorials/issues/2939 | closed | [
"bug",
"question",
"torch.compile"
] | 2024-06-18T16:12:15Z | 2024-06-18T16:41:31Z | null | felixdae |
huggingface/datasets | 6,979 | How can I load partial parquet files only? | I have a HUGE dataset, about 14 TB, and I'm unable to download all the parquet files. I just want to take about 100 of them.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I partially load just shards 000-100 out of the 00314 shards?
I searched the whole net and didn't find a solution; **this is stupid if the... | https://github.com/huggingface/datasets/issues/6979 | closed | [] | 2024-06-18T15:44:16Z | 2024-06-21T17:09:32Z | 12 | lucasjinreal |
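A glob like the one in the snippet only matches shards whose index starts with `001`. To get exactly the first 100 shards, one option is to pass an explicit file list to `data_files` (the repo name below is a placeholder for the dataset in the question):

```python
# Build the first 100 shard names explicitly instead of relying on a glob.
shard_files = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]
print(shard_files[0], "...", shard_files[-1])

# Then (assuming the repo follows this naming scheme):
# from datasets import load_dataset
# dataset = load_dataset("xx/dataset-name", data_files=shard_files, split="train")
```

`load_dataset` then downloads only the listed files rather than the whole repo.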
pytorch/vision | 8,497 | Improve empty import time of torchvision | ### 🚀 The feature
When importing torchvision, a number of libraries are imported by default for more niche functionality of the library. To improve import time, I would favor delaying those imports to when they are needed
### Motivation, pitch
In my case, it is the av library in particular that contributes to the i... | https://github.com/pytorch/vision/issues/8497 | open | [] | 2024-06-18T09:24:43Z | 2024-07-29T12:02:13Z | 3 | bschindler |
huggingface/pytorch-image-models | 2,211 | How to Replicate Official Model Accuracy | Based on the accuracy provided by the official source, how can one replicate and train these models?
For example, for mobilenetv4_hybrid_large.e600_r384_in1k with a top-1 accuracy of 84.266
where can one find the training hyperparameters such as epochs, scheduler, warmup epochs, learning rate, batch size, and ot... | https://github.com/huggingface/pytorch-image-models/issues/2211 | closed | [
"enhancement"
] | 2024-06-18T05:30:59Z | 2024-06-24T23:36:45Z | null | usergxx |
huggingface/chat-ui | 1,290 | ERROR: Exception in ASGI application | Hello everyone, I have the following problem when using Huggingface ChatUI with FastChat. How can I change the configuration? Use npm to start development mode.
Thanks
```
MODELS=`[
{
"name": "Infinirc-7b-Llama2",
"id": "Infinirc-7b-Llama2",
"model": "Infinirc-7b-Llama2",
"parameters": {
... | https://github.com/huggingface/chat-ui/issues/1290 | open | [
"support"
] | 2024-06-18T02:07:50Z | 2024-06-23T13:26:59Z | 1 | rickychen-infinirc |
huggingface/autotrain-advanced | 684 | Where is the fine-tuned model output? | I’m new to using AutoTrain on Hugging Face and I encountered an issue during my first attempt at fine-tuning a model. I have a free account, because I want to see whether I can get something to work before I start paying for training. Here’s a summary of what I did and the problem I’m facing:
Training Configuration:
... | https://github.com/huggingface/autotrain-advanced/issues/684 | closed | [] | 2024-06-17T23:01:53Z | 2024-06-22T03:49:27Z | null | RonPisaturo |
pytorch/torchtitan | 409 | DataLoader state is empty for different ranks ? | Thanks for your amazing work !
We have been testing the llama3_8b model on the SlimPajama dataset. The training seems to be fine based on the loss curves.
However, upon resuming the model from a previous checkpoint, we see the following warnings:
```
16: 2024-06-17 01:22:16,614 - root - WARNING - DataLoader state is... | https://github.com/pytorch/torchtitan/issues/409 | closed | [
"question"
] | 2024-06-17T17:46:42Z | 2024-11-22T00:00:55Z | null | ahatamiz |
huggingface/transformers | 31,453 | How to build and evaluate a vanilla transformer? | ### Model description
"Attention Is All You Need" is a landmark 2017 research paper authored by eight scientists working at Google, responsible for expanding 2014 attention mechanisms proposed by Bahdanau et al. into a new deep learning architecture known as the transformer with an encoder, cross-attention, and a deco... | https://github.com/huggingface/transformers/issues/31453 | closed | [] | 2024-06-17T17:17:11Z | 2024-11-04T13:56:06Z | null | Bachstelze |
huggingface/parler-tts | 74 | What to do with Flan-T5 when I want to finetune based on Mini v0.1 but not from scratch? Flan-T5 cannot handle my language. | https://github.com/huggingface/parler-tts/issues/74 | open | [] | 2024-06-17T06:39:24Z | 2024-06-17T06:39:24Z | null | lyt719 |
huggingface/candle | 2,269 | How to select which GPU to use | We are working with the stable diffusion example. How do we select which GPU device on our system to use for the rendering?
thanks. | https://github.com/huggingface/candle/issues/2269 | open | [] | 2024-06-16T19:53:18Z | 2024-06-21T19:29:31Z | null | donkey-donkey |
pytorch/pytorch | 128,698 | ONNX docs missing info about how to remove custom domains | ### 📚 The doc issue
In the docs about exporting to onnx [here](https://pytorch.org/tutorials/beginner/onnx/export_simple_model_to_onnx_tutorial.html?highlight=torch%20onnx%20dynamo_export) there is not a mention of how to remove the functions. The use of aten operators defined as functions creates a problem when conv... | https://github.com/pytorch/pytorch/issues/128698 | closed | [
"module: onnx",
"module: docs",
"triaged"
] | 2024-06-14T13:01:35Z | 2025-09-07T22:35:57Z | null | Jerry-Master |
huggingface/chat-ui | 1,283 | SELF_SIGNED_CERT_IN_CHAIN | I am experiencing this error. I'm on a corporate VPN and I tried turning it off and still the same error. The TLS reject is set to false as well.
SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error errno SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error request to https://registry.npmjs.org/failed, reason: self-signed certificate... | https://github.com/huggingface/chat-ui/issues/1283 | open | [
"support"
] | 2024-06-14T04:03:48Z | 2024-06-17T06:50:29Z | 2 | solanki-aman |
pytorch/torchtitan | 399 | How to use nsys? | Is there a recommended way to use nsys / nsight? I know there's a profiling hook for using the Pytorch profiler, but I'm wondering how to use nsys instead.
Can I use these APIs:
```
with torch.autograd.profiler.emit_nvtx():
profiler.start()
y = x.view(1, -1)
z = x.to(memory_format=torch.channels_las... | https://github.com/pytorch/torchtitan/issues/399 | closed | [
"enhancement"
] | 2024-06-13T18:14:52Z | 2024-11-22T00:00:02Z | null | vedantroy |
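A hedged sketch of the APIs asked about above: `emit_nvtx()` annotates autograd ops with NVTX ranges, and the cudart profiler start/stop calls bound the capture when nsys is launched with `--capture-range=cudaProfilerApi`. The workload function is a stand-in for real training code.

```python
import torch

def work(x):
    # stand-in workload whose ops should appear as NVTX ranges in the timeline
    y = x.view(1, -1)
    return y * 2

if torch.cuda.is_available():
    # Run the script as:
    #   nsys profile --capture-range=cudaProfilerApi --capture-range-end=stop python train.py
    with torch.autograd.profiler.emit_nvtx():
        torch.cuda.profiler.start()   # nsys begins capturing here
        out = work(torch.ones(4, device="cuda")).cpu()
        torch.cuda.profiler.stop()    # ...and stops capturing here
else:
    out = work(torch.ones(4))         # NVTX annotation needs a CUDA device
print(tuple(out.shape))
```

Custom ranges can also be pushed manually with `torch.cuda.nvtx.range_push("step")` / `torch.cuda.nvtx.range_pop()` to label individual training steps.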
huggingface/diffusers | 8,527 | how to add controlnet in sd3! | I currently use inpainting controlnet in sdxl because it uses unet to easily support controlnet. And I am curious about how to add controlnet in sd3 with the transformer model structure. | https://github.com/huggingface/diffusers/issues/8527 | closed | [] | 2024-06-13T10:14:38Z | 2024-08-24T04:20:28Z | null | appleyang123 |
huggingface/lerobot | 266 | Question - how to handle additional sensory input | Hi guys, sorry to bother you again :wink:
and thanks for your work, I'm very excited by Lerobot!
I'm currently collecting some teleop data where the robot has tactile sensors on the fingertips, as well as a FT sensor on the wrist and I was wondering how I would integrate this best into a Lerobot Dataset.
One... | https://github.com/huggingface/lerobot/issues/266 | closed | [
"question",
"dataset",
"stale"
] | 2024-06-13T08:39:26Z | 2025-10-23T02:29:29Z | null | tlpss |
huggingface/nanotron | 196 | how to run benchmark tests | Hi,
I can build this project with your commands, but there is no "pyaottriton" when running the benchmark tests like benchmark_forward.py or benchmark_backward.py.
Did I miss anything?
Thanks | https://github.com/huggingface/nanotron/issues/196 | closed | [] | 2024-06-13T08:31:06Z | 2024-06-13T08:38:24Z | null | jinsong-mao |