| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/torchtitan | 803 | Gradient Scaling With Pipeline Parallelism | The idiomatic way to perform gradient scaling is something like this:
```python
preds = model(inputs)
loss = loss_fn(preds, targets)
scaler.scale(loss).backward()
```
Given that the current PyTorch PP API handles the backward pass *internally*, I find it difficult to do gradient scaling under a PP regime.
```python
i... | https://github.com/pytorch/torchtitan/issues/803 | open | [
"question",
"module: pipelining"
] | 2025-01-24T12:16:16Z | 2025-02-06T23:28:00Z | null | windsornguyen |
huggingface/trl | 2,642 | How to stop `SFTTrainer` from auto tokenizing my messages? | I want to tokenize my text in a custom way in a custom data collator, but for some reason I don't know, the data keeps being auto tokenized.
I passed `processing_class=None` to stop this but nothing changed. How can I stop the auto tokenization process? | https://github.com/huggingface/trl/issues/2642 | closed | [
"❓ question",
"🏋 SFT"
] | 2025-01-24T02:58:26Z | 2025-02-18T18:59:42Z | null | MohamedAliRashad |
pytorch/xla | 8,617 | Single core of TPU gives inference results different than the CPU results | # Description
I encountered an issue when using PyTorch XLA to train a model on TPU. My main code gives different results than training with CPU or GPU, so I decided to check using a toy example and found that prediction using PyTorch XLA gives results different from prediction using CPU.
I also tried to check using p... | https://github.com/pytorch/xla/issues/8617 | closed | [
"duplicate",
"xla:tpu"
] | 2025-01-23T21:47:15Z | 2025-02-06T14:39:41Z | 1 | mohamedamara7 |
pytorch/tutorials | 3,254 | How to download the pretrained word language quantized model? | In the word language quantized model tutorial, we assume we already have a pretrained model.
But where can we download the model?
https://github.com/pytorch/tutorials/blob/main/advanced_source/dynamic_quantization_tutorial.py#L151-L157 | https://github.com/pytorch/tutorials/issues/3254 | closed | [
"easy",
"docathon-h1-2025"
] | 2025-01-23T20:29:10Z | 2025-06-04T21:05:05Z | null | Achilles718611 |
huggingface/diffusers | 10,637 | Issues with FlowMatchEulerDiscreteScheduler.set_timesteps() | ### Describe the bug
Why does `num_inference_steps` have the default `None`? It's not an `Optional`. It cannot be `None`. This leads to weird error messages if you skip this parameter.
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_... | https://github.com/huggingface/diffusers/issues/10637 | closed | [
"bug"
] | 2025-01-23T20:22:51Z | 2025-02-16T15:29:08Z | 4 | dxqb |
huggingface/transformers.js | 1,165 | Releasing the Florence 2 ONNX conversion script? | ### Question
Hi,
This might not be the correct place to raise this issue, but I have not found a better option. There have been many requests of people trying to use their tuned Florence 2 models here and in other repos (https://github.com/huggingface/transformers.js/issues/815#issuecomment-2217220254, https://github... | https://github.com/huggingface/transformers.js/issues/1165 | closed | [
"question"
] | 2025-01-23T11:35:05Z | 2025-03-31T10:02:53Z | null | ir2718 |
huggingface/transformers | 35,853 | How to load a model directly into the GPU memory? | I have enough GPU memory, but not enough CPU memory. When I use the
"from_pretrained" function, the program gets killed due to insufficient memory. | https://github.com/huggingface/transformers/issues/35853 | closed | [] | 2025-01-23T09:47:04Z | 2025-01-23T15:19:01Z | null | LiBai531 |
huggingface/nanotron | 273 | What is the purpose of "task" | What is the purpose of the "tasks" argument in this line?
https://github.com/huggingface/nanotron/blob/9055c664c28a3b430b4e53bfcb5a074068c90f2a/tools/preprocess_data.py#L102C9-L102C28
Thanks | https://github.com/huggingface/nanotron/issues/273 | open | [] | 2025-01-23T09:44:35Z | 2025-02-07T17:09:12Z | null | laiviet |
huggingface/transformers.js | 1,164 | `onnxruntime-node` uncompressed too large for NextJS 15 API routes | ### Question
Hello! I'm trying to deploy `xenova/bge-small-en-v1.5` locally to embed text in a Next 15 API route, but I'm encountering this error with the route's unzipped max size exceeding 250 MB. Wanted to check in to see if there's some error on my side? Doesn't seem like `onnxruntime-node` should be ~720 MB unco... | https://github.com/huggingface/transformers.js/issues/1164 | open | [
"question"
] | 2025-01-23T03:28:16Z | 2025-10-22T20:42:41Z | null | raymondhechen |
huggingface/smolagents | 322 | How to capture CodeAgent's full thinking, including the code, not just the final response, into a variable | When we run a CodeAgent in a notebook, it prints the question/task, the LLM model used, the code (Executing this code, Execution logs) and the Final answer.
The return value from agent.run contains only the final response.
I'm working on some demos for which I wanted to run a number of tasks, capture all the output (no... | https://github.com/huggingface/smolagents/issues/322 | open | [] | 2025-01-23T02:50:34Z | 2025-01-23T13:17:49Z | null | KannamSridharKumar |
pytorch/torchtitan | 801 | [Possible Bug] RoPE here is GPT-J style instead of NeoX/Llama style? | I might miss something so please let me know if I do, and in this case I will close the issue.
As we know, GPT-J and NeoX/Llama apply RoPE slightly differently (per hugging face implementation):
- the way GPT-J treats `q, k` as "complex tensor" is an interleaving style: `[q_0_real, q_0_imaginary, q_1_real, q_1_imagina... | https://github.com/pytorch/torchtitan/issues/801 | closed | [] | 2025-01-22T23:32:36Z | 2025-01-22T23:58:48Z | 1 | honglu2875 |
huggingface/smolagents | 312 | How to exec a bin and use the output as an agent arg? | Hi,
A simple exec tool such as exec(path, [args]) should be in the examples,
then an agent call such as "use exec(/bin/ls,/bin), put the result in a sql db (as bin-name) for later use, and tell me how many of them are scripts while using sbx -z on each non-script",
as a short example | https://github.com/huggingface/smolagents/issues/312 | open | [] | 2025-01-22T12:55:22Z | 2025-01-22T12:55:22Z | null | malv-c |
pytorch/text | 2,279 | Could we have Android (Termux) Support? | # In the name of God, the Most Gracious, the Most Merciful. To proceed: prayers and peace be upon our master Muhammad and all his family.
## Feature/Issue
* Building this project on mobile is pretty hard because it uses ninja, which tries to build everything concurrently; this made my phone hang for a few minutes and then the process was OOM-killed.
* Also it tries the way th... | https://github.com/pytorch/text/issues/2279 | open | [] | 2025-01-22T05:22:38Z | 2025-01-22T08:45:23Z | 0 | TunifyBasic |
huggingface/datatrove | 326 | How to choose the best timeout value in extractors? | Hi,
I do not know how to choose the best timeout threshold for running the extractor. Shouldn't this threshold be hardware-aware? | https://github.com/huggingface/datatrove/issues/326 | open | [] | 2025-01-22T03:14:58Z | 2025-02-10T09:53:03Z | null | jordane95 |
huggingface/datasets | 7,377 | Support for sparse arrays with the Arrow Sparse Tensor format? | ### Feature request
AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**.
Arrow has support for sparse tensors.
https://arrow.apache.org/docs/format/Other.html#sparse-tensor
It would be ... | https://github.com/huggingface/datasets/issues/7377 | open | [
"enhancement"
] | 2025-01-21T20:14:35Z | 2025-01-30T14:06:45Z | 1 | JulesGM |
huggingface/peft | 2,339 | Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named \u0027peft.utils.config\u0027" error | ### System Info
Hello,
I'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.
This n... | https://github.com/huggingface/peft/issues/2339 | closed | [] | 2025-01-21T20:00:07Z | 2025-03-02T15:03:46Z | 2 | incchar |
huggingface/smolagents | 298 | How to pass images as input to CodeAgent? | Hello,
I want to pass an input image along with the prompt to `CodeAgent.run`. I see that there is an `additional_args` argument but when I pass the image as `{"image": "path/to/image.png"}`, the agent ends up loading the image via pytesseract to read the contents of the image instead of passing it to OpenAI/Anthropic... | https://github.com/huggingface/smolagents/issues/298 | closed | [] | 2025-01-21T17:14:27Z | 2025-02-18T18:41:27Z | null | DarshanDeshpande |
huggingface/lerobot | 650 | use a camera | can I use a camera to collect and train? | https://github.com/huggingface/lerobot/issues/650 | closed | [
"question"
] | 2025-01-21T10:35:02Z | 2025-04-07T15:53:26Z | null | lwx2024 |
huggingface/transformers | 35,807 | How to change data | https://huggingface.co/facebook/rag-token-nq
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
# should give michael phelps => sounds reasonable
My attempts:
https://github.com/kim90000/Attempts-with-facebook-rag-token-nq/blob/main/README.md | https://github.com/huggingface/transformers/issues/35807 | closed | [] | 2025-01-21T06:17:09Z | 2025-02-28T08:03:38Z | null | kim90000 |
pytorch/vision | 8,871 | SE module is missing in 'class FusedMBConv', 'efficientnet.py'. Is there a reason for it? | According to the paper, the FusedMBConv block has an SE module. But I can't find it in the code. | https://github.com/pytorch/vision/issues/8871 | closed | [] | 2025-01-21T05:38:16Z | 2025-01-30T11:34:06Z | 5 | Morris-Chen007 |
huggingface/accelerate | 3,356 | How to configure accelerate on 2 Mac machines | https://huggingface.co/docs/accelerate/usage_guides/distributed_inference
I used accelerate config, and when I run the model it blocks and then fails with an error saying it cannot connect to the IP and port.
Who can help me? | https://github.com/huggingface/accelerate/issues/3356 | closed | [] | 2025-01-20T11:35:35Z | 2025-02-25T02:20:41Z | null | hsoftxl |
huggingface/transformers.js | 1,160 | How to use sentence-transformers/static-similarity-mrl-multilingual-v1 model? | ### Question
If I try to use `sentence-transformers/static-similarity-mrl-multilingual-v1`, it fails on `tokenizer.json` not found. Is it possible to somehow convert the model to use it? The ONNX runtime is already there. | https://github.com/huggingface/transformers.js/issues/1160 | open | [
"question"
] | 2025-01-19T15:09:18Z | 2025-01-19T17:27:49Z | null | michalkvasnicak |
huggingface/diffusers | 10,606 | pred_original_sample in FlowMatchEulerDiscreteScheduler | Will pred_original_sample be supported in FlowMatchEulerDiscreteScheduler? How to get predicted x_0? | https://github.com/huggingface/diffusers/issues/10606 | closed | [] | 2025-01-19T10:02:22Z | 2025-02-14T12:21:33Z | 2 | haofanwang |
pytorch/vision | 8,868 | torchvision version 0.14.0 with cuda version 11.6 support wheel file suddenly disappeared from download.pytorch.org | Dear Community team,
I have been using pytorch 1.13.0 and torchvision version 0.14.0 with cuda version 11.6 for my application (pytorch 2.x is not working for my app and torchvision 0.15 does not support pytorch 1.x).
I was embarrassed to find out that torchvision version 0.14.0 with cuda 11.6 has disappeared all... | https://github.com/pytorch/vision/issues/8868 | closed | [] | 2025-01-19T09:28:21Z | 2025-01-20T00:40:31Z | 0 | chulminkw |
pytorch/torchtitan | 797 | What is the point of the first part of this assertion | Why do we need to `assert 0 <= 1`?
https://github.com/pytorch/torchtitan/blob/d9898423ecef131825d13c6c8b521a24e889785f/torchtitan/models/llama/model.py#L79 | https://github.com/pytorch/torchtitan/issues/797 | closed | [] | 2025-01-19T07:05:24Z | 2025-01-19T15:30:25Z | null | gameofdimension |
huggingface/transformers.js | 1,157 | When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress? | ### Question
When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress?
```bash
npm i kokoro-js
```
```typescript
const model_id = "onnx-community/Kokoro-82M-ONNX";
const tts = await KokoroTTS.from_pretrained(model_id, {
dtype: "q8", // Options: "fp32", "fp16", "q8", "q4", "q4... | https://github.com/huggingface/transformers.js/issues/1157 | closed | [
"question"
] | 2025-01-18T03:36:28Z | 2025-10-13T04:46:59Z | null | emojiiii |
pytorch/executorch | 7,732 | Be able to install ET where torch is compiled from source instead of prebuilt (e.g., nightly, release) | Be able to install ET where torch is compiled from source instead of prebuilt (e.g., nightly, release).
There are a few use-cases where this is useful:
- If there are cross-dependencies between core and ET that need to progress in lockstep, then we need to be able to install ET and test against locally compiled core.
- S... | https://github.com/pytorch/executorch/issues/7732 | closed | [
"triaged",
"module: user experience"
] | 2025-01-17T18:31:51Z | 2025-07-28T11:34:10Z | null | mergennachin |
pytorch/xla | 8,588 | Run XLA container with DDP in Vertex AI | ## ❓ Questions and Help
Hey there! I prepared a Docker container that trains a model using DDP, which works fine in a TPU VM. However, when I run the training job in Vertex AI, it fails. I suspect it's because the `--privileged --net host --shm-size=16G` parameters are not available for the container in Vertex AI. Is t... | https://github.com/pytorch/xla/issues/8588 | closed | [] | 2025-01-17T11:22:13Z | 2025-01-27T09:55:30Z | 1 | SteshinSS |
huggingface/transformers.js | 1,154 | Text generation pipeline memory spike | ### Question
## Description
The text generation pipeline has a memory spike at the start of every generation request from the instance and settles down after a few seconds. We tested this in a lower VRAM and system memory environment and it failed to generate anything because of this issue. It also generates nonsensical... | https://github.com/huggingface/transformers.js/issues/1154 | open | [
"question"
] | 2025-01-17T06:30:06Z | 2025-02-07T03:18:49Z | null | ashen007 |
pytorch/xla | 8,587 | [torch_xla2] Wire `torch_xla2.compile`d function with torch `AutogradFunction` | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Currently if we wrap with model with `torch_xla2.compile` and want to train the model using the traditional torch training loop similar to https://github.com/pytorch/xla/blob/master/experimental/torch_xla2/examples/basic_training.py
You wou... | https://github.com/pytorch/xla/issues/8587 | open | [
"enhancement",
"torchxla2"
] | 2025-01-17T01:18:27Z | 2025-02-11T12:19:27Z | 0 | qihqi |
huggingface/datasets | 7,372 | Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets | ### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.fr... | https://github.com/huggingface/datasets/issues/7372 | open | [] | 2025-01-16T05:47:20Z | 2025-01-16T05:47:20Z | 0 | gaohongkui |
pytorch/kineto | 1,028 | Need help: how to write trace files to remote storage | Recently, we deployed dynolog in our gpu cluster to collect trace files via kineto on-demand profiling. It takes extra effort to collect trace files dumped to local storage via `kineto` for distributed applications. We saw that kineto supports dumping trace files to remote storage in https://github.com/faceboo... | https://github.com/pytorch/kineto/issues/1028 | open | [] | 2025-01-16T03:52:48Z | 2025-03-11T20:39:30Z | null | staugust |
pytorch/torchtitan | 790 | should we have an extension point for model transforms out of tree? | In [torchao](https://github.com/pytorch/ao), we have various low precision training features which are in prototype: MX, int8, bitnet. While we expect most of these to eventually end up in the main torchao APIs, it often takes ~months for a prototype to graduate.
torchtitan is extremely useful for helping us test low... | https://github.com/pytorch/torchtitan/issues/790 | closed | [
"enhancement"
] | 2025-01-15T19:26:32Z | 2025-02-26T06:45:52Z | 17 | vkuzo |
pytorch/pytorch | 144,847 | torch.compile(): In my use case of calling torch.compile(), I have found that the model's data outputs are inconsistent. I suspect that using Triton for operator fusion may have introduced precision deviations. I am unsure how to locate and fix this issue. | ### 🐛 Describe the bug
"My Torch environment is as follows:
2.2.2+cu121
My goal is to use functions related to torch.compile() to optimize the inference time of our model. In fact, it does work and achieves over a 50% reduction in inference time in the default mode.
The model code is as follows:
`"""
copy from htt... | https://github.com/pytorch/pytorch/issues/144847 | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-01-15T07:35:30Z | 2025-04-22T11:18:54Z | null | liangshaopeng |
pytorch/vision | 8,854 | Local Windows Torchvision Build fails | I am trying to locally build torchvision in a conda environment on my cpu-only windows laptop, and even though the build seems to be successful, when I try to import the torchvision package, it fails with this error: **RuntimeError: operator torchvision::nms does not exist**. I tried multiple times (with different versio... | https://github.com/pytorch/vision/issues/8854 | closed | [] | 2025-01-14T10:06:38Z | 2025-02-19T11:58:25Z | 1 | alinpahontu2912 |
huggingface/safetensors | 561 | Feature Request: Support for Ellipsis (...) in Indexing | ### Feature request
Thank you very much for your effort in maintaining this great project!
I’m writing to request the addition of support for ellipsis (...) in `safetensor.safe_open` indexing functionality. This would enhance usability and align SafeTensor’s API more closely with the standard Python indexing conventi... | https://github.com/huggingface/safetensors/issues/561 | open | [] | 2025-01-14T05:13:54Z | 2025-01-14T05:13:54Z | 0 | csaybar |
huggingface/diffusers | 10,566 | Unnecessary operations in `CogVideoXTransformer3DModel.forward()`? | ### Describe the bug
Here are a few lines of code in `CogVideoXTransformer3DModel.forward()`:
```py
# 3. Transformer blocks
...
if not self.config.use_rotary_positional_embeddings:
# CogVideoX-2B
hidden_states = self.norm_final(hidden_states)
else:
# ... | https://github.com/huggingface/diffusers/issues/10566 | closed | [
"bug",
"stale"
] | 2025-01-14T04:01:20Z | 2025-02-13T22:11:26Z | 2 | townwish4git |
huggingface/diffusers | 10,565 | Different generation with `Diffusers` in I2V tasks for LTX-video | ### Describe the bug
Hello, I encountered an issue with the generation when attempting the I2V task using `Diffusers`. Is there any difference between the `diffusers` implementation and the `LTX-video-inference scripts` in the I2V task?
- The above is the result from the `inference.py`, and the following is the resu... | https://github.com/huggingface/diffusers/issues/10565 | open | [
"bug",
"stale"
] | 2025-01-14T03:24:06Z | 2025-09-09T07:21:31Z | 11 | Kaihui-Cheng |
huggingface/transformers.js | 1,146 | Why do the local models keep downloading every day? | ### Question
Every day when I come back to chat with the local models via transformers.js, it downloads the models again. Can't I persist the downloaded models so that I can chat with them instantly anytime?
Thank you. | https://github.com/huggingface/transformers.js/issues/1146 | closed | [
"question"
] | 2025-01-14T02:56:34Z | 2025-01-18T15:11:09Z | null | Nithur-M |
huggingface/chat-ui | 1,646 | Inline audio/video in the output | If a model returns markdown content with an image (``), the chat-ui will display the image inline.
Is there something similar for audio and video? How can a model return audio or video content to the user?
I don't know if this is currently supported or not.
(I'm using the OpenAI endpoint)
btw, ... | https://github.com/huggingface/chat-ui/issues/1646 | open | [
"enhancement"
] | 2025-01-14T01:20:54Z | 2025-02-28T11:32:48Z | 1 | laurentlb |
huggingface/lerobot | 633 | [Question] How to set training to a local dataset? | Is there a way to train on a local dataset without manually adding the `local_files_only` arg to the `make_dataset` function of the train script?
I have set the `LEROBOT_HOME` env variable. | https://github.com/huggingface/lerobot/issues/633 | closed | [
"question",
"dataset"
] | 2025-01-13T15:27:00Z | 2025-10-08T08:37:55Z | null | tlpss |
huggingface/lerobot | 630 | Removing episodes from LeRobotDataset | Hi, thanks for building this. It's great.
Is there a way to easily remove episodes from a dataset? I had a decent amount of diversity in my episodes and wanted to reduce it, so I had to remove ~1/2 of the episodes. Rather than rerecording them, I wanted to remove specified episodes (let's say all even episodes). Is ... | https://github.com/huggingface/lerobot/issues/630 | closed | [
"question",
"dataset",
"stale"
] | 2025-01-13T01:22:32Z | 2025-10-17T12:07:56Z | null | andlyu |
huggingface/safetensors | 559 | serialize & deserialize do not work as the documentation specifies. | ### System Info
- `transformers` version: 4.42.3
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.5.2
- Accelerate version: 0.27.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Tensorfl... | https://github.com/huggingface/safetensors/issues/559 | open | [] | 2025-01-12T20:22:57Z | 2025-01-12T20:23:18Z | 0 | csaybar |
huggingface/transformers.js | 1,142 | Make in-browser WebGPU as seamless as in WebLLM | ### Question
Hi there! 👋
I've noticed something interesting about WebGPU support in browsers:
✅ [WebLLM's demo](https://chat.webllm.ai/) detects and uses my GPU automatically
❌ [transformers.js examples](https://huggingface.co/spaces/Xenova/nanollava-1.5-webgpu) fail with:
```Error: no available backend found... | https://github.com/huggingface/transformers.js/issues/1142 | closed | [
"question"
] | 2025-01-12T15:06:17Z | 2025-01-27T11:45:03Z | null | Anna-iroro |
huggingface/peft | 2,322 | model merge and unload feature for AdaLora | ### Feature request
Unlike the Lora or IA3 adapter types, AdaLora does not provide a method to merge lora adapter weights into the original weights so that the model can be used standalone. I made that feature for a personal use case and want to make a PR to make this feature accessible to everyone.
### Motivation
This f... | https://github.com/huggingface/peft/issues/2322 | closed | [] | 2025-01-12T09:20:01Z | 2025-01-14T12:47:35Z | 6 | DaehanKim |
huggingface/sentence-transformers | 3,166 | How to report a security issue responsibly? | I have just found a potential security issue in the repo and want to know how I can report it to your team privately, thanks! | https://github.com/huggingface/sentence-transformers/issues/3166 | closed | [] | 2025-01-12T04:24:15Z | 2025-01-12T08:52:43Z | null | zpbrent |
pytorch/vision | 8,848 | ValueError for Image size: Height 480 , Width 854 in RAFT | ### 🐛 Describe the bug
...
device = "cuda" if torch.cuda.is_available() else "cpu"
raft_model = raft_small(pretrained=True, progress=False).to(device)
raft_model = raft_model.eval()
transform = transforms.ToTensor()
with torch.no_grad():
list_of_flows = raft_model(old_batch.to(device), new_batch.to(device))... | https://github.com/pytorch/vision/issues/8848 | closed | [] | 2025-01-11T18:24:13Z | 2025-03-18T12:20:48Z | 1 | Neoyning |
pytorch/torchtitan | 785 | Why use RowwiseParallel for nn.Embedding instead of ColwiseParallel? | Colwise makes the logic a bit more clear. Rowwise splits on the token dimension, leading to confusion on how the different shards handle tokens that are not present within their shard. From a bit of debugging it seems like there is a special case for this somewhere deep in pytorch source code, but I could not find it.
... | https://github.com/pytorch/torchtitan/issues/785 | open | [
"question"
] | 2025-01-10T15:16:34Z | 2025-08-21T03:04:35Z | null | ghost |
huggingface/datasets | 7,365 | A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas() | ### Describe the bug
I am interested in creating train, test and eval splits from a pandas DataFrame, so I was looking at the possibilities I can follow. I noticed the split parameter and was hopeful to use it in order to generate the 3 at once; however, while trying to understand the code, I noticed that it ha... | https://github.com/huggingface/datasets/issues/7365 | open | [] | 2025-01-10T13:39:33Z | 2025-01-10T13:39:33Z | 0 | NourOM02 |
pytorch/TensorRT | 3,351 | ❓ [Question] How to install torch_tensorrt corresponding to the pytorch/tensorrt version | For example, I am using pytorch 2.2.1 and tensorrt 10.2.0; how can I install torch_tensorrt (without changing the pytorch and tensorrt versions)? | https://github.com/pytorch/TensorRT/issues/3351 | open | [
"question"
] | 2025-01-10T07:12:50Z | 2025-01-15T23:47:47Z | null | swearirh |
huggingface/peft | 2,319 | Import error, is it a version issue? | ### System Info
When I execute the finetune.py file, an error occurs as follows: cannot import name 'prepare_model_for_int8_training'. Is it a version issue? My version is 0.14.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An of... | https://github.com/huggingface/peft/issues/2319 | closed | [] | 2025-01-10T02:34:52Z | 2025-01-13T10:13:18Z | 3 | zhangyangniubi |
pytorch/audio | 3,870 | SQUIM running in real-time | I applied SQUIM to assess speech quality as a way to correct the direction-of-arrival of a location-based speech enhancement system. [More info here](https://www.sciencedirect.com/science/article/pii/S1051200424005840).
I'm feeding the last 3-second window of the input to SQUIM, every 0.1 seconds. It is able to resp... | https://github.com/pytorch/audio/issues/3870 | open | [] | 2025-01-09T19:43:35Z | 2025-01-09T19:43:35Z | 0 | balkce |
huggingface/Google-Cloud-Containers | 138 | entrypoint.sh for TGI does not implemented requirements.txt installation process | Hello team,
Like this sample, https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/pytorch/inference/gpu/2.3.1/transformers/4.46.1/py311/entrypoint.sh
The entrypoint needs a requirements.txt provisioning process.
But this TGI sample does not contain that procedure.
https://github.com... | https://github.com/huggingface/Google-Cloud-Containers/issues/138 | closed | [
"question"
] | 2025-01-09T08:09:14Z | 2025-01-21T07:44:52Z | null | jk1333 |
huggingface/lerobot | 623 | Why different dimensionality state tensor with n_obs_steps vs not? | Curious about a design decision: why not have ACT use a [batch, n_obs_steps, state_dim] tensor and assert that n_obs_steps is length 1, instead of [batch, state_dim]?
Currently, we have to detect the different dimensionality and handle it when we're writing policy-agnostic code | https://github.com/huggingface/lerobot/issues/623 | closed | [
"question",
"policies",
"stale"
] | 2025-01-08T18:16:51Z | 2025-10-19T02:32:27Z | null | genemerewether |
pytorch/TensorRT | 3,348 | ❓ [Question] How to save tensorrt engine ? | ## ❓ Question
<!-- Your question -->
## I have already saved a torch.jit model and run inference with the pytorch backend successfully. I tried to find examples in the project and issues, but I cannot find any case, code, example, or tutorial showing how to save a tensorrt engine for running with the tensorrt backend. Can you help me?... | https://github.com/pytorch/TensorRT/issues/3348 | open | [
"question"
] | 2025-01-08T12:24:23Z | 2025-01-08T15:10:27Z | null | lzcchl |
huggingface/diffusers | 10,496 | NF4 quantized flux models with loras | Is there any update here? With nf4 quantized flux models, I could not use any lora.
> **Update**: NF4 serialization and loading are working fine. @DN6 let's brainstorm how we can support it more easily? This would help us unlock doing LoRAs on the quantized weights, too (cc: @BenjaminBossan for PEFT). ... | https://github.com/huggingface/diffusers/issues/10496 | closed | [] | 2025-01-08T11:41:01Z | 2025-01-13T19:42:03Z | 12 | hamzaakyildiz |
pytorch/torchchat | 1,453 | Unable to import torchao experimental quant_api | ### 🐛 Describe the bug
So I tried to export my model and quantize it into a .pte file using this command:
python3 torchchat.py export llama3.2-1b-instruct --quantize torchchat/quant_config/mobile.json --output-pte-path llama3.2_1b_instruct.pte
Before doing this, I had already activated the venv and executorch env,
but I got... | https://github.com/pytorch/torchchat/issues/1453 | closed | [] | 2025-01-08T11:05:43Z | 2025-01-10T12:43:55Z | 1 | Arthamna |
pytorch/torchchat | 1,452 | Why does Torchchat use MATH as the SDPA backend? | ### 🐛 Describe the bug
Hi maintainers,
I find that Torchchat uses MATH as the SDPA backend in https://github.com/pytorch/torchchat/blob/main/torchchat/generate.py#L542. However, other libs like vllm all accept flash attention as the default backend.
So why does Torchchat use MATH as the default backend? Is this r... | https://github.com/pytorch/torchchat/issues/1452 | closed | [
"enhancement",
"triaged"
] | 2025-01-08T08:40:03Z | 2025-01-22T01:57:41Z | 8 | yanbing-j |
huggingface/diffusers | 10,489 | Bug in SanaPipeline example? | ### Describe the bug
I think there might be something wrong with the `SanaPipeline` example code at https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline
It results in a shape mismatch (see detailed logs below): `mat1 and mat2 shapes cannot be multiplied (600x256000 and 2304x1152)`... | https://github.com/huggingface/diffusers/issues/10489 | closed | [
"bug"
] | 2025-01-07T17:14:27Z | 2025-01-08T05:18:05Z | 2 | geronimi73 |
pytorch/pytorch | 144,324 | FSDP: How to support w8a8 quantization? | ### 🐛 Describe the bug
I replaced nn.Linear with QuantLinear, substituting the nn.Linear operator with an int8 quantized operator.
act_tensor_int8, pertoken_scale = torch_npu.npu_dynamic_quant(x)
quant_out = torch_npu.npu_quant_matmul(act_tensor_int8,
self.weight.to(... | https://github.com/pytorch/pytorch/issues/144324 | closed | [
"triaged",
"module: fsdp",
"oncall: pt2"
] | 2025-01-07T13:17:02Z | 2025-07-02T08:19:36Z | null | Lenan22 |
huggingface/distil-whisper | 164 | How to finetune distil-whisper/distil-large-v2 model? | How to finetune distil-whisper/distil-large-v2 model? | https://github.com/huggingface/distil-whisper/issues/164 | open | [] | 2025-01-07T12:59:42Z | 2025-01-07T13:00:59Z | null | dhattareddy |
pytorch/xla | 8,541 | Slow XLA training performance. | ## ❓ Questions and Help
I'm evaluating PyTorch-XLA for training, but noticed that there is a big degradation in performance compared to the native pytorch device. Is it a known problem, or is there a problem with the way I use PyTorch-XLA? I tested a simple MNIST training example, comparing the performance between ... | https://github.com/pytorch/xla/issues/8541 | open | [
"performance",
"xla:gpu"
] | 2025-01-07T09:49:12Z | 2025-02-11T13:50:46Z | 4 | tzstoyanov |
huggingface/doc-builder | 539 | How to Deploy huggingface/doc-builder Artifacts to GitHub Pages? | Hi,
I am currently working with the `huggingface/doc-builder` and I'm looking to deploy the generated documentation artifacts to GitHub Pages. Could you provide guidance or best practices on how to achieve this?
Specifically, I am interested in understanding:
1. The steps required to configure the deployment p... | https://github.com/huggingface/doc-builder/issues/539 | open | [] | 2025-01-07T08:37:05Z | 2025-01-07T08:37:05Z | null | shunk031 |
huggingface/peft | 2,310 | Comparison of Different Fine-Tuning Techniques for Conversational AI | ### Feature request
It would be incredibly helpful to have a clear comparison or support for various fine-tuning techniques specifically for conversational AI. This feature could include insights into their strengths, limitations, and ideal use cases, helping practitioners choose the right approach for their needs.
... | https://github.com/huggingface/peft/issues/2310 | open | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-01-07T07:07:50Z | 2025-12-15T09:58:10Z | 44 | ImamaDev |
huggingface/smolagents | 83 | How to save/extract executed code | Is it possible to save the executed code? It's already in the log. It will be very useful.
ex.
```
╭─ Executing this code: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ 1 attractions_list = [ ... | https://github.com/huggingface/smolagents/issues/83 | closed | [] | 2025-01-06T15:40:17Z | 2025-02-16T17:43:40Z | null | Lodimup |
huggingface/diffusers | 10,475 | [SD3] The quality of the images generated by inference is not as high as on the validation set during fine-tuning? | ### Describe the bug
Why is the quality of the images I generate with `StableDiffusion3Pipeline` not as good as the quality of the images in the validation set in the log generated when using dreambooth_lora for fine-tuning?
Maybe I need some other plugin or parameter setting to maintain the same image quality as the... | https://github.com/huggingface/diffusers/issues/10475 | closed | [
"bug",
"stale"
] | 2025-01-06T14:52:57Z | 2025-02-06T12:17:47Z | 8 | ytwo-hub |
huggingface/datasets | 7,356 | How about adding a feature to pass the key when performing map on DatasetDict? | ### Feature request
Add a feature to pass the key of the DatasetDict when performing map
### Motivation
I often preprocess using map on DatasetDict.
Sometimes, I need to preprocess train and valid data differently depending on the task.
So, I thought it would be nice to pass the key (like train, valid) when perf... | https://github.com/huggingface/datasets/issues/7356 | closed | [
"enhancement"
] | 2025-01-06T08:13:52Z | 2025-03-24T10:57:47Z | null | jp1924 |
huggingface/diffusers | 10,468 | What is accelerate_ds2.yaml? | I can't find an accelerate config file named "accelerate_ds2.yaml".
Please give me the file.
Thanks very much! | https://github.com/huggingface/diffusers/issues/10468 | closed | [] | 2025-01-06T07:53:06Z | 2025-01-12T05:32:01Z | null | aa327chenge |
huggingface/transformers | 35,523 | How about adding a combined step and epoch feature to save_strategy? | ### Feature request
Add epoch+steps functionality to save_strategy
### Motivation
I often set save_strategy to epoch for saving, but sometimes I need to run experiments with steps.
Recently, I had to compare checkpoints saved at both epoch and step intervals, which required running the experiment twice and was qui... | https://github.com/huggingface/transformers/issues/35523 | closed | [
"Feature request"
] | 2025-01-06T02:21:22Z | 2025-02-17T00:02:42Z | null | jp1924 |
huggingface/transformers | 35,512 | Perhaps your features (`videos` in this case) have excessive nesting (inputs type `list` where type `int` is expected). | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.46.1
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accele... | https://github.com/huggingface/transformers/issues/35512 | closed | [
"bug"
] | 2025-01-05T06:51:26Z | 2025-02-13T08:45:39Z | null | yxy-kunling |
huggingface/diffusers | 10,452 | pipe.disable_model_cpu_offload | **Is your feature request related to a problem? Please describe.**
If I enable the following in the Gradio interface:
sana_pipe.enable_model_cpu_offload()
and during the next generation I want to disable cpu offload, how do I do it? I mentioned Gradio specifically because command line inference will not have this problem unless... | https://github.com/huggingface/diffusers/issues/10452 | closed | [] | 2025-01-04T16:39:01Z | 2025-01-07T08:29:32Z | 3 | nitinmukesh |
huggingface/diffusers | 10,448 | Load DDUF file with Diffusers using mmap | DDUF support for diffusers is there, and DDUF supports mmap.
But the diffusers example doesn't use or support mmap.
How can I load a DDUF file into diffusers with mmap?
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", t... | https://github.com/huggingface/diffusers/issues/10448 | open | [
"stale"
] | 2025-01-04T00:42:09Z | 2025-02-03T15:02:46Z | 1 | adhikjoshi |
huggingface/lerobot | 613 | Starting off with pretrained models | Are there any pretrained models available that can be fine tuned using our own dataset for tasks like pick and place and manipulation? | https://github.com/huggingface/lerobot/issues/613 | closed | [
"question",
"stale"
] | 2025-01-03T21:09:40Z | 2025-10-08T20:53:09Z | null | rabhishek100 |
huggingface/optimum | 2,148 | Support for Exporting Specific Sub-Modules (e.g., Encoder, Decoder) | ### Feature request
Currently, when converting transformer models (like T5, but potentially others) to ONNX using the Optimum library, it appears to generate a single ONNX file encompassing the entire model architecture (both encoder and decoder). This occurs regardless of the specific task option selected during conv... | https://github.com/huggingface/optimum/issues/2148 | closed | [
"Stale"
] | 2025-01-03T14:48:36Z | 2025-04-08T02:09:03Z | 4 | happyme531 |
pytorch/vision | 8,836 | Question: Modify Resnet File structure and how to import it | Hi, I would like to modify the structure of the model [Resnet50](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). My goal is neither to add nor to remove layers, only to replace the convolutions that are made in the code by the pytorch nn.Conv function with convolutions made by the Nvidia CUTLA... | https://github.com/pytorch/vision/issues/8836 | closed | [] | 2025-01-03T12:43:50Z | 2025-04-08T15:45:32Z | null | IzanCatalan |
huggingface/smolagents | 52 | How to implement human in the loop? | How to implement human in the loop?
There are two scenarios: one where more information and input from the user are required, and another where the user's consent is needed to perform a certain action. | https://github.com/huggingface/smolagents/issues/52 | closed | [] | 2025-01-03T12:19:01Z | 2025-02-18T18:49:15Z | null | waderwu |
huggingface/lerobot | 611 | Can ACT policy support pushT task? | I want to train the ACT policy with the pushT dataset, but the evaluation accuracy is only 0%.

Here is my yaml
[act_pusht.txt](https://github.com/user-attachments/files/18299197/act_pusht.txt)
And my training command is:
pyt... | https://github.com/huggingface/lerobot/issues/611 | closed | [
"question",
"policies",
"stale"
] | 2025-01-03T11:30:40Z | 2025-10-19T02:32:28Z | null | Kimho666 |
pytorch/tutorials | 3,211 | 💡 [REQUEST] - Making the tutorial more coherent | ### 🚀 Describe the improvement or the new tutorial
The 3-series tutorial set (linked in the existing tutorial set) is disconnected in terms of concepts being introduced and reused; like the
- "Dataset", which is introduced in the first tutorial but is not leveraged in the next;
- Intricate details like explanation of use of `t... | https://github.com/pytorch/tutorials/issues/3211 | open | [
"nlp"
] | 2025-01-03T08:46:30Z | 2025-04-16T18:11:36Z | 1 | LunaticMaestro |
pytorch/torchtitan | 770 | How many H100 GPUs should I use to train Llama-3.1-70B models with Torchtitan? | I am planning to train the Llama-3.1-70B model using the Torchtitan framework and need advice on the optimal number of NVIDIA H100 GPUs required. My goal is to ensure efficient training in terms of time and cost, while maintaining a balance between hardware usage and model convergence. I’d appreciate insights on batch ... | https://github.com/pytorch/torchtitan/issues/770 | closed | [] | 2025-01-03T02:21:50Z | 2025-01-04T04:46:32Z | null | jacklanda |
pytorch/executorch | 7,486 | How to run ExecuTorch on Linux with aarch64-oe-linux-gcc11.2? | Hi, I am new to ExecuTorch and currently trying to build and run it on a Linux-based Qualcomm board (QCS/QCM8550). The board's specifications are:
OS: Linux
Compiler: aarch64-oe-linux-gcc11.2
SOC Model: 66
Hexagon Arch: V73
I noticed that most guides are focused on Android environments. Could you please provide ... | https://github.com/pytorch/executorch/issues/7486 | closed | [
"module: doc",
"need-user-input",
"triaged"
] | 2025-01-03T00:28:56Z | 2025-02-04T02:42:53Z | null | suhyun01150 |
huggingface/optimum | 2,147 | Convert Stable Diffusion Inpainting model to FP16 with FP32 inputs | ### Feature request
I've used [this script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) to convert models to ONNX in FP16 format but maintaining the FP32 inputs. One of the models that I converted was [Stable Diffusion 2 Inpainting](https://huggingface.co/jdp8/sd-2-inpainting... | https://github.com/huggingface/optimum/issues/2147 | closed | [] | 2025-01-02T21:28:43Z | 2025-01-25T00:15:54Z | 0 | jdp8 |
huggingface/diffusers | 10,433 | [Docs] Broken Links in a Section of Documentation | ### Broken Links in a Section of Documentation
>Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how ... | https://github.com/huggingface/diffusers/issues/10433 | closed | [] | 2025-01-02T18:24:44Z | 2025-01-06T18:07:39Z | 0 | SahilCarterr |
huggingface/transformers | 35,485 | How to run the model on another machine and send the answer to another machine. | ### System Info
transformers 4.31.0, Windows OS, python 3.10.12
### Who can help?
vision models: @amyeroberts, @qubvel
I have tried using this model on my machine myself, and it works normally, but the processing is very slow because the GPU on my machine is not that powerful. However, I have a server with a str... | https://github.com/huggingface/transformers/issues/35485 | closed | [
"bug"
] | 2025-01-02T10:03:42Z | 2025-01-07T10:20:46Z | null | ixn3rd3mxn |
huggingface/accelerate | 3,320 | How to save self-defined model with deepspeed zero 3? | ### System Info
```Shell
- `Accelerate` version: 1.0.1
- Python version: 3.10.0
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 128.00 GB
- GPU typ... | https://github.com/huggingface/accelerate/issues/3320 | closed | [] | 2025-01-02T08:15:36Z | 2025-02-10T15:07:18Z | null | amoyplane |
pytorch/executorch | 7,467 | How to run Qwen using Executorch? | ### 📚 The doc issue
Hi! I just wanted to know: how would I go about running Qwen using executorch? I was able to create the .pte file for Qwen. The example for Llama had a step 'Create a llama runner for android'. Do we have to do something similar for Qwen by creating a custom runner? Also the Qwen repository on Hugg... | https://github.com/pytorch/executorch/issues/7467 | closed | [
"triaged",
"module: llm"
] | 2025-01-02T07:16:56Z | 2025-08-28T21:17:24Z | null | Arya-Hari |
huggingface/diffusers | 10,425 | Euler Flow Matching Scheduler Missing Documentation for Parameters | ### Describe the bug
The Euler flow matching scheduler in Hugging Face Diffusers is missing clear documentation for its parameters, making it difficult for users to understand how to configure the scheduler effectively for different use cases.
### Reproduction
Steps to Reproduce:
Visit the Hugging Face Diffusers ... | https://github.com/huggingface/diffusers/issues/10425 | closed | [
"bug"
] | 2025-01-02T01:37:38Z | 2025-01-02T01:38:38Z | 0 | hanshengzhu0001 |
huggingface/transformers.js | 1,130 | Tips on Converting Newer Models | ### Question
🎉🎉Happy New Year to the incredible Transformers.js team!🎉🎉
I'm working on converting new (text-generation) models for use with Transformers.js.
Here's what I've tried since last week:
* python converter script
* optimum cli onnx
* onnx-community/convert-to-onnx spaces
The problem I encount... | https://github.com/huggingface/transformers.js/issues/1130 | open | [
"question"
] | 2025-01-01T05:32:09Z | 2025-01-01T05:32:09Z | null | josephencila |
huggingface/lerobot | 606 | Dataset does not support length of feature shape > 1 | Hi,
Thank you for this excellent project!
I am trying to create a custom dataset with additional sensory information (such as tactile data) which is an Array3D tensor, but I find that when I use the approach mentioned in #547, there is no support for adding custom tensor-like observations to the episode buffer.
Spec... | https://github.com/huggingface/lerobot/issues/606 | closed | [
"question",
"dataset",
"stale"
] | 2024-12-31T21:08:26Z | 2025-10-19T02:32:29Z | null | akashsharma02 |
huggingface/finetrainers | 169 | How to build a dataset for finetuning CogVideoX I2V 1.5 | Hi,
I want to finetune the CogVideoX I2V 1.5 (5B) model. I have a set of videos that I want to use, but first I need to preprocess them so they meet the requirements of the model. Do I have to make sure that my fine-tuning dataset meets the generation properties of the model? That is, in the case of CogVideoX 1.5, the... | https://github.com/huggingface/finetrainers/issues/169 | closed | [] | 2024-12-31T19:55:00Z | 2025-03-08T23:43:31Z | null | royvelich |
pytorch/torchtitan | 765 | Can I load from non-FSDP optimizer state with FSDP2? | I have been running training on a different framework with FSDP1, where I saved the states with FULL_STATE_DICT - leading to optimizer states that are in a normal `torch.save` format. I'd love to resume from this checkpoint - is this currently supported by FSDP2 / DCP? When I naively tried `dcp.load`, it resulted in a sha... | https://github.com/pytorch/torchtitan/issues/765 | closed | [
"question"
] | 2024-12-31T15:52:59Z | 2025-01-28T18:47:26Z | null | syncdoth |
huggingface/diffusers | 10,416 | Euler flow matching scheduler is missing documentation for parameters | 
I think there are some undocumented parameters here. | https://github.com/huggingface/diffusers/issues/10416 | closed | [] | 2024-12-31T13:15:35Z | 2025-01-09T18:54:41Z | 4 | bghira |
huggingface/chat-ui | 1,636 | Any way to pass the authorization header from OAuth2 down to a custom endpoint? | ## Describe your feature request
It would be nice to be able to pass the authorization header from OAuth2 to a custom endpoint. I have an endpoint that mimics TGI and I would like to authenticate every request in order to protect the API.
## Implementation idea
Just pass an authorization header from frontend to... | https://github.com/huggingface/chat-ui/issues/1636 | open | [
"enhancement"
] | 2024-12-31T13:00:22Z | 2024-12-31T13:00:22Z | 0 | corte |
huggingface/diffusers | 10,415 | [Pipelines] Add AttentiveEraser | ### Model/Pipeline/Scheduler description
I’ve worked on a project called AttentiveEraser, which is a tuning-free method for object removal in images using diffusion models. The code for this project is built upon modifications to existing Diffusers pipelines, so it should be relatively straightforward to integrate i... | https://github.com/huggingface/diffusers/issues/10415 | closed | [
"stale"
] | 2024-12-31T07:44:48Z | 2025-02-05T15:54:43Z | 7 | Anonym0u3 |
huggingface/diffusers | 10,414 | [<languageCode>] Translating docs to Chinese | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/m... | https://github.com/huggingface/diffusers/issues/10414 | closed | [] | 2024-12-31T06:45:21Z | 2024-12-31T06:49:52Z | 0 | S20180576 |
huggingface/peft | 2,301 | How to pass in an attention_mask that is one dimension more than input_ids | ### System Info
Hello, how can I pass in an `attention_mask` that has one more dimension than `input_ids`, for example: `output = peft_model.generate(input_ids,attention_mask=attention_mask,max_new_tokens=100)`? The `input_ids` dimension is [batch_size, N], and the `attention_mask` dimension is [batch_size, N, N].
Under th... | https://github.com/huggingface/peft/issues/2301 | closed | [] | 2024-12-31T02:26:14Z | 2025-02-07T15:03:57Z | null | Chinesehou97 |
pytorch/pytorch | 143,988 | Add a knob to control how many blocks are used by persistent matmul/attn kernels | ### 🚀 The feature, motivation and pitch
We train a transformer-style model using FSDP, and we have a very good overlap between the matmul kernels (from cuBLAS) and the NCCL operation in the background. However, when profiling, we have observed that the **matmuls take 2x as long** to complete when they are overlapped ... | https://github.com/pytorch/pytorch/issues/143988 | closed | [
"module: cuda",
"triaged",
"module: cublas",
"module: linear algebra"
] | 2024-12-30T16:31:05Z | 2025-07-10T11:20:38Z | null | lw |
huggingface/diffusers | 10,411 | How to use the lora weights obtained from examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py | I followed the tutorial provided at https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final lora weights, but I did not find a way to use them. May I ask if you can provide me with a demo of running and calling these weights? Thank you very much!
The training script:
```
#!/bin/bas... | https://github.com/huggingface/diffusers/issues/10411 | closed | [] | 2024-12-30T12:06:07Z | 2024-12-31T07:21:40Z | null | yangzhenyu6 |
huggingface/text-embeddings-inference | 461 | How to Set the Threshold for gte-multilingual-reranker | I want to use the gte-multilingual-reranker-base model to re-rank the retrieved documents and discard some of them based on a threshold. I have seen examples on Hugging Face where the logits are used as the output scores, but how can I determine the appropriate threshold? | https://github.com/huggingface/text-embeddings-inference/issues/461 | open | [] | 2024-12-30T11:39:48Z | 2025-02-09T06:29:02Z | null | ketusrai |
huggingface/optimum | 2,140 | KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. | ### System Info
```shell
Google Colab
T4 GPU
transformers Version: 4.47.1
optimum Version: 1.24.0.dev0
```
### Who can help?
@michaelbenayoun, @JingyaHuang, @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `example... | https://github.com/huggingface/optimum/issues/2140 | open | [
"bug"
] | 2024-12-30T10:29:14Z | 2024-12-30T10:29:14Z | 0 | Billybeast2003 |
huggingface/optimum-intel | 1,096 | How to use trainer.train() with OVModelForCausalLM() model | I am currently converting a local LLM to OpenVINO. I would like to fine-tune my model with the Trainer function, but I get an error stating: AttributeError: 'OVModelForCausalLM' object has no attribute 'named_children'
Please let me know if there is a way to fine-tune OpenVINO models that are loaded with OVModelForC... | https://github.com/huggingface/optimum-intel/issues/1096 | closed | [] | 2024-12-29T23:54:26Z | 2025-02-27T14:54:20Z | null | CJames1261 |