| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/huggingface.js | 609 | [Question] What is the correct way to access commit diff results via http? | Data I am interested in:

Here's the endpoint to list commits
https://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main | https://github.com/huggingface/huggingface.js/issues/609 | closed | [] | 2024-04-05T12:00:15Z | 2024-04-09T18:40:05Z | null | madgetr |
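A minimal sketch of calling the commits endpoint quoted in the row above, assuming the `requests` library is available; the printed field names are assumptions based on the public Hub API and may vary:

```python
import requests

# Endpoint quoted in the issue above: lists commits on a model repo's main branch.
url = "https://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

for commit in resp.json():
    # Field names ("id", "title") are assumptions; inspect the JSON to confirm.
    print(commit.get("id"), "-", commit.get("title"))
```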
huggingface/dataset-viewer | 2,661 | Increase the number of backfill workers? | Today, it's 8. Let's try increasing it and see if it speeds up the backfill job.
The current throughput is 577 datasets/minute. | https://github.com/huggingface/dataset-viewer/issues/2661 | open | [
"question",
"P2",
"prod"
] | 2024-04-05T10:42:11Z | 2024-04-05T16:42:13Z | null | severo |
pytorch/TensorRT | 2,730 | ❓ [Question] Running LayerNorm in fp16 | ## ❓ Question
<!-- Your question -->
## What you have already tried
I am trying to convert a transformer model to TRT in fp16 (fp32 works fine 🙂). It includes a bunch of LayerNorms, all of which have explicit casting of inputs to fp32, i.e.:
``` python
class LayerNormFP32(nn.LayerNorm):
def forward(self, x... | https://github.com/pytorch/TensorRT/issues/2730 | open | [
"question"
] | 2024-04-05T09:06:28Z | 2025-04-25T12:01:41Z | null | Tomiinek |
huggingface/transformers | 30,066 | How to calculate the mAP on this network? | ### System Info
I want to evaluate my network with the mean Average Precision. I don't know how to get the class-id of my gt data. Are there any examples to calculate the mAP with this library?
I use the DetrForObjectDetection with my own dataset.
### Who can help?
_No response_
### Information
- [ ] ... | https://github.com/huggingface/transformers/issues/30066 | closed | [] | 2024-04-05T08:32:31Z | 2024-06-08T08:04:08Z | null | Sebi2106 |
huggingface/optimum-quanto | 152 | How does quanto calibrate torch functions? | I have learned that quanto calibrates ops in module form by adding module hooks, but what about torch functions like `torch.sigmoid`, `torch.elu`, and `torch.log`, etc.?
I think the output scale of `torch.sigmoid` could be directly evaluated similarly to quanto's approach with `softmax`. Additionally, `torch.elu` might be sub... | https://github.com/huggingface/optimum-quanto/issues/152 | closed | [
"question"
] | 2024-04-05T06:49:51Z | 2024-04-11T09:41:55Z | null | shuokay |
huggingface/candle | 2,007 | How to run inference of a (very) large model across multiple GPUs? | It is mentioned in the README that candle supports multi-GPU inference, using NCCL under the hood. How can this be implemented? I wonder if there is any available example to look at.
Also, I know PyTorch has things like DDP and FSDP; is candle's support for multi-GPU inference comparable to these techniques? | https://github.com/huggingface/candle/issues/2007 | open | [] | 2024-04-04T13:52:46Z | 2024-08-12T04:53:54Z | null | jorgeantonio21 |
huggingface/candle | 2,006 | How to get different outputs for the same prompt? | I used a Gemma model; it always returned the same outputs for the same prompt.
How can I get different outputs? Is there any method or parameter for sampling? (I even doubt that `top_p` works.)
| https://github.com/huggingface/candle/issues/2006 | closed | [] | 2024-04-04T10:43:31Z | 2024-04-13T11:17:36Z | null | Hojun-Son |
huggingface/chat-ui | 975 | Is it possible to hide the settings from users? Most users do not want to create assistants; they just want to use existing ones. | In the left-hand corner of HuggingChat, "Assistants" and "Settings" are visible. We are considering whether it is possible to hide these options from our users, as they have expressed no interest in creating assistants and prefer to use existing ones. Many thanks for your kind help. Howard | https://github.com/huggingface/chat-ui/issues/975 | open | [] | 2024-04-04T07:33:25Z | 2024-04-04T07:33:25Z | 0 | hjchenntnu |
huggingface/transformers.js | 679 | Speech Recognition/Whisper word level scores or confidence output | ### Question
Hey,
Big thanks for the awesome project!
Is it possible to add a score/confidence for word-level output when using the Speech Recognition/Whisper model?
Would appreciate any direction/comments or suggestions on where to dig to add it.
Happy to submit a PR if I succeed.
Thanks!
| https://github.com/huggingface/transformers.js/issues/679 | open | [
"question"
] | 2024-04-04T07:04:00Z | 2024-04-04T07:04:00Z | null | wobbble |
huggingface/transformers | 30,034 | What is the data file format of `run_ner.py`? | ### Feature request
What is the correct format for a custom dataset in run_ner.py? Would it be possible to include a few lines on this with a helpful example?
### Motivation
I am using the example script run_ner.py from [huggingface](https://github.com/huggingface)/transformers. It is not possible to use standar... | https://github.com/huggingface/transformers/issues/30034 | closed | [
"Good First Issue"
] | 2024-04-04T06:36:30Z | 2024-04-08T11:50:00Z | null | sahil3773mehta |
huggingface/datasets | 6,777 | .Jsonl metadata not detected | ### Describe the bug
Hi I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white... | https://github.com/huggingface/datasets/issues/6777 | open | [] | 2024-04-04T06:31:53Z | 2024-04-05T21:14:48Z | 5 | nighting0le01 |
pytorch/TensorRT | 2,724 | [Question] Model converted using TensorRT is slower than native Pytorch | Hi All,
We are trying to run the `resnet18` model faster than just running the torchvision version on GPU; therefore we planned to convert and quantize the model using TensorRT. However, we did not witness a performance boost after the conversion.
We tried to play with the `ir` mode using both `torch_compile` and `dynamo` in a... | https://github.com/pytorch/TensorRT/issues/2724 | closed | [
"question"
] | 2024-04-03T18:28:20Z | 2024-04-23T18:41:05Z | null | AvivSham |
pytorch/xla | 6,880 | test_train_mp_mnist.py failing for CUDA when GPU_NUM_DEVICES=1 | ## 🐛 Bug
Following [How to run with PyTorch/XLA:GPU](https://github.com/pytorch/xla/blob/master/docs/gpu.md#how-to-run-with-pytorchxlagpu) to test CUDA PJRT plugin. Running a model hangs when GPU_NUM_DEVICES is set to 1. For >1 values works as expected.
## To Reproduce
<!--
It is really important for the tea... | https://github.com/pytorch/xla/issues/6880 | closed | [] | 2024-04-03T09:58:30Z | 2024-04-08T11:27:27Z | 3 | mmakevic-amd |
huggingface/lighteval | 143 | Do an intro notebook on how to use `lighteval` | | https://github.com/huggingface/lighteval/issues/143 | closed | [
"documentation"
] | 2024-04-03T07:53:25Z | 2024-12-05T10:18:42Z | null | clefourrier |
huggingface/accelerate | 2,614 | How to I selectively apply accelerate to trainers | I have two trainers in a script, one is SFTTrainer and one is PPOTrainer, both from trl library. Is it possible to only apply accelerate to PPOTrainer? | https://github.com/huggingface/accelerate/issues/2614 | closed | [] | 2024-04-03T06:39:05Z | 2024-05-21T15:06:36Z | null | zyzhang1130 |
huggingface/sentence-transformers | 2,568 | How to improve sentence-transformers' performance on CPU? | On the CPU, I tried huggingface's optimization.onnx and sentence_transformers, and I found that on the feature_extraction task, optimization.onnx was not as good as sentence_transformers in batch encoding performance.
My question is, are sentence_transformers the current ceiling on CPU performance? | https://github.com/huggingface/sentence-transformers/issues/2568 | closed | [] | 2024-04-03T02:09:14Z | 2024-04-23T09:17:39Z | null | chensuo2048 |
pytorch/serve | 3,065 | improve security doc for model security check | ### 📚 The doc issue
The model URL provided by the customer can potentially contain unsafe content. The existing security doc lacks a summary of guidance for customers on how to overcome this issue.
### Suggest a potential alternative/fix
TorchServe provides 3 different levels of security checks to address this issue. The TorchServe Security doc can be u... | https://github.com/pytorch/serve/issues/3065 | closed | [
"documentation",
"security"
] | 2024-04-02T19:14:36Z | 2024-04-17T18:25:42Z | 0 | lxning |
huggingface/datasets | 6,773 | Dataset on Hub re-downloads every time? | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I am sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whene...
huggingface/transformers.js | 677 | How you debug/measure Python -> Javascript ONNX Conversion | ### Question
I have converted a couple of ONNX models to use ONNXRuntimeWeb, using the Python onnx version as the source. I've spent weeks debugging, though. What's your strategy for comparing tensor values, etc., with these onnx models?
I've console-logged N# of values from the tensor/array to see if the values have... | https://github.com/huggingface/transformers.js/issues/677 | open | [
"question"
] | 2024-04-02T16:16:22Z | 2024-04-02T16:18:03Z | null | matbeedotcom |
huggingface/transformers.js | 676 | How to use fp16 version of the model file? | ### Question
example files: https://huggingface.co/Xenova/modnet/tree/main/onnx | https://github.com/huggingface/transformers.js/issues/676 | closed | [
"question"
] | 2024-04-02T12:10:24Z | 2024-04-03T02:56:52Z | null | cyio |
huggingface/chat-ui | 969 | Display does not automatically update after receiving message | After receiving the message, the chat page does not update and is always in the loading state. The received message can only be displayed after refreshing the page or switching sessions.

| https://github.com/huggingface/chat-ui/issues/969 | open | [
"question"
] | 2024-04-02T06:14:59Z | 2024-04-03T04:26:23Z | null | w4rw4r |
pytorch/rl | 2,053 | [QUESTION] How to reset only certain nested parts of a key with TensorDictPrimer? | Hi, I have an observation spec for a multi-agent environment which looks like this:
```
CompositeSpec(
agents: CompositeSpec(
observation: UnboundedContinuousTensorSpec(
shape=torch.Size([100, 2, 14]),
space=None,
device=cuda:0,
dtype=torch.float32,
... | https://github.com/pytorch/rl/issues/2053 | closed | [] | 2024-04-02T02:53:19Z | 2024-04-18T15:04:25Z | null | kfu02 |
huggingface/dataset-viewer | 2,654 | Tutorial about how to start/run my own local dataset server. | Hey,
I'm new to the dataset server and a rookie in the Web field. I wanted to build my own dataset server; is there any tutorial that can guide me through building one?
Many Thanks | https://github.com/huggingface/dataset-viewer/issues/2654 | closed | [] | 2024-04-02T01:30:12Z | 2024-05-11T15:03:50Z | null | ANYMS-A |
huggingface/accelerate | 2,603 | How to load an FSDP checkpoint model | I have fine-tuned the Gemma 2B model using FSDP, and these are the files available under the checkpoint:
```
optimizer_0 pytorch_model_fsdp_0 rng_state_0.pth rng_state_1.pth scheduler.pt trainer_state.json
```
How can I load the above FSDP object?
Kindly help me with this issue.
| https://github.com/huggingface/accelerate/issues/2603 | closed | [] | 2024-04-01T16:53:24Z | 2024-05-11T15:06:21Z | null | nlpkiddo-2001 |
pytorch/TensorRT | 2,723 | ❓ [Question] Output shape error in deconvolution layer when model is quantized with pytorch-quantization and using torch-tensorrt via torchscript | ## ❓ Question
While using a simple model with int8 quantization (pytorch-quantization), when the output layer is a deconvolution, the torchscript to torch-tensorrt conversion fails with the wrong number of output channels. If a conv layer is used instead of deconv, it works without an error.
## What you have already tried
... | https://github.com/pytorch/TensorRT/issues/2723 | closed | [
"question"
] | 2024-04-01T15:39:16Z | 2024-05-22T18:51:32Z | null | oazeybekoglu |
huggingface/datasets | 6,769 | (Willing to PR) Datasets with custom python objects | ### Feature request
Hi, thanks for the library! I would like to have a huggingface Dataset where one of the columns contains custom (non-serializable) Python objects. For example, minimal code:
```
import datasets

class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    dict(a=MyClass(), b='hello'),
])
```
```
It gives... | https://github.com/huggingface/datasets/issues/6769 | open | [
"enhancement"
] | 2024-04-01T13:18:47Z | 2024-04-01T13:36:58Z | 0 | fzyzcjy |
pytorch/rl | 2,052 | [BUG?] How to handle next with custom environment and check_env_specs() | I recently started learning TorchRL, so it's possible that this is a misunderstanding on my part and not an actual bug.
## Describe the bug
I'm trying to setup a simple spatial arrangement problem using a custom environment. There are N blocks each with an x, y position and a size. My action consists of a block i... | https://github.com/pytorch/rl/issues/2052 | closed | [
"bug"
] | 2024-03-31T22:10:49Z | 2024-04-02T12:00:35Z | null | mneilly |
huggingface/optimum-quanto | 146 | Question about the gradient of QTensor and QBitTensor | I am confused by the gradient of the Quantizer and QBitTensor. Take QTensor as the example:
The evaluation of forward is:
```txt
data = base / scale (1)
data = round(data) (2)
data = clamp(data, qmin, qmax) (3)
```
I think the gradients should be:
```txt
grad_div = 1 / scale (1)
grad_round = 1 (2) #... | https://github.com/huggingface/optimum-quanto/issues/146 | closed | [
"question"
] | 2024-03-31T14:33:10Z | 2024-04-24T13:51:20Z | null | shuokay |
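The gradient reasoning in the row above is why fake-quantization code typically relies on a straight-through estimator for `round`. A minimal PyTorch sketch of that general pattern, not quanto's actual implementation:

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round in the forward pass; pass the gradient through unchanged in backward."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: round() has zero gradient almost everywhere,
        # so we pretend it was the identity.
        return grad_output

def fake_quantize(base, scale, qmin, qmax):
    data = base / scale                   # (1) gradient w.r.t. base is 1/scale
    data = RoundSTE.apply(data)           # (2) exact gradient is 0 a.e.; STE uses 1
    return torch.clamp(data, qmin, qmax)  # (3) gradient is 1 inside [qmin, qmax], 0 outside
```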
pytorch/text | 2,253 | PyTorch 2.4 is not supported by TorchText | I have been working on this for days, trying to install torchtext with PyTorch 2.4, with no luck.
The error message I receive:
```
torchtext 0.17.2 depends on torch==2.2.2
The user requested (constraint) torch==2.4.0.dev20240324+cu121
```
So it seems impossible to use torchtext with the latest version of pytorch.
Is th... | https://github.com/pytorch/text/issues/2253 | open | [] | 2024-03-31T05:07:53Z | 2025-08-11T14:46:49Z | 2 | grant541 |
huggingface/transformers.js | 673 | Is dit-base supported | ### Question
There is a [Huggingface repo](https://huggingface.co/Xenova/dit-base) for the ONNX version of the dit-base model but I can't seem to make it work.
I keep getting the following error:

Is the mode... | https://github.com/huggingface/transformers.js/issues/673 | closed | [
"question"
] | 2024-03-31T01:18:42Z | 2024-03-31T01:48:24Z | null | Maxzurek |
huggingface/datatrove | 143 | Understand the output of deduplication | Hi
I have the Arabic split from CC and am trying to deduplicate it.
I used datatrove for this with a small example.
I got two files in my output folder:
0000.c4_dup and 0000.c4_sig
Could you help me understand this output?
I cannot read their contents, as c/00000.c4_sig is not UTF-8 encoded and they seem to be binary files... | https://github.com/huggingface/datatrove/issues/143 | closed | [
"question"
] | 2024-03-30T23:16:21Z | 2024-05-06T09:30:43Z | null | Manel-Hik |
huggingface/candle | 1,971 | How to use `topk`? | I am trying to use `topk` to implement X-LoRA in Candle, and want to perform `topk` in the last dimension. Specifically, I need the `indices` return value (as returned by [`torch.topk`](https://pytorch.org/docs/stable/generated/torch.topk.html)).
These indices will either be used to create a mask to zero out all t... | https://github.com/huggingface/candle/issues/1971 | closed | [] | 2024-03-30T20:29:45Z | 2024-07-23T02:02:58Z | null | EricLBuehler |
huggingface/transformers.js | 671 | What is involved in upgrading to V3? | ### Question
In anticipation of being able to [generate music](https://github.com/xenova/transformers.js/issues/668) with musicGen I'm attempting to switch my project over to version 3, which I was able to build on my mac.
I noticed that when using SpeechT5, the voice sounds completely garbled. I've attached a zip ... | https://github.com/huggingface/transformers.js/issues/671 | closed | [
"question"
] | 2024-03-29T18:09:23Z | 2024-03-31T13:50:27Z | null | flatsiedatsie |
huggingface/datasets | 6,764 | load_dataset can't work with symbolic links | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metad... | https://github.com/huggingface/datasets/issues/6764 | open | [
"enhancement"
] | 2024-03-29T17:49:28Z | 2025-04-29T15:06:28Z | 1 | VladimirVincan |
huggingface/transformers.js | 670 | Are tokenizers supposed to work in the browser? | ### Question
I'd love to use some pretrained tokenizers, right in my browser. On a number of occasions, I've tried to use this library to load and use a tokenizer in my browser, but it always fails with an error like this:
```
Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of ... | https://github.com/huggingface/transformers.js/issues/670 | closed | [
"question"
] | 2024-03-29T16:10:46Z | 2024-03-29T16:53:21Z | null | Vectorrent |
pytorch/serve | 3,054 | Building frontend from source in docker | ### 📚 The doc issue
Not able to find a way to add the frontend model-server JAR as part of the Docker image to host a TorchServe model.
I was trying to learn to make changes to the frontend for a small fix to customizedMetadata in the management API; the metadata is not JSON-parsed. The changes did not surface when I hosted t... | https://github.com/pytorch/serve/issues/3054 | closed | [
"triaged",
"docker"
] | 2024-03-29T15:54:29Z | 2024-04-04T16:54:06Z | 0 | harshita-meena |
huggingface/transformers.js | 669 | TinyLlama Conversion | ### Question
I ran the converter script on the tinyllama repo for both the TinyLlama models ([intermediate step 1431K 3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) and [chat v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)) and uploaded them to my repo ([intermediate... | https://github.com/huggingface/transformers.js/issues/669 | closed | [
"question"
] | 2024-03-29T14:50:06Z | 2025-10-13T04:57:32Z | null | dmmagdal |
huggingface/datatrove | 142 | Deduplicating local data throws an error | Hi,
I have data on my local machine in the form of a JSONL file, and I want to deduplicate it. I'm using the following example:
```python
sent_dedup_config = SentDedupConfig(
    n_sentences=3,
    split_sentences=False,  # set to False to split on \n instead
    only_dedup_in_index=True,
    min_doc_words=50,
)
```
FI... | https://github.com/huggingface/datatrove/issues/142 | closed | [
"question"
] | 2024-03-29T12:31:30Z | 2024-04-24T14:15:58Z | null | Manel-Hik |
pytorch/pytorch | 122,959 | RuntimeError with PyTorch's MultiheadAttention: How to resolve shape mismatch? | ### 🐛 Describe the bug
I'm encountering an issue regarding the input shape for PyTorch's MultiheadAttention. I have initialized MultiheadAttention as follows:
`attention = MultiheadAttention(embed_dim=1536, num_heads=4)`
The input tensors have the following shapes:
- query.shape is torch.Size([1, 1, 1536])
- B... | https://github.com/pytorch/pytorch/issues/122959 | closed | [] | 2024-03-29T09:19:45Z | 2025-01-22T12:08:21Z | null | YuyaWake |
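For reference, a minimal working sketch of the input shapes `nn.MultiheadAttention` expects with the sizes from the issue above; the key/value shapes here are assumptions, since the issue body is truncated:

```python
import torch
from torch.nn import MultiheadAttention

embed_dim, num_heads = 1536, 4
attention = MultiheadAttention(embed_dim=embed_dim, num_heads=num_heads)

# With the default batch_first=False, inputs are (seq_len, batch, embed_dim).
# key and value must share their seq_len with each other and embed_dim with query.
query = torch.randn(1, 1, embed_dim)  # (L_q=1, N=1, E)
key = torch.randn(5, 1, embed_dim)    # (L_kv=5, N=1, E) -- assumed shape
value = torch.randn(5, 1, embed_dim)  # (L_kv=5, N=1, E) -- assumed shape

out, weights = attention(query, key, value)
print(out.shape)      # torch.Size([1, 1, 1536])
print(weights.shape)  # torch.Size([1, 1, 5]): (N, L_q, L_kv), averaged over heads
```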
pytorch/pytorch | 122,957 | How to export torch.optim.LBFGS using torch.onnx.export | ### 🚀 The feature, motivation and pitch
I have Python code that solves linear equations with torch.optim.LBFGS, and I want to make it work in C++. One possible way is to use libtorch, but I wonder if I can export it like an nn.Module with torch.onnx.export.
Here is my python code:
```
import torch
import torch.nn ... | https://github.com/pytorch/pytorch/issues/122957 | open | [
"module: onnx",
"module: optimizer",
"triaged"
] | 2024-03-29T08:42:49Z | 2024-07-22T09:48:29Z | null | shekmun |
huggingface/optimum-intel | 642 | How to apply a LoRA adapter to a model loaded with OVModelForCausalLM()? | In the transformers library, we can load multiple adapters into the original model with load_adapter, then switch to a specified adapter with set_adapter, like below.
```
# base model
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
# load multiple adapters
model.load_adapter("model/adapter1/", "adap... | https://github.com/huggingface/optimum-intel/issues/642 | closed | [] | 2024-03-29T01:13:44Z | 2024-08-03T12:34:21Z | null | nai-kon |
pytorch/pytorch | 122,916 | MPS torch.where() is giving objectively incorrect results, leading to critical calculation errors | ### 🐛 Describe the bug
I think I have an example of how MPS can get completely different results from CPU. Hopefully the simplicity of this example will be clear and helpful. This may be related to a previous issue noted on this forum (#84936).
```python
import numpy as np
import torch
mps_device = torch.de... | https://github.com/pytorch/pytorch/issues/122916 | closed | [
"triaged",
"module: 64-bit",
"module: correctness (silent)",
"module: mps"
] | 2024-03-28T19:56:17Z | 2025-03-01T16:19:53Z | null | aradley |
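The issue's own example is truncated above; what follows is a hypothetical minimal sketch of the kind of CPU-vs-MPS comparison it describes, not the reporter's actual code:

```python
import torch

# Hypothetical reproduction sketch -- not the code from the issue itself.
x = torch.randn(10)
cond = x > 0

cpu_out = torch.where(cond, x, torch.zeros_like(x))

if torch.backends.mps.is_available():
    mps = torch.device("mps")
    mps_out = torch.where(cond.to(mps), x.to(mps), torch.zeros(10, device=mps))
    # A silent-correctness bug would show up as False here.
    print(torch.equal(cpu_out, mps_out.cpu()))
```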
huggingface/transformers | 29,948 | How to utilize all GPUs when device="balanced_low_0" in the GPU setting | ### System Info
I know that while loading the model in the "balanced_low_0" GPU setting, the model is loaded onto all GPUs apart from GPU 0, which is left to do the text inference (i.e., performing all the calculations to generate a response inside the LLM).
So, as per the given device param... | https://github.com/huggingface/transformers/issues/29948 | closed | [] | 2024-03-28T19:54:09Z | 2024-05-07T13:43:08Z | null | kmukeshreddy |
huggingface/dataset-viewer | 2,649 | Should we support /filter on columns that contain SQL commands? | See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. Clicking on any of the 'classes' leads to an error
<img width="1209" alt="Capture d’écran 2024-03-28 à 15 11 50" src="https://github.com/huggingface/datasets-server/assets/1676121/3aaf779f-0465-429a-bafb-1a16ff5f2901">
... | https://github.com/huggingface/dataset-viewer/issues/2649 | open | [
"question",
"api",
"P2"
] | 2024-03-28T14:14:01Z | 2024-03-28T14:24:34Z | null | severo |
pytorch/serve | 3,051 | Can torchserve return image data? | ### 📚 The doc issue
I have a model that outputs the byte data of an image. I would like to ask how TorchServe should return this type of data.
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/serve/issues/3051 | closed | [
"triaged"
] | 2024-03-28T07:24:56Z | 2024-04-02T22:53:39Z | 1 | pengxin233 |
huggingface/accelerate | 2,593 | How to use a training function rather than training scripts with multiple GPUs and multiple nodes? | I confirmed that the multi-GPU launcher is executed based on the training function using the PrepareForLaunch function in "accelerate/examples/multigpu_remote_launcher.py".
Usually, the "accelerate launch" or "python -m torch.distributed.run" command is used for multi-node, but is there a way to utilize a training f... | https://github.com/huggingface/accelerate/issues/2593 | closed | [] | 2024-03-28T07:05:50Z | 2024-05-05T15:06:26Z | null | wlsghks4043 |
pytorch/TensorRT | 2,720 | ❓ [Question] compiled ExportedProgram is slower than uncompiled model | ## ❓ Question
<!-- Your question -->
I tried compiling a few models with `torch_tensorrt.compile(model, inputs, ir='dynamo', ...)` and each one of them was slower than the respective uncompiled model. I was wondering if I was using torch_tensorrt incorrectly.
## What you have already tried
A minimum example:
`... | https://github.com/pytorch/TensorRT/issues/2720 | open | [
"question"
] | 2024-03-28T06:08:21Z | 2024-04-02T22:02:01Z | null | Qi-Zha0 |
huggingface/alignment-handbook | 144 | Can we please add the option to work with a tokenized dataset, especially for the CPT task. | Since we have the CPT task now, it would be nice to have the ability to feed a tokenized and packed dataset directly. | https://github.com/huggingface/alignment-handbook/issues/144 | open | [] | 2024-03-27T18:31:58Z | 2025-02-27T16:23:06Z | 1 | shamanez |
huggingface/transformers.js | 668 | Is it possible to run a music / sounds generation model? | ### Question
I'd love to create a browser-based music generation tool, or one that can turn text into sound effects. Is that supported?
I guess my more general question is: can Transformers.js run pretty much any .onnx I throw at it, or does each model require some level of implementation before it can be used? | https://github.com/huggingface/transformers.js/issues/668 | closed | [
"question"
] | 2024-03-27T18:22:31Z | 2024-05-13T21:17:54Z | null | flatsiedatsie |
huggingface/optimum-quanto | 139 | Dequantizing tensors using quanto | I noticed the quantized models have these 4 additional features, for every weight in the original, e.g:
```
model.layers.0.mlp.down_proj.activation_qtype,
model.layers.0.mlp.down_proj.input_scale,
model.layers.0.mlp.down_proj.output_scale,
model.layers.0.mlp.down_proj.weight_qtype
```
I guess `qtype` refers to t... | https://github.com/huggingface/optimum-quanto/issues/139 | closed | [
"question"
] | 2024-03-27T18:00:34Z | 2024-04-11T09:22:29Z | null | raunaks13 |
huggingface/safetensors | 458 | Safetensors uses excessive RAM when saving files | Safetensors uses around twice the RAM of `torch.save`:
```python
import resource
import torch
from safetensors.torch import save_file
torch.save({'tensor': torch.randn((500000000))}, 'test.torch')
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
save_file({'tensor': torch.randn((500000000))}, 'tes... | https://github.com/huggingface/safetensors/issues/458 | closed | [
"Stale"
] | 2024-03-27T12:11:38Z | 2024-05-02T01:47:32Z | 1 | sheepymeh |
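The benchmark snippet above is truncated; below is a self-contained sketch of the same measurement, with the caveat that `ru_maxrss` is a process-wide peak, so running the two save paths in separate processes gives cleaner numbers:

```python
import resource

import torch
from safetensors.torch import save_file

def peak_rss():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

tensor = torch.randn(500_000_000)  # ~2 GB of float32

torch.save({"tensor": tensor}, "test.torch")
print("peak after torch.save:", peak_rss())

# Note: this second reading still includes the first call's peak;
# benchmark each save path in its own process for a fair comparison.
save_file({"tensor": tensor}, "test.safetensors")
print("peak after save_file:", peak_rss())
```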
pytorch/text | 2,249 | Why does torchtext reinstall torch? | Hi team, I am trying to install torchtext with torch 2.2.1-cu121 installed, but once I run `pip install torchtext`, pip installs the torch 2.2.1 CPU version for me. Is there any way to avoid this?
The output log:
```bash
Successfully installed torch-2.2.2+cu121 torchaudio-2.2.2+cu121 torchvision-0.17.2+cu121
P... | https://github.com/pytorch/text/issues/2249 | open | [] | 2024-03-27T11:19:41Z | 2024-03-27T11:23:04Z | 0 | WhenMelancholy |
huggingface/transformers | 29,897 | How to fine-tune a language model after extending token embeddings? | If I add some new tokens to a language model, I will get some randomly initialized weights in the embeddings and lm_head. Is there any official way to train only these new weights? Or is all I can do adding hooks to the tensors to zero the gradient for the weights I do not want to change? | https://github.com/huggingface/transformers/issues/29897 | closed | [] | 2024-03-27T08:20:24Z | 2024-03-27T15:01:04Z | null | bluewanderer |
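A minimal sketch of the hook approach the question itself mentions: zeroing the gradient of the pre-existing embedding rows so only the newly added rows train. The model name and token strings are placeholders, not from the issue:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

old_vocab_size = len(tokenizer)
tokenizer.add_tokens(["<new_tok_1>", "<new_tok_2>"])  # placeholder tokens
model.resize_token_embeddings(len(tokenizer))

def freeze_old_rows(grad):
    grad = grad.clone()
    grad[:old_vocab_size] = 0.0  # only the newly added rows keep their gradient
    return grad

# With tied weights (as in gpt2), this hook also covers the lm_head.
model.get_input_embeddings().weight.register_hook(freeze_old_rows)
```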
pytorch/TensorRT | 2,718 | ❓ [Question] Can TensorRT load and run torch_tensorrt models directly? | Can TensorRT load and run torch_tensorrt models directly? I want to export my pytorch model and deploy it with TensorRT. | https://github.com/pytorch/TensorRT/issues/2718 | closed | [
"question"
] | 2024-03-27T07:46:57Z | 2024-06-07T01:10:43Z | null | theNefelibata |
huggingface/text-generation-inference | 1,677 | how to get the latest version number? | In the document, I use "docker run ghcr.io/huggingface/text-generation-inference:latest" to run the latest version of tgi. But in a production environment, I need to fix the version number. I can't find any webpage similar to [docker hub](https://hub.docker.com/r/pytorch/manylinux-cuda102). So how can I use docker comm... | https://github.com/huggingface/text-generation-inference/issues/1677 | closed | [] | 2024-03-27T05:43:49Z | 2024-03-29T02:30:10Z | null | fancyerii |
pytorch/pytorch | 122,756 | How to reduce memory usage for large matrix calculations? |
A_ = torch.sigmoid(torch.matmul(x, x.t()))
x contains the features of hundreds of thousands of nodes; its shape is 700,000×8, where 8 is the number of features extracted from each node.
The calculation requires several TB of memory. How can I reduce the memory overhead? | https://github.com/pytorch/pytorch/issues/122756 | open | [
"triaged"
] | 2024-03-27T02:06:03Z | 2024-04-01T15:59:16Z | null | bowensuuu |
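One standard way to avoid materializing the full 700,000×700,000 matrix is to compute it in row blocks and reduce each block before producing the next. An illustrative sketch; the block size and the row-sum reduction are arbitrary choices, not from the issue:

```python
import torch

x = torch.randn(700_000, 8)

def sigmoid_gram_rows(x, chunk_size=256):
    """Yield sigmoid(x @ x.T) one row block at a time (~0.7 GB per block here)
    instead of materializing the full ~2 TB matrix."""
    for start in range(0, x.shape[0], chunk_size):
        block = torch.sigmoid(x[start:start + chunk_size] @ x.t())
        yield start, block  # consume/reduce each block before the next one

# Example reduction: row sums of A_ without ever holding the full matrix.
row_sums = torch.cat([block.sum(dim=1) for _, block in sigmoid_gram_rows(x)])
```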
pytorch/serve | 3,045 | gRPC Model Metadata using Open Inference Protocol | ### 🐛 Describe the bug
Consider a system where a feature service fetches model metadata that has information on what feature to fetch and finally infer from the model. In order for me fetch this metadata regarding inputs and outputs I am trying to use the recently added [Open inference protocol](https://github.com/... | https://github.com/pytorch/serve/issues/3045 | open | [
"OIP"
] | 2024-03-26T20:52:16Z | 2024-04-02T22:54:39Z | 1 | harshita-meena |
pytorch/xla | 6,822 | Loading large model (e.g. LLMs) | ## ❓ Questions and Help
Hi, I'm trying to load large models on a TPU-V4 Pod. I saw the discussions in the issues about torchdistX and meta devices. I'm wondering whether there is any good or recommended solution now.
I am having trouble installing torchdistX with torch/torchXLA 2.2.0 and the LLaMA model I'm loading doesn't... | https://github.com/pytorch/xla/issues/6822 | closed | [
"question",
"dataloading"
] | 2024-03-26T18:20:16Z | 2025-04-18T12:43:08Z | null | tsb0601 |
huggingface/optimum-quanto | 134 | Should quanto use int dtype in AffineQuantizer instead of uint? | According to code in https://github.com/huggingface/quanto/blob/main/quanto/tensor/qbitstensor.py#L34 I find quanto use uint dtype to store the quantized value in affine quantizer, while in symmetric quantizer it is int dtype
https://github.com/huggingface/quanto/blob/main/quanto/tensor/qtensor.py#L62.
Taking har... | https://github.com/huggingface/optimum-quanto/issues/134 | closed | [
"question"
] | 2024-03-26T14:21:25Z | 2024-04-11T09:25:09Z | null | shuokay |
huggingface/hub-docs | 1,257 | Add section about deprecation of script-based datasets? | Asked here: https://github.com/huggingface/datasets-server/issues/2385#issuecomment-2017984722
> Perhaps a little bit of suggestion from me is to include a disclaimer in the docs so that others are aware that developing a custom script is not supported.
It would also help answer the discussions + we could link in... | https://github.com/huggingface/hub-docs/issues/1257 | open | [
"question"
] | 2024-03-26T13:20:27Z | 2024-03-26T17:49:50Z | null | severo |
pytorch/xla | 6,820 | Help RoPE fusion | ## ❓ Questions and Help
I use the pytorch / torch_xla / openxla toolchain, and I want to fuse the RoPE operator into a custom operator so that the hardware can run it directly. At which layer do you think it is better to do this? In an XLA pass? By defining a RoPE operator at the Python layer? Or does the existing framework ... | https://github.com/pytorch/xla/issues/6820 | closed | [
"question"
] | 2024-03-26T11:54:24Z | 2025-04-18T12:45:22Z | null | ckfgihub |
huggingface/candle | 1,941 | [help] how to update a portion of a long tensor | I'm aware of the closed issue(#1163 ) and understand that Var is mutable and Tensor is immutable by design. But I find it hard to impl some logic if it's impossible to update a portion of a Tensor.
For example, how can I generate a pairwise combination from two 2d tensors:
```rust
let a = Tensor::new(&[[1.... | https://github.com/huggingface/candle/issues/1941 | closed | [] | 2024-03-26T11:47:56Z | 2024-04-07T15:42:45Z | null | michael8090 |
huggingface/optimum | 1,776 | How to convert a model(tf_model.h5) with tokenizer folder to the onnx format | ### Feature request
I have trained the TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. The model is stored inside the folder in a **.h5** format - **tf_model.h5**
Here is the folder structure.
 and for each `gpt` turn a label (thumbs up or thumbs down). But for KTO training, I have only seen datasets with the columns `prompt`, `completion` and `label` (see e.g. https://huggingface.co/datasets/trl-lib/kto-mix-14k).
Do I need to unwind my shareGPT dialogs (se... | https://github.com/huggingface/alignment-handbook/issues/142 | open | [] | 2024-03-26T10:29:38Z | 2024-03-26T10:30:08Z | 0 | DavidFarago |
huggingface/transformers.js | 664 | How to confirm if webgpu actually working in the backend with inferencing | ### Question
Hi Team,
Thanks for the awesome library.
Recently I have been experimenting with running a background-removal model on the client side using WebGPU. I came across this solution https://huggingface.co/spaces/Xenova/remove-background-webgpu.
I tried to replicate the same locally using your V3 branch.
The way I ... | https://github.com/huggingface/transformers.js/issues/664 | open | [
"question"
] | 2024-03-26T08:17:05Z | 2024-07-24T06:13:50Z | null | abiswas529 |
pytorch/serve | 3,042 | Custom class handler missing BaseHandler | ### 📚 The doc issue
I believe the docs for a custom class-level entry point are missing the base class `BaseHandler`. If I'm mistaken, please close this issue.
Link: https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handler-with-class-level-entry-point
### Suggest a potential alternative/... | https://github.com/pytorch/serve/issues/3042 | open | [
"documentation"
] | 2024-03-26T06:54:31Z | 2024-03-26T20:41:02Z | 0 | swstack |
huggingface/dataset-viewer | 2,630 | Take spawning.io opted out URLs into account in responses? | In particular, for images (assets / cached-assets).
Raised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR | https://github.com/huggingface/dataset-viewer/issues/2630 | open | [
"question",
"P2"
] | 2024-03-25T11:49:49Z | 2024-03-25T11:49:58Z | null | severo |
huggingface/datasets | 6,756 | Support SQLite files? | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In ... | https://github.com/huggingface/datasets/issues/6756 | closed | [
"enhancement"
] | 2024-03-25T11:48:05Z | 2024-03-26T16:09:32Z | 3 | severo |
huggingface/dataset-viewer | 2,629 | Detect when a new commit only changes the dataset card? | Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results.
asked here (private slack channel): https://huggingface.slack.com/archives/C04N96UGUFM/p1701862863691809
> Sometimes I d... | https://github.com/huggingface/dataset-viewer/issues/2629 | closed | [
"question",
"improvement / optimization",
"P2"
] | 2024-03-25T10:57:36Z | 2024-06-19T16:02:33Z | null | severo |
huggingface/dataset-viewer | 2,627 | Replace our custom "stale bot" action with the GitHub's one? | See `actions/stale@v5`
```yaml
name: Mark inactive issues as stale
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-is... | https://github.com/huggingface/dataset-viewer/issues/2627 | open | [
"question",
"ci",
"P2"
] | 2024-03-25T10:48:47Z | 2024-03-25T10:49:02Z | null | severo |
pytorch/examples | 1,242 | Pytorch is insufficiently opinionated | ### 🐛 Describe the bug
## Context
Machine learning models can be trained on secret, synthetic, or biased data to create seemingly authoritative probability estimates used for abusive purposes in legal contexts. In Jessica Logan's case, her 911 call was used as "evidence" [(when interpreted by a poorly trained and ... | https://github.com/pytorch/examples/issues/1242 | closed | [] | 2024-03-25T09:13:15Z | 2024-03-26T07:17:14Z | 0 | ghost |
huggingface/candle-paged-attention | 1 | How to use candle-paged-attention in candle models? | Could you provide an example of candle-paged-attention for actual usage in candle models (candle-examples)? Is this crate ready to be used in candle? i.e., tested in end2end model inference? I'm a little bit confused about the construction of block_tables and context_lens. | https://github.com/huggingface/candle-paged-attention/issues/1 | open | [] | 2024-03-25T09:09:24Z | 2024-03-25T12:07:13Z | null | guoqingbao |
pytorch/examples | 1,241 | RuntimeError in Partialconv-master | ## 📚 Documentation
I am getting this error in the signal_handling.py file
<img width="426" alt="image" src="https://github.com/pytorch/examples/assets/126889261/0881dd8e-abb2-467f-bab4-818f3f856418">
that is in miniconda3/lib/python3.12/site-packages/torch/utils/data/_utils/signal_handling.py
How can I fix this?
| https://github.com/pytorch/examples/issues/1241 | open | [] | 2024-03-24T21:37:03Z | 2024-03-26T07:17:49Z | 1 | shaSaaliha |
huggingface/optimum | 1,769 | Accuracy change with BetterTransformer | When transforming the model into BetterTransformer model I'm seeing accuracy drop on the models.
The output scores changes considerably (upto 1-2 decimal points of precision).
**Is accuracy change expected when switching to BetterTransformer ?** I'm not performing any ORT compilation or quantization on the model.
... | https://github.com/huggingface/optimum/issues/1769 | closed | [
"bettertransformer",
"Stale"
] | 2024-03-24T01:28:15Z | 2025-01-15T02:01:10Z | 7 | kapilsingh93 |
pytorch/PiPPy | 988 | How to use PiPPy for large models that won't fit on one GPU | Hello, I was wondering if someone could provide an example or some guidance on how to use PiPPy for models that will not fit on one GPU. I want to run pipeline parallelism with Llama 2 70B on a node with multiple A100 GPUs. However, if I run the pippy_llama.py example, every process will just try to load the whole mod... | https://github.com/pytorch/PiPPy/issues/988 | open | [
"high-pri"
] | 2024-03-23T15:49:18Z | 2024-03-30T00:08:01Z | null | aspiridon0v |
huggingface/optimum-quanto | 129 | Performance of quanto quants vs bnb, AWQ, GPTQ, GGML ? | I was wondering if there were any comparisons done looking at the speed and ppl of `quanto` quantizations with respect to the other quantization techniques out there. | https://github.com/huggingface/optimum-quanto/issues/129 | closed | [
"question"
] | 2024-03-23T11:37:33Z | 2024-04-11T09:22:47Z | null | nnethercott |
huggingface/transformers | 29,826 | How to convert a pretrained Hugging Face model to .pt for deployment? | I'm attempting to convert this [model](https://huggingface.co/UrukHan/wav2vec2-russian) to .pt format. It's working fine for me, so I don't want to fine-tune it. How can I export it to .pt and run inference, for example in Flask?
I tried using this to convert to .pt:
```
from transformers import AutoConfig, AutoPro... | https://github.com/huggingface/transformers/issues/29826 | closed | [] | 2024-03-23T10:09:16Z | 2025-10-13T23:08:57Z | null | vonexel |
huggingface/datasets | 6,750 | `load_dataset` requires a network connection for local download? | ### Describe the bug
Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not ... | https://github.com/huggingface/datasets/issues/6750 | closed | [] | 2024-03-23T01:06:32Z | 2024-04-15T15:38:52Z | 3 | MiroFurtado |
huggingface/dataset-viewer | 2,626 | upgrade to pyarrow 15? | we use pyarrow 14 | https://github.com/huggingface/dataset-viewer/issues/2626 | closed | [
"question",
"dependencies",
"P2"
] | 2024-03-22T18:22:04Z | 2024-04-30T16:19:19Z | null | severo |
pytorch/hub | 343 | How to load a custom YOLOv9 model using torch.hub.load()? | Hi,
I have trained a YOLOv9-e model on a custom dataset from this repo: https://github.com/WongKinYiu/yolov9
Now I tried to load it as below-
[image]
But getting the following error-
[image] | https://github.com/pytorch/hub/issues/343 | | | | | | |
huggingface/diffusers | 7,429 | ...? | Dear developer,
I try to modify the code of [k_diffusion](https://github.com/huggingface/diffusers/blob/9613576191d8613fc550a1ec286adc4f1fc208ec/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L837) to be compatible with controlnet.
But I got incorrect results, t... | https://github.com/huggingface/diffusers/issues/7429 | closed | [] | 2024-03-22T03:33:38Z | 2024-04-18T03:25:55Z | null | YoucanBaby |
pytorch/pytorch | 122,414 | `torch.compile` should result in an optimized module where `module.training` is the same as in the unoptimized module | ### 🚀 The feature, motivation and pitch
Hi, basically what the title says.
The current behavior of `torch.compile` is imo quite unexpected and can lead users to the false belief that a model is in eval mode.
### Alternatives
Alternatively, it would be a good idea to add to the documentation of `torch.compile` that... | https://github.com/pytorch/pytorch/issues/122414 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-june2024"
] | 2024-03-21T15:45:52Z | 2024-07-25T17:43:12Z | null | uwu-420 |
huggingface/transformers | 29,777 | `MistralAttention`: where is the sliding window | Hi,
I'm trying to understand the implementation of Mistral's attention in `MistralAttention`.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L195
It is my understanding that it should always be using local window attention. In `MistralFlashAttention2` this... | https://github.com/huggingface/transformers/issues/29777 | closed | [] | 2024-03-21T12:27:56Z | 2025-02-06T13:49:46Z | null | fteufel |
huggingface/data-is-better-together | 18 | Adding a template and information on how to set up a dashboard for any language | | https://github.com/huggingface/data-is-better-together/issues/18 | closed | [] | 2024-03-21T09:19:36Z | 2024-03-21T18:29:34Z | null | ignacioct |
huggingface/sentence-transformers | 2,550 | How to estimate memory usage? | I would like to use `sentence-transformers` on a low-end machine (CPU-only) to load pre-trained models, such as `paraphrase-multilingual-MiniLM-L12-v2`, and compute a sentence's embedding.
How to estimate memory usage? Is there any guideline to describe the minimum system requirements for loading pre-trained models? | https://github.com/huggingface/sentence-transformers/issues/2550 | open | [] | 2024-03-20T15:46:56Z | 2024-04-02T15:27:05Z | null | ChenZhongPu |
huggingface/optimum-quanto | 125 | Is there any plan to add a function to export ONNX for quantized models, or to run inference on the TVM compiler? | | https://github.com/huggingface/optimum-quanto/issues/125 | closed | [
"question"
] | 2024-03-20T15:38:44Z | 2024-04-11T09:23:55Z | null | ntkhoa95 |
pytorch/pytorch | 122,303 | How to exclude some modules from quantization? | ### 🐛 Describe the bug
Hi there, I am a newcomer to model quantization. I have some problems and hope to get some advice and help from the community. Thanks in advance!
Here is a demo model:
```python
class DemoModel(nn.Module):
def __init__(self):
super(DemoModel, self).__init__()
self.con... | https://github.com/pytorch/pytorch/issues/122303 | open | [
"oncall: quantization"
] | 2024-03-20T12:26:33Z | 2024-03-27T08:22:57Z | null | stricklandye |
huggingface/chat-ui | 947 | The prompt for title generation is not optimal | Hello,
I've noticed that the prompt for title generation is not optimal. For example on my simple message `Hello`... The title I got was `💬 Hello! How can I help you today? Let me know if you have any questions or topics you'd like me to explain. I'll do my best to provide accurate and helpful information. Have a gre... | https://github.com/huggingface/chat-ui/issues/947 | open | [] | 2024-03-20T10:27:11Z | 2024-03-21T18:18:58Z | 5 | ihubanov |
pytorch/xla | 6,778 | SPMD pre-training Llama 2: multi-machine training is very slow | SPMD has normal training speed using eight cards on a single machine, but the communication overhead increases rapidly with multiple machines.
device is:
gpu:A100 * 8 * 2
spmd strategy is:
```
for name, param in model.named_parameters():
shape = (num_devices,) + (1,) * (len(param.shape) - 1)
... | https://github.com/pytorch/xla/issues/6778 | closed | [
"performance",
"xla:gpu",
"distributed"
] | 2024-03-20T03:31:29Z | 2025-04-18T12:49:34Z | 23 | mars1248 |
huggingface/pytorch-image-models | 2,114 | Using timm.create, how can I download weights from a URL instead of HF? | I want to use a URL to load vit_base_patch8_224, and DINO from hf_hub; how can I do this?
"bug"
] | 2024-03-19T14:41:29Z | 2024-04-10T16:47:36Z | null | maywander |
huggingface/transformers.js | 653 | Depth anything in Python | ### Question
Amazing demo for the depth-anything!
I want to have a similar point cloud, but in Python, and wondering what's the logic behind your js [implementation](https://github.com/xenova/transformers.js/blob/main/examples/depth-anything-client/main.js).
Specifically:
1. How do you set up the intrinsic mat... | https://github.com/huggingface/transformers.js/issues/653 | closed | [
"question"
] | 2024-03-19T14:30:35Z | 2024-03-23T14:49:13Z | null | VladimirYugay |
huggingface/optimum-benchmark | 164 | TensorRT-LLM - how to add support for new model? | Hello,
I'm trying to run the ChatGLM, Qwen, or Bloom models on the TensorRT-LLM backend, but I'm getting a NotImplemented exception or a missing key. I think there is a way to add support, but it would be great to have some docs/tutorial on how to do it. | https://github.com/huggingface/optimum-benchmark/issues/164 | closed | [] | 2024-03-19T12:15:16Z | 2024-03-20T08:51:20Z | null | pfk-beta |
huggingface/candle | 1,878 | How to properly implement PT to safetensors conversion | I use the *.pt-format weight file obtained from PyTorch training, convert it to the *.bin format, and then to the *.safetensors format. candle's yolov8 then fails with the error message
Error: cannot find tensor net.b.1.0.bn.running_mean | https://github.com/huggingface/candle/issues/1878 | closed | [] | 2024-03-19T11:51:59Z | 2024-04-06T11:37:24Z | null | EHW-liao |
huggingface/alignment-handbook | 138 | How to select parts to bp in sft | 
As the picture shows, there are some cases where parts of the `gpt` response should not be calculated in the backward pass. If I want to achieve this, what should I do? (Or can you realize thi... | https://github.com/huggingface/alignment-handbook/issues/138 | open | [] | 2024-03-19T10:26:49Z | 2024-03-19T10:26:49Z | null | Fu-Dayuan |
pytorch/torchx | 849 | Missing quotes on torchx install command. | ## 📚 Documentation
I was running the [TorchX Quickstart](https://pytorch.org/torchx/latest/quickstart.html) tutorial and I would get a message saying that the package couldn't be found.

After looking around, I re... | https://github.com/meta-pytorch/torchx/issues/849 | closed | [] | 2024-03-18T23:56:44Z | 2024-03-20T15:06:34Z | 2 | mdevino |
pytorch/pytorch | 122,079 | how to find the source code of the torch.linalg.eigh | ### 📚 The doc issue
What is the iteration process of torch.linalg.eigh?
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/pytorch/issues/122079 | closed | [] | 2024-03-18T07:50:05Z | 2024-03-19T02:27:30Z | null | liweiyangv |
huggingface/gsplat.js | 76 | How to start rendering with a local file path? | Hi, thanks for your work!
I am new to JS and want to ask how to start rendering given a local path. I really appreciate any help you can provide. | https://github.com/huggingface/gsplat.js/issues/76 | open | [] | 2024-03-18T07:13:31Z | 2024-04-18T13:14:24Z | null | yifanlu0227 |
pytorch/xla | 6,766 | How to implement parallel training across TPU devices with XLA 2.X | I found that the latest open-source LLM from Google, Gemma, has two versions of the model structure:
1. https://github.com/google/gemma_pytorch/blob/main/gemma/model_xla.py
2. https://github.com/google/gemma_pytorch/blob/main/gemma/model.py
where the `model_xla` version with `run_xla.sh` and `xla_model_parallel.py` seems us... | https://github.com/pytorch/xla/issues/6766 | closed | [
"question",
"distributed",
"xla:tpu"
] | 2024-03-18T06:34:38Z | 2025-04-18T13:50:47Z | null | Mon-ius |
huggingface/accelerate | 2,560 | [Multi-GPU training] How to specify the backend used in DDP training? | ### System Info
```Shell
.....
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_g... | https://github.com/huggingface/accelerate/issues/2560 | closed | [] | 2024-03-17T01:46:47Z | 2024-05-17T15:06:51Z | null | Luciennnnnnn |
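A minimal sketch of one way to pick the process-group backend through Accelerate's kwargs handlers; the `backend` field on `InitProcessGroupKwargs` assumes a reasonably recent `accelerate` release:

```python
from datetime import timedelta

from accelerate import Accelerator
from accelerate.utils import InitProcessGroupKwargs

# Ask Accelerate to initialize torch.distributed with an explicit backend.
# backend="gloo" is just an example; older accelerate versions only exposed
# init_method and timeout on this handler.
kwargs = InitProcessGroupKwargs(backend="gloo", timeout=timedelta(seconds=1800))
accelerator = Accelerator(kwargs_handlers=[kwargs])
```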