Dataset schema:
  repo         string (147 distinct values)
  number       int64 (1 to 172k)
  title        string (length 2 to 476)
  body         string (length 0 to 5k)
  url          string (length 39 to 70)
  state        string (2 distinct values)
  labels       list (length 0 to 9)
  created_at   timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
  updated_at   timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
  comments     int64 (0 to 58)
  user         string (length 2 to 28)
huggingface/diffusers
9,900
Potential bug in repaint?
https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322 According to line 5 of Algorithm 1 in the paper, shouldn't the second part of line 322 drop the `**0.5`? Thanks!
https://github.com/huggingface/diffusers/issues/9900
closed
[]
2024-11-10T10:41:26Z
2024-12-16T19:38:22Z
3
jingweiz
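For context on the question above: Algorithm 1, line 5 of the RePaint paper samples the known region from N(sqrt(ᾱ_t)·x₀, (1−ᾱ_t)·I). Since (1−ᾱ_t) is a *variance*, drawing a sample requires scaling unit noise by the *standard deviation* (1−ᾱ_t)**0.5, which is what the scheduler code does. A minimal stdlib sketch of that sampling step (function name is illustrative, not the diffusers API):

```python
import math
import random

def sample_known_region(x0: float, alpha_prod_t: float, rng: random.Random) -> float:
    """Draw x_t^known ~ N(sqrt(a_bar_t) * x0, (1 - a_bar_t)).

    The distribution's variance is (1 - a_bar_t), so the unit Gaussian
    noise must be scaled by its square root -- hence the `**0.5`."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_prod_t) * x0 + math.sqrt(1.0 - alpha_prod_t) * eps
```

Sampling many values and checking the empirical variance confirms that dropping the `**0.5` would produce the wrong spread.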
pytorch/vision
8,722
The link of **Multi-view Stereo Correspondence** doesn't exist in the doc
### 📚 The doc issue [The link](http://matthewalunbrown.com/patchdata/patchdata.html) of **Multi-view Stereo Correspondence** doesn't exist in [the doc](https://pytorch.org/vision/stable/datasets.html#image-pairs) as shown below: ![Screenshot 2024-11-10 102207](https://github.com/user-attachments/assets/a279a8a3-83...
https://github.com/pytorch/vision/issues/8722
open
[ "module: documentation" ]
2024-11-10T01:31:15Z
2024-11-27T17:56:47Z
3
hyperkai
pytorch/serve
3,362
Trying to find a doc explaining how the scaling works (min_worker to max_worker)
### 📚 The doc issue Can anyone help out? ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/serve/issues/3362
open
[]
2024-11-09T22:01:02Z
2024-11-09T22:01:02Z
null
lschaupp
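On the scaling question above: TorchServe scales each model's worker pool between minWorkers and maxWorkers, which can be set per model in `config.properties` (or at registration time via the management API's `min_worker`/`max_worker` query parameters). A hedged sketch, assuming a hypothetical archive named `mymodel.mar` (all values illustrative):

```properties
# config.properties -- illustrative values, not from the issue
load_models=mymodel.mar
models={\
  "mymodel": {\
    "1.0": {\
      "minWorkers": 1,\
      "maxWorkers": 4,\
      "batchSize": 8,\
      "maxBatchDelay": 100\
    }\
  }\
}
```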
huggingface/finetrainers
82
[question] What is the difference between the CogVideoX schedulers and the normal diffusers schedulers?
### Feature request CogVideoXDPMScheduler vs. DPMScheduler; CogVideoXDDIMScheduler vs. DDIMScheduler. Hi Aryan, is there any sampling difference between these two pairs of samplers? @a-r-r-o-w ### Motivation / ### Your contribution /
https://github.com/huggingface/finetrainers/issues/82
closed
[]
2024-11-09T17:15:57Z
2024-12-19T14:43:23Z
null
foreverpiano
huggingface/optimum
2,092
Add support for RemBERT in the ONNX export
### Feature request Add RemBERT to supported architectures for ONNX export. ### Motivation The support for [RemBert](https://huggingface.co/docs/transformers/model_doc/rembert) was previously available in Transformers; see [here](https://github.com/huggingface/transformers/issues/16308). However, now it seems that R...
https://github.com/huggingface/optimum/issues/2092
closed
[ "onnx" ]
2024-11-08T15:12:34Z
2024-12-02T13:54:10Z
1
mlynatom
pytorch/xla
8,366
Export training model to StableHLO
## ❓ Questions and Help The export API only supports `torch.nn.module` as input; is there any method to export a training model with **step_fn** to StableHLO? Here is a simple training case from [example](https://github.com/pytorch/xla/blob/6454b42fd404d13f2008730ed4ad33b3a91723e3/examples/train_resnet_base.py#L16): ```...
https://github.com/pytorch/xla/issues/8366
closed
[]
2024-11-08T08:02:01Z
2025-01-09T02:00:38Z
3
Zantares
huggingface/lerobot
502
Low accuracy for diffusion policy + ALOHA env + sim_transfer_cube_human dataset
I'm trying to use the diffusion policy and the ALOHA env to train on the sim_transfer_cube_human dataset, but after 60,000 training steps the evaluation accuracy is only 2%-6%, and I don't know why. If I load the pre-trained ACT policy, the accuracy can reach 80%.
https://github.com/huggingface/lerobot/issues/502
open
[ "question", "simulation" ]
2024-11-08T02:20:14Z
2025-11-29T02:48:27Z
null
Kimho666
pytorch/torchchat
1,358
Create doc and tests for distributed inference
### 🚀 The feature, motivation and pitch Once distributed inference integration into torchchat is functional, let's add a docs/distributed.md with an example, and plumb that example into `.ci/scripts/run-docs distributed`. (updown.py extracts all commands between triple backticks into a test script.) torchchat ha...
https://github.com/pytorch/torchchat/issues/1358
closed
[ "documentation", "actionable", "Distributed", "triaged" ]
2024-11-08T02:08:33Z
2025-01-18T06:15:01Z
2
mikekgfb
huggingface/local-gemma
41
How to load from file?
How can I load a model from a file, e.g. an .h5 file, instead of downloading it? In particular, a model saved by keras_nlp.
https://github.com/huggingface/local-gemma/issues/41
open
[]
2024-11-07T03:01:25Z
2024-11-07T03:03:31Z
null
datdq-abivin
pytorch/FBGEMM
3,338
How to add -r in the build instructions?
<img width="1053" alt="image" src="https://github.com/user-attachments/assets/63c8565c-55b6-4ee0-a209-60862c51fe68">
https://github.com/pytorch/FBGEMM/issues/3338
open
[]
2024-11-07T02:06:21Z
2024-11-07T06:03:40Z
null
zhaozheng09
pytorch/xla
8,359
Query regarding using 1 chip (2 cores of TPU v3) for Inference
## ❓ Questions and Help Hello, I am trying to benchmark the performance of TPU v3 for inference. However, I would like to use 2 cores (1 chip). Please point me to any documentation that I can get started on. Also, is it possible to launch 2 inferences on 2 cores as separate independent processes? (This would just...
https://github.com/pytorch/xla/issues/8359
open
[ "question", "xla:tpu" ]
2024-11-06T18:03:21Z
2025-02-18T12:45:15Z
null
deepakkumar2440
pytorch/vision
8,713
`torchvision.ops.boxes.batched_nms` slow on large box numbers
### 🐛 Describe the bug ## Description `torchvision.ops.boxes.batched_nms` on a CUDA GPU slows down considerably when the number of bounding boxes involved increases. The slowdown is associated with device -> host transfer and is linked to the iterative part of the Non-Maximum Suppression (NMS) algorithm. In a ...
https://github.com/pytorch/vision/issues/8713
closed
[]
2024-11-06T12:58:13Z
2025-02-20T17:16:10Z
1
Ghelfi
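As background for the report above: greedy NMS is inherently sequential, which is why large box counts force repeated device-to-host synchronization on GPU. A pure-Python sketch of the algorithm (illustrative, not the torchvision implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    if inter == 0.0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold):
    """Greedy NMS: keep the best-scoring box, suppress its overlaps, repeat.
    The data-dependent loop is the part that cannot be fully parallelized."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep
```

`batched_nms` reduces this to a single NMS pass by offsetting each class's boxes so that boxes of different classes can never overlap.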
huggingface/diffusers
9,876
Why isn’t VRAM being released after training LoRA?
### Describe the bug When I use train_dreambooth_lora_sdxl.py, the VRAM is not released after training. How can I fix this? ### Reproduction Not used. ### Logs _No response_ ### System Info - 🤗 Diffusers version: 0.31.0.dev0 - Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17 - Running on G...
https://github.com/huggingface/diffusers/issues/9876
open
[ "bug", "stale" ]
2024-11-06T11:58:59Z
2024-12-13T15:03:25Z
14
hjw-0909
pytorch/ao
1,230
How to skip decomposition of dequantize_affine and quantize_affine custom ops in inductor?
I want to use the `torch.ops.quant.quantize_affine` (Q) and `torch.ops.quant.dequantize_affine` (DQ) to represent a quant model DAG in QDQ style, and do quant fusion using inductor's [pattern matcher](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/pattern_matcher.py), for instance: ``` x(i8) w(i8)...
https://github.com/pytorch/ao/issues/1230
closed
[]
2024-11-06T08:01:46Z
2024-11-12T05:35:06Z
null
Nullkooland
huggingface/diffusers
9,866
Flux ControlNet can't be trained; does this script really work?
### Describe the bug Run with num_processes=1, the code breaks down and returns: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by ...
https://github.com/huggingface/diffusers/issues/9866
closed
[ "bug", "stale" ]
2024-11-05T08:51:57Z
2024-12-05T15:19:12Z
4
liuyu19970607
pytorch/executorch
6,655
How to build and run Llama 3.2 1B Instruct with the Qualcomm AI Engine Direct backend?
### Right Case When I follow the doc : https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#enablement, I export the Llama3.2-1B-Instruct:int4-spinquant-eo8 model to xnnpack backend pte successfully, and working alright on cpu. [ ![SpinQuant_XNNPACK](https://github.com/user-attachments/...
https://github.com/pytorch/executorch/issues/6655
open
[ "partner: qualcomm", "triaged", "module: qnn", "module: llm" ]
2024-11-05T08:00:19Z
2025-12-19T19:15:57Z
null
baotonghe
pytorch/serve
3,357
413 Request Entity Too Large
### 📚 The doc issue When making a request, sometimes 413 Request Entity Too Large is reported. Is there any configuration for torchserve that can increase the threshold of request size? ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/serve/issues/3357
open
[]
2024-11-05T02:38:59Z
2025-01-12T05:21:34Z
1
pengxin233
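On the 413 above: TorchServe caps inference payload sizes with the `max_request_size` setting (in bytes) in `config.properties`; `max_response_size` is the outbound counterpart. A hedged sketch raising both limits to 100 MB (values illustrative):

```properties
# config.properties -- illustrative values
max_request_size=104857600
max_response_size=104857600
```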
pytorch/tutorials
3,143
New Search Engine should link to right branch (stable/main/preview pr branch)
The search feature should match the branch the docs were loaded from. Why? The use case I often have is using the search bar to quickly navigate to the page I had just edited in my PR to see how it'd render in prod. The new search engine produces results that always direct to stable, though, so there's no easy way to na...
https://github.com/pytorch/tutorials/issues/3143
closed
[ "regression" ]
2024-11-04T19:42:26Z
2024-11-19T19:19:34Z
0
janeyx99
pytorch/xla
8,355
Offer user guide instructions to users to leverage various `libtpu` versions
## 📚 Documentation Offer user guide instructions to users to leverage various `libtpu` versions. We want users to have a clear understanding of how to set their expectations when choosing between different libtpu options. Here is a snippet of various libtpu versions. I will add more details (as needed) to this bu...
https://github.com/pytorch/xla/issues/8355
closed
[ "usability", "documentation" ]
2024-11-04T18:12:13Z
2025-03-03T18:32:33Z
15
miladm
huggingface/optimum-quanto
346
How to support activation 4bit quantization?
As mentioned in title.
https://github.com/huggingface/optimum-quanto/issues/346
closed
[ "Stale" ]
2024-11-04T09:59:21Z
2024-12-10T02:10:31Z
null
Ther-nullptr
pytorch/vision
8,714
I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?
### 🐛 Describe the bug I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue? ### Versions I am using the torchvision-0.13.1+cu113 version, but it seems that it does not have the datapoints package. How can I solve this issue?
https://github.com/pytorch/vision/issues/8714
closed
[]
2024-11-04T07:23:48Z
2024-12-11T09:35:34Z
5
jiangsu415
huggingface/transformers
34,591
How to retrain the GLIP model on the Object365 dataset
Since I made some modifications to the GLIP model, I need to perform some pre-training again to improve performance. I replaced `_base_ = [../_base_/datasets/coco_detection.py]` with `_base_ = [../_base_/datasets/objects365v1_detection.py]` in `glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py` to train on Obje...
https://github.com/huggingface/transformers/issues/34591
closed
[]
2024-11-04T03:54:17Z
2024-11-04T06:46:17Z
null
Polarisamoon
huggingface/diffusers
9,847
Merge Lora weights into base model
I have finetuned the stable diffusion model and would like to merge the LoRA weights into the model itself. Currently I think in PEFT this is supported using the `merge_and_unload` function, but I can't seem to find this option in diffusers. So is there any way to get a base model but with finetuned weights? And if I am not wr...
https://github.com/huggingface/diffusers/issues/9847
closed
[]
2024-11-02T18:00:28Z
2024-11-03T03:03:45Z
1
yaswanth19
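On the merge question above: recent diffusers releases expose `pipe.fuse_lora()`, which folds loaded LoRA weights into the base weights in place, analogous to PEFT's `merge_and_unload`. Mathematically the merge is W' = W + (alpha/r)·B·A. A dependency-free sketch of that update on plain nested-list matrices (names illustrative):

```python
def merge_lora(W, A, B, alpha, r):
    """Return W' = W + (alpha / r) * (B @ A).

    W is (m x n), B is (m x r), A is (r x n) -- the low-rank factors a
    LoRA adapter stores; merging bakes their product into the base weight."""
    scale = alpha / r
    return [
        [W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
         for j in range(len(W[0]))]
        for i in range(len(W))
    ]
```

After merging, the adapter matrices can be discarded: the merged weight alone reproduces the finetuned behaviour.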
huggingface/chat-ui
1,550
Add full-text search in chat history
## Describe your feature request Allow users to search for specific keywords or phrases within the chat history, making it easier to find and recall previous conversations. ## Screenshots (if relevant) An example of the search bar placement could be found in #1079 ## Implementation idea One possible impl...
https://github.com/huggingface/chat-ui/issues/1550
closed
[ "enhancement" ]
2024-11-01T19:27:41Z
2025-05-28T15:03:19Z
5
kadykov
pytorch/torchchat
1,338
can't build AOTI runner
### 🐛 Describe the bug `torchchat/utils/scripts/build_native.sh aoti` Fails with ``` Building aoti native runner... Defaulting TORCHCHAT_ROOT to /home/warden/source/torchchat/torchchat/utils/scripts/../../.. since it is unset. ~/source/torchchat ~/source/torchchat Synchronizing submodule url for 'tokenizer/t...
https://github.com/pytorch/torchchat/issues/1338
closed
[]
2024-11-01T17:52:21Z
2024-11-01T21:36:12Z
1
byjlw
huggingface/diffusers
9,837
[Feature] Is it possible to customize latents.shape / prepare_latent for context parallel case?
**Is your feature request related to a problem? Please describe.** One may need to extend the code to the context-parallel case, where the latent sequence length needs to be divided. Instead of copying all the code of pipeline.py, the minimum modification is just adding a few lines about dividing the latent shape and all_gat...
https://github.com/huggingface/diffusers/issues/9837
closed
[ "stale" ]
2024-11-01T14:32:05Z
2024-12-01T15:07:36Z
3
foreverpiano
huggingface/diffusers
9,836
[Feature] Can we record layer_id for DiT model?
**Is your feature request related to a problem? Please describe.** Some layerwise algorithms may be based on the layer id. This just needs a simple modification to Transformer2DModel and its inner modules, such as the attention and batch-norm parts: pass the layer_id as an extra parameter.
https://github.com/huggingface/diffusers/issues/9836
closed
[ "stale" ]
2024-11-01T14:26:31Z
2025-01-27T01:31:21Z
9
foreverpiano
huggingface/diffusers
9,835
Unused parameters lead to an error when training controlnet_sd3
### Discussed in https://github.com/huggingface/diffusers/discussions/9834 <div type='discussions-op-text'> <sup>Originally posted by **Zheng-Fang-CH** November 1, 2024</sup> ![b1fa13bdb595284dce31e3cf189876b](https://github.com/user-attachments/assets/12faa0fc-acb8-4c98-ba03-b0e41bc9075a) Is there someone mee...
https://github.com/huggingface/diffusers/issues/9835
closed
[]
2024-11-01T13:57:03Z
2024-11-17T07:33:25Z
6
Daryu-Fan
huggingface/diffusers
9,833
SD3.5-large. Why is it OK when calling with a single thread, but not with multiple threads?
### Describe the bug First, I created a SD3.5-large service: ```python import os os.environ["CUDA_VISIBLE_DEVICES"] = "1" import uuid from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, DDIMScheduler, DDPMParallelScheduler from diffusers import StableDiffusion3Pipeline import torch from transf...
https://github.com/huggingface/diffusers/issues/9833
closed
[ "bug" ]
2024-11-01T08:00:04Z
2024-11-02T02:14:50Z
1
EvanSong77
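On the threading question above: diffusers pipelines mutate internal state during a call (e.g. scheduler step counters), so they are generally not safe to invoke concurrently from multiple threads. A common workaround is to serialize access with a lock; a stdlib sketch under that assumption (the wrapper name is illustrative):

```python
import threading

class SerializedPipeline:
    """Wrap a non-thread-safe callable (e.g. a diffusers pipeline) so
    that concurrent callers are serialized with a lock."""

    def __init__(self, pipe):
        self._pipe = pipe
        self._lock = threading.Lock()

    def __call__(self, *args, **kwargs):
        # Only one thread may run the underlying pipeline at a time.
        with self._lock:
            return self._pipe(*args, **kwargs)
```

For real parallelism (rather than serialized throughput), one pipeline instance per process or per GPU is the safer design.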
huggingface/diffusers
9,825
Support IPAdapters for FLUX pipelines
### Model/Pipeline/Scheduler description IPAdapter for FLUX is available now, do you have any plans to add IPAdapter to FLUX pipelines? ### Open source status - [X] The model implementation is available. - [X] The model weights are available (Only relevant if addition is not a scheduler). ### Provide useful links f...
https://github.com/huggingface/diffusers/issues/9825
closed
[ "help wanted", "wip", "contributions-welcome", "IPAdapter" ]
2024-10-31T23:07:32Z
2024-12-21T17:49:59Z
10
chenxiao111222
huggingface/diffusers
9,822
Loading SDXL loras into Flux
### Describe the bug Currently it's possible to load SDXL LoRAs into Flux without warning. ### Reproduction Is it possible for you to raise a warning (and an error when a boolean is active) when the list of layers here is zero: https://github.com/huggingface/diffusers/blob/41e4779d988ead99e7acd78dc8e7...
https://github.com/huggingface/diffusers/issues/9822
closed
[ "bug" ]
2024-10-31T18:01:29Z
2024-12-10T14:37:32Z
8
christopher5106
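The check requested above amounts to intersecting the checkpoint's parameter names with the model's: if no LoRA key targets any model layer, loading should warn (or raise). A dependency-free sketch of such a guard (names are illustrative, not the diffusers API):

```python
import warnings

def check_lora_compatibility(model_keys, lora_keys, strict=False):
    """Warn (or raise, when strict=True) if a LoRA checkpoint matches
    none of the model's parameter names; return the matched keys."""
    matched = sorted(set(lora_keys) & set(model_keys))
    if not matched:
        msg = ("LoRA checkpoint matches no layers in this model; "
               "it was likely trained for a different architecture.")
        if strict:
            raise ValueError(msg)
        warnings.warn(msg)
    return matched
```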
huggingface/datasets
7,268
load_from_disk
### Describe the bug I have data saved with save_to_disk. The data is big (700Gb). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that? ### Steps to reproduce the bug when trying ...
https://github.com/huggingface/datasets/issues/7268
open
[]
2024-10-31T11:51:56Z
2025-07-01T08:42:17Z
3
ghaith-mq
pytorch/xla
8,342
Instructions in CONTRIBUTING.md for using VS Code don't seem to work
## 📚 Documentation I've followed the instructions in CONTRIBUTING.md to set up a dev environment using VS Code. Next I ran python and tried to import torch_xla as xla, and I got an error: ``` >>> import torch_xla as xla Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/works...
https://github.com/pytorch/xla/issues/8342
closed
[ "documentation" ]
2024-10-30T18:16:38Z
2024-10-30T18:36:37Z
1
mikegre-google
huggingface/peft
2,188
How to change 'modules_to_save' setting when reloading a lora finetuned model
### System Info - `transformers` version: 4.36.2 - Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.19 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True)...
https://github.com/huggingface/peft/issues/2188
closed
[]
2024-10-30T12:26:37Z
2024-12-08T15:03:37Z
null
dengchengxifrank
huggingface/huggingface.js
996
@huggingface/hub: how to use `modelInfo` with proper typing
The `modelInfo` method allows the caller to define which fields will be provided; this was added in https://github.com/huggingface/huggingface.js/pull/946 https://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L9-L11 Here is an example...
https://github.com/huggingface/huggingface.js/issues/996
closed
[]
2024-10-30T10:41:36Z
2024-10-30T12:02:47Z
null
axel7083
huggingface/diffusers
9,802
Multidiffusion (panorama pipeline) is missing segmentation inputs?
I'm looking at the multidiffusion panorama pipeline page (https://huggingface.co/docs/diffusers/en/api/pipelines/panorama). It looks like there is no way to specify the segmentation and associated prompts as in the original paper https://multidiffusion.github.io/ . If the code only has the panorama capability and not t...
https://github.com/huggingface/diffusers/issues/9802
open
[ "stale" ]
2024-10-29T20:15:15Z
2024-12-24T15:03:30Z
5
jloveric
pytorch/TensorRT
3,267
❓ [Question] How do you properly deploy a quantized model with tensorrt
## ❓ Question I have a PTQ model and a QAT model trained with the official pytorch API following the quantization tutorial, and I wish to deploy them on TensorRT for inference. The model is metaformer-like using convolution layers as token mixer. One part of the quantized model looks like this: ![image](https://githu...
https://github.com/pytorch/TensorRT/issues/3267
open
[ "question" ]
2024-10-29T15:06:54Z
2025-03-03T22:30:06Z
null
Urania880519
pytorch/torchtitan
658
Questions about FSDP2 support and memory usage.
What is the current support status of FSDP2 in main PyTorch? I just see this here https://github.com/pytorch/pytorch/blob/main/torch/distributed/_composable/fully_shard.py#L45 > "`torch.distributed._composable.fully_shard` will be removed after PyTorch 2.5." Will FSDP2 be deprecated? Can FSDP1 work with DTensor as well as ...
https://github.com/pytorch/torchtitan/issues/658
closed
[ "question" ]
2024-10-29T11:09:01Z
2025-08-21T02:57:19Z
null
tangjiasheng
huggingface/transformers.js
1,000
Error while converting LLama-3.1:8b to ONNX
### Question Hey @xenova, Thanks a lot for this library! I tried converting [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) to ONNX using the following command (on `main`): ```bash python -m scripts.convert --quantize --model_id "meta-llama/Llama-3.1-8B-Instruct" `...
https://github.com/huggingface/transformers.js/issues/1000
open
[ "question" ]
2024-10-29T09:40:14Z
2024-10-29T09:40:14Z
null
charlesbvll
pytorch/torchchat
1,334
Multimodal Eval Enablement (Looking for Developer to Implement Design)
### 🚀 The feature, motivation and pitch ***Please note that since the actual implementation is going to be simple, and the design has already been reviewed, the purpose of this GitHub Issue is to look for a developer to implement this feature ASAP.*** LLM eval stands for the process of assessing the perplexity, ...
https://github.com/pytorch/torchchat/issues/1334
closed
[ "enhancement", "good first issue", "actionable", "Llama 3.2- Multimodal", "triaged" ]
2024-10-29T01:01:50Z
2025-03-25T06:24:18Z
26
Olivia-liu
huggingface/chat-ui
1,545
Support markdown & code blocks in text input
## Describe your feature request It would be nice to support code blocks in the text input bar; that would make it easier to input code. We could also support basic markdown features like bold or italic, though maybe not headings. ## Screenshots (if relevant) Try https://claude.ai/new to see an example of how this co...
https://github.com/huggingface/chat-ui/issues/1545
open
[ "enhancement", "front" ]
2024-10-28T08:42:58Z
2024-11-11T20:26:32Z
2
nsarrazin
huggingface/peft
2,181
How can I export the model in GGUF format?
### Feature request This is a good project; I just got it today and encountered some problems. My code: ``` python from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Qwen2-0.5B") model = AutoModelForCausalL...
https://github.com/huggingface/peft/issues/2181
closed
[]
2024-10-26T13:51:45Z
2024-10-26T13:59:18Z
null
xu756
pytorch/xla
8,327
Add documentation for persistent caching
## 📚 Documentation Add documentation for persistent caching; the [current documentation](https://github.com/pytorch/xla/blob/310ff8f41858db7782f97542e76aeb60fa527d14/API_GUIDE.md#compilation-caching) briefly explains how to enable the cache, but it does little to 1. introduce the feature 2. explain what p...
https://github.com/pytorch/xla/issues/8327
open
[ "documentation" ]
2024-10-26T01:01:36Z
2024-10-26T01:01:37Z
0
miladm
huggingface/diffusers
9,772
Support ControlNetPlus Union if not already supported
It's not clear if ControlNetPlus is already supported by diffusers: https://github.com/xinsir6/ControlNetPlus/tree/main/pipeline, which consists of a union ControlNet for SDXL. This model seems to be the only SDXL segmentation ControlNet that I'm aware of. If not already supported, it should be! https://github.com/xinsir6/Con...
https://github.com/huggingface/diffusers/issues/9772
closed
[ "help wanted", "Good second issue", "contributions-welcome" ]
2024-10-25T17:43:43Z
2024-12-11T17:07:54Z
5
jloveric
huggingface/transformers.js
994
Will these mistakes have an impact?
### Question After AutoProcessor.from_pretrained is loaded, an error occurred, and the error message is as follows: ````typescript ort-wasm-simd-thread…jsep.wasm:0x10367e0 2024-10-25 20:11:31.705399 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred e...
https://github.com/huggingface/transformers.js/issues/994
open
[ "question" ]
2024-10-25T12:17:03Z
2024-11-12T11:10:11Z
null
aidscooler
pytorch/vision
8,696
PyTorch & Torchvision compatibility issue on Jetson Orin
### 🐛 Describe the bug Previous discussion: https://forums.developer.nvidia.com/t/pytorch-torchversion-compatible-issue-on-l4t35-5-0/310929/9 ```bash daniel@daniel-nvidia:~/Work/yolov5$ python detect.py --weights yolov5s.pt --source ../../Videos/Worlds_longest_drone_fpv_one_shot.mp4 WARNING ⚠️ Python>=3.10 i...
https://github.com/pytorch/vision/issues/8696
open
[]
2024-10-25T07:12:11Z
2024-10-25T07:28:44Z
0
lida2003
huggingface/transformers.js
993
How do I know the loading progress when loading .onnx file?
### Question Because the .onnx file is large(about 170M),I decided to provide a loading progress. Code as below: ```` typescript const modelSettings = { // Do not require config.json to be present in the repository config: { model_type: "custom" }, subfolder: "", proces...
https://github.com/huggingface/transformers.js/issues/993
open
[ "question" ]
2024-10-25T05:52:12Z
2024-10-25T17:54:30Z
null
aidscooler
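On the progress question above: transformers.js accepts a `progress_callback` option in `pipeline()`/`from_pretrained()` that fires with download progress events. The underlying pattern is just chunked reads reporting bytes done versus total; a language-agnostic sketch in Python (stdlib only, names illustrative):

```python
import os

def read_with_progress(path, on_progress, chunk_size=1 << 20):
    """Read a file in chunks, calling on_progress(bytes_done, bytes_total)
    after each chunk -- the same shape of event a download progress
    callback receives."""
    total = os.path.getsize(path)
    done = 0
    data = bytearray()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            data.extend(chunk)
            done += len(chunk)
            on_progress(done, total)
    return bytes(data)
```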
huggingface/finetrainers
70
How to set the resolutions when finetuning I2V model?
I want to train a video diffusion model with lower resolutions. I set height_buckets=256 and width_buckets=256 in prepare_dataset.sh and processed the data, but I run into the following error while running the train_image_to_video_lora.sh script. ValueError: It is currently not possible to generate videos at a different res...
https://github.com/huggingface/finetrainers/issues/70
closed
[]
2024-10-25T05:36:19Z
2024-11-11T18:27:29Z
null
TousakaNagio
huggingface/optimum
2,080
"ValueError: Trying to export a codesage model" while trying to export codesage/codesage-large
### System Info ```shell optimum 1.23.2 MacOS 14.7 Python 3.9 ``` ### Who can help? @michaelbenayoun ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (g...
https://github.com/huggingface/optimum/issues/2080
open
[ "bug" ]
2024-10-25T05:27:22Z
2024-10-25T05:27:22Z
0
TurboEncabulator9000
pytorch/pytorch
138,888
How to Implement multi-card parallel Inference by torchrun?
Hello everyone, I'm trying to use torchrun for dual-card parallel inference, and I have two questions. First, I found that torchrun is mainly used for model training, so can it be used for model inference? If it can, my inference process is divided into two parts: model loading and inference. I only w...
https://github.com/pytorch/pytorch/issues/138888
closed
[ "oncall: distributed" ]
2024-10-25T03:52:20Z
2024-11-27T01:05:31Z
null
lcf2610
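On the torchrun question above: torchrun is a process launcher, not a training-only tool, so it works for inference as well. It spawns WORLD_SIZE processes and sets RANK, LOCAL_RANK, and WORLD_SIZE in each one's environment; each process can then load the model once and take a strided slice of the inputs. A minimal sketch of the sharding step:

```python
import os

def shard_for_rank(items, rank=None, world_size=None):
    """Return the strided slice of `items` this rank should process.
    Under torchrun, RANK and WORLD_SIZE are set in each process's
    environment, so the defaults read them from there."""
    if rank is None:
        rank = int(os.environ.get("RANK", 0))
    if world_size is None:
        world_size = int(os.environ.get("WORLD_SIZE", 1))
    return items[rank::world_size]
```

Each process would typically also pin itself to its own device (e.g. the GPU indexed by LOCAL_RANK) before running the model on its shard.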
huggingface/chat-ui
1,543
RFC: enable multimodal and tool usage at once for OAI endpoints?
https://github.com/huggingface/chat-ui/blob/8ed1691ecff94e07d10dfb2874d3936d293f4842/src/lib/server/endpoints/openai/endpointOai.ts#L191C53-L191C65 I just played around with combining both of these. What do you think about making tool calling available only if no image is in the conversation? Otherwise we need to insert models twi...
https://github.com/huggingface/chat-ui/issues/1543
open
[]
2024-10-24T17:37:50Z
2024-10-24T17:39:14Z
0
flozi00
pytorch/tutorials
3,113
💡 [REQUEST] - Update tutorials with device-generic APIs
### 🚀 Describe the improvement or the new tutorial We should use the latest device-generic APIs when they come out in 2.6 in all tutorials to improve readability. ### Existing tutorials on this topic https://pytorch.org/tutorials/beginner/basics/buildmodel_tutorial is an example of one we should update. There is mo...
https://github.com/pytorch/tutorials/issues/3113
closed
[]
2024-10-24T17:33:29Z
2025-01-29T09:35:10Z
3
albanD
huggingface/transformers.js
991
Loading models from "non-URL" locations in the browser
### Question Hi! I have an application where the model files will be pre-loaded in a custom format into the browser's IndexedDB. Based on my understanding, transformers.js currently only supports loading models by URL and then caches them in the browser cache. Getting the model files from IndexedDB instead seems a li...
https://github.com/huggingface/transformers.js/issues/991
open
[ "question" ]
2024-10-24T12:18:19Z
2024-12-04T19:30:07Z
null
AKuederle
huggingface/finetrainers
68
How to set the hyperparameters when finetuning I2V model with LoRA?
File "/home/shinji106/ntu/cogvideox-factory/training/dataset.py", line 411, in __iter__ self.buckets[(f, h, w)].append(data) KeyError: (16, 320, 720) The resolution is (13, 320, ...
https://github.com/huggingface/finetrainers/issues/68
closed
[]
2024-10-24T08:06:33Z
2025-01-10T23:40:06Z
null
TousakaNagio
huggingface/datasets
7,249
How to debug
### Describe the bug I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder classes (which contain the _info, _split_generators and _generate_examples methods). Testing with simple data was able to output the results of the ...
https://github.com/huggingface/datasets/issues/7249
open
[]
2024-10-24T01:03:51Z
2024-10-24T01:03:51Z
null
ShDdu
huggingface/sentence-transformers
3,015
How to customize the dataloader? e.g. Custom Data Augmentation
Hi, I've always been used to the old .fit behaviour where I could pass in my own DataLoader, implementing the Dataset myself according to my needs. With the new trainer interface, how am I supposed to tweak the dataloader? Let's say I want to apply some random transformations to the input text; how can I d...
https://github.com/huggingface/sentence-transformers/issues/3015
open
[]
2024-10-23T17:11:13Z
2024-11-15T10:32:35Z
null
msciancalepore98
huggingface/diffusers
9,756
Could not find loading_adapters.ipynb
### Describe the bug while reading doc [Load adapters](https://huggingface.co/docs/diffusers/using-diffusers/loading_adapters) I tried to open in Colab to run an example on this page. <img width="504" alt="open_colab" src="https://github.com/user-attachments/assets/0b1397f1-d266-4d83-84ab-276ea796a2a4"> I...
https://github.com/huggingface/diffusers/issues/9756
closed
[ "bug" ]
2024-10-23T13:03:11Z
2024-11-01T15:27:56Z
6
thliang01
huggingface/accelerate
3,190
How to save the optimizer state while enabling Deepspeed to save the model
### System Info ```Shell Unrelated to configuration ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such...
https://github.com/huggingface/accelerate/issues/3190
closed
[]
2024-10-23T11:58:08Z
2024-11-01T02:53:38Z
null
ITerydh
huggingface/diffusers
9,750
Is it possible to provide img2img code for CogView3?
Is it possible to provide img2img code for CogView3?
https://github.com/huggingface/diffusers/issues/9750
open
[ "stale", "contributions-welcome" ]
2024-10-23T07:40:38Z
2024-12-20T15:04:01Z
3
ChalvYongkang
pytorch/serve
3,352
GPU not detected inside torchserve docker container
### 🐛 Describe the bug I am trying to create a Docker image for my custom handler of diffusers. I can create the Docker image and then a Docker container from it, but the Docker container is not able to detect the GPU. I have used the official TorchServe Docker image from Docker Hub, but it still cannot use the GPU...
https://github.com/pytorch/serve/issues/3352
closed
[]
2024-10-23T06:47:13Z
2024-10-23T10:36:06Z
1
dummyuser-123
pytorch/xla
8,301
Provide debugging and troubleshooting tips to Pallas developer
## 📚 Documentation Please provide documentation on how to troubleshoot pallas issues. One place we can put this information is in this [Pallas doc](https://github.com/pytorch/xla/blob/master/docs/source/features/pallas.md) cc @mikegre-google to help review the upcoming PR
https://github.com/pytorch/xla/issues/8301
open
[ "documentation" ]
2024-10-22T22:50:15Z
2024-10-25T21:58:56Z
0
miladm
huggingface/optimum
2,076
Problem converting tinyllama to onnx model with optimum-cli
### System Info ```shell main branch newest local pip install ``` ### Who can help? @michaelbenayoun ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (g...
https://github.com/huggingface/optimum/issues/2076
open
[ "bug" ]
2024-10-22T06:23:51Z
2024-10-22T06:36:42Z
0
hayyaw
pytorch/torchtitan
639
How to load previous distributed checkpoint after using FP8Linear + torch.compile?
FP8Linear + torch.compile changes the parameters' names. If I do convert to FP8Linear -> torch.compile -> FSDP2 wrapping -> load distributed ckpt, the parameter names do not match those in the ckpt we want to resume from. And it's not straightforward to change the parameter names in the distributed ckpt. T...
https://github.com/pytorch/torchtitan/issues/639
closed
[]
2024-10-21T23:27:33Z
2024-10-25T18:35:40Z
null
goldhuang
pytorch/ao
1,132
What is the expected inference steps after I apply torchao in training?
Hello, I have integrated torchao into my training, but I don't think it's 100% clear what inference should look like. Should I use the converted FP8 linear layer to do inference? Is delayed scaling supposed to work in inference? Or should I use the original linear layer to do inference? Thanks a lot in advance ...
https://github.com/pytorch/ao/issues/1132
closed
[ "float8" ]
2024-10-21T22:19:57Z
2024-12-09T18:59:50Z
null
goldhuang
pytorch/torchtitan
638
What is the expected inference steps after I apply torchao in training?
Hello, I have integrated torchao into my training, but I think it's not very clear what inference should look like. Should I use the converted FP8 linear layer to do inference? Is delayed scaling supposed to work in inference? Or should I use the original linear layer to do inference? Thanks in advance if you can...
https://github.com/pytorch/torchtitan/issues/638
open
[ "question" ]
2024-10-21T22:19:06Z
2024-10-22T03:33:39Z
null
goldhuang
pytorch/xla
8,295
litepod and tpu sample not working anymore: https://cloud.google.com/tpu/docs/pytorch-pods
## 🐛 Bug The sample located here doesn't seem to work on a TPU v5e16 pod (it previously did, as of 3 days ago): https://cloud.google.com/tpu/docs/pytorch-pods ## To Reproduce Follow the steps here: https://cloud.google.com/tpu/docs/pytorch-pods Before running the example: 1. set up an SSH key pair using: ssh-keygen -t...
https://github.com/pytorch/xla/issues/8295
closed
[]
2024-10-21T16:43:31Z
2024-10-22T00:54:24Z
8
ttdd11
huggingface/diffusers
9,731
How to use Playground2.5 to train lora with own dataset to generate pictures of a specific style?
### Describe the bug Hi, I have been working on training models using the same dataset as "stabilityai/stable-diffusion-xl-base-1.0" with the script examples/text_to_image/train_text_to_image_lora_sdxl.py, and I achieved quite promising results. Now, I am trying to further improve the performance by switching to...
https://github.com/huggingface/diffusers/issues/9731
open
[ "bug", "stale" ]
2024-10-21T12:10:12Z
2024-11-20T15:03:04Z
null
hjw-0909
huggingface/diffusers
9,727
FLUX.1-dev dreambooth save problem trained on multigpu
### Describe the bug I tried to train flux using accelerate and deepspeed, but when using two L40s, the model could not be saved properly. What is the problem? ### Reproduction train.sh: accelerate launch --config_file config.yaml train_flux.py \ --pretrained_model_name_or_path="./FLUX.1-dev" \ --resolution=1...
https://github.com/huggingface/diffusers/issues/9727
closed
[ "bug" ]
2024-10-21T03:37:23Z
2024-10-29T06:38:00Z
1
jyy-1998
huggingface/diffusers
9,726
FLUX.1-dev dreambooth problem trained on multigpu
### Describe the bug I tried to use accelerate and deepspeed to train flux, and it worked fine when using two L40s, but an error occurred when using two A100s. What is the reason? ### Reproduction train.sh: accelerate launch --config_file config.yaml train_flux.py \ --pretrained_model_name_or_path="./FLUX.1-dev"...
https://github.com/huggingface/diffusers/issues/9726
closed
[ "bug" ]
2024-10-21T03:20:44Z
2024-10-21T03:32:42Z
0
jyy-1998
huggingface/tokenizers
1,661
How to Read Information in Large Tokenizer's Vocabulary
TLDR; This is how the byte-level BPE works. Main advantages are: - Smaller vocabularies - No unknown token This is totally expected behavior. The byte-level BPE converts all the Unicode code points into multiple byte-level characters: 1. Each Unicode code point is decomposed into bytes (1 byte for ASCII characte...
https://github.com/huggingface/tokenizers/issues/1661
closed
[]
2024-10-20T13:38:53Z
2024-10-21T07:29:43Z
null
kaizhuanren
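The byte-level decomposition described in the answer above can be sketched in plain Python. This is a simplified illustration of the GPT-2-style byte-to-unicode mapping (not the actual Rust implementation inside `tokenizers`): printable bytes keep their own code point, and all remaining bytes are remapped to visible characters above code point 255, so every possible input decomposes into known symbols and no unknown token is ever needed.

```python
def bytes_to_unicode():
    # Printable byte values keep their own code point; the rest are
    # remapped to code points >= 256 so every byte is a visible char.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

BYTE_TO_CHAR = bytes_to_unicode()

def to_byte_level(text: str) -> str:
    # UTF-8 encode, then map each byte to its visible stand-in char.
    return "".join(BYTE_TO_CHAR[b] for b in text.encode("utf-8"))

print(to_byte_level(" "))    # space is a non-printable byte -> 'Ġ'
print(to_byte_level("abc"))  # ASCII letters map to themselves
```

This also shows why a multi-byte code point appears as several characters in the vocabulary: a 2-byte UTF-8 character becomes two byte-level symbols.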
pytorch/torchtitan
636
DDP + Pipeline parallelism
For fine-tuning/training with `PP + DDP`, is there documentation, or a modification that can be made, to achieve this using torchtitan? The following check in `parallelize_llama.py` was the point of error when trying this configuration on my end. `if world_mesh.ndim > 1: raise RuntimeError("DDP has not...
https://github.com/pytorch/torchtitan/issues/636
closed
[ "question" ]
2024-10-20T12:36:55Z
2024-11-08T00:03:05Z
null
prathameshtd
pytorch/torchtitan
635
data shuffling
I understand that the current version of the code doesn't shuffle the data during training, _i.e._ examples are consumed in order in each rank (in fact, there's a note to that effect [here](https://github.com/pytorch/torchtitan/blob/0edd2fb36c8c3468086986efd049e9bb0ff3414e/torchtitan/datasets/hf_datasets.py#L99)). I'm ...
https://github.com/pytorch/torchtitan/issues/635
closed
[ "question" ]
2024-10-20T03:39:35Z
2024-10-24T02:08:43Z
null
eminorhan
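The in-order consumption described in the issue above is what a bounded shuffle buffer is meant to approximate when the full dataset cannot be held in memory. A minimal generic sketch of that technique follows (an illustration only, not torchtitan's actual dataloader code; the function name and parameters are made up for this example):

```python
import random
from typing import Iterable, Iterator

def shuffle_buffer(stream: Iterable, buffer_size: int, seed: int = 0) -> Iterator:
    """Yield items from `stream` in approximately random order using a
    fixed-size buffer: fill it, then swap each incoming item into a
    random slot and yield the evicted item."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        if len(buf) < buffer_size:
            buf.append(item)
        else:
            i = rng.randrange(buffer_size)
            buf[i], item = item, buf[i]
            yield item
    rng.shuffle(buf)  # drain what remains in random order
    yield from buf

out = list(shuffle_buffer(range(10), buffer_size=4, seed=42))
print(out)  # a permutation of 0..9, local within the buffer window
```

The buffer size trades memory for shuffle quality: items can only move at most `buffer_size` positions relative to the stream order, which is why larger buffers give a better approximation of a full shuffle.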
huggingface/diffusers
9,719
`disable_progress_bar` is ignored for some models (Loading checkpoint shards)
### Describe the bug When loading some pipelines, `diffusers.utils.logging.disable_progress_bar()` doesn't disable all progress bars. In particular the "Loading checkpoint shards" progress bar still appears. The "Loading pipeline components..." progress bar, however, is disabled as expected. Models I found, where this...
https://github.com/huggingface/diffusers/issues/9719
closed
[ "bug" ]
2024-10-19T17:42:37Z
2024-10-19T19:29:12Z
2
JonasLoos
pytorch/tutorials
3,100
💡 [REQUEST] - Add minGRU Tutorial for Efficient Sequence Modeling
### 🚀 Describe the improvement or the new tutorial I propose adding a tutorial on implementing and using minGRU (minimal Gated Recurrent Unit) to the PyTorch tutorials. This addition would provide valuable insights into efficient sequence modeling techniques for the PyTorch community. - Efficiency: Up to 1324x...
https://github.com/pytorch/tutorials/issues/3100
closed
[]
2024-10-19T16:35:32Z
2025-04-16T22:02:23Z
1
dame-cell
huggingface/optimum
2,069
High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release
### System Info ```shell Optimum version: 1.22.0 Platform: Linux (Ubuntu 22.04.4 LTS) Python version: 3.12.2 ONNX Runtime Version: 1.19.2 CUDA Version: 12.1 CUDA Execution Provider: Yes (CUDA 12.1) ``` ### Who can help? @JingyaHuang @echarlaix ### Information - [ ] The official example scripts ...
https://github.com/huggingface/optimum/issues/2069
closed
[ "question", "Stale" ]
2024-10-19T02:45:54Z
2024-12-25T02:02:08Z
null
niyathimariya
pytorch/data
1,344
Delete datapipes and dataloader 2 documentation
### 📚 The doc issue Since these are gone on main, we should delete nightly documentation as well. Basically they need to disappear from here: https://pytorch.org/data/main/ ### Suggest a potential alternative/fix _No response_
https://github.com/meta-pytorch/data/issues/1344
closed
[ "documentation" ]
2024-10-18T23:14:59Z
2024-10-19T20:29:46Z
0
andrewkho
huggingface/transformers.js
981
Any gotchas with manually adding items to transformers-cache?
### Question For [papeg.ai](https://www.papeg.ai) I've implemented service-worker caching of the `.wasm` files from `jsDelivr` that Transformers.js [wasn't caching itself yet](https://github.com/huggingface/transformers.js/issues/685#issuecomment-2325125036). I've been caching those files in the 'main' Papeg.ai...
https://github.com/huggingface/transformers.js/issues/981
open
[ "question" ]
2024-10-18T12:53:07Z
2024-10-18T12:56:21Z
null
flatsiedatsie
huggingface/transformers
34,241
How to output token by token use transformers?
### System Info ... ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ... ### Expect...
https://github.com/huggingface/transformers/issues/34241
closed
[ "Discussion", "bug" ]
2024-10-18T09:45:19Z
2024-11-26T08:04:43Z
null
xuanzhangyang
huggingface/lerobot
477
Collecting human operated datasets in simulation
Hello, Can you provide info on how human supervision was provided for the simulated datasets (e.g. `lerobot/aloha_sim_transfer_cube_human`)? I am starting to setup a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect/train on some human teleop data, bu...
https://github.com/huggingface/lerobot/issues/477
closed
[ "question", "dataset", "simulation" ]
2024-10-17T23:24:17Z
2025-10-08T08:49:32Z
null
mmurray
pytorch/pytorch
138,280
Refactor FlexibleLayout to separate out "this stride can be changed" and "how this buffer is allocated can be changed"
### 🚀 The feature, motivation and pitch Currently, we have two layouts: - FixedLayout - FlexibleLayout Where FixedLayout basically means "We already decided the layout, don't change it" while FlexibleLayout means "we are free to change this layout". However, I think there are actually two different components...
https://github.com/pytorch/pytorch/issues/138280
open
[ "triaged", "oncall: pt2", "module: inductor", "internal ramp-up task" ]
2024-10-17T23:10:36Z
2025-12-02T17:11:15Z
null
Chillee
huggingface/lighteval
365
[FT] Using lighteval to evaluate a model on a single sample, how?
Thank you to the team for the great work. I have a question: can you please help me use lighteval to evaluate a model on a single sample? For example, if I have an input I from MMLU and my model generates output O, how can I use lighteval to evaluate O using the Acc metric? Thanks!
https://github.com/huggingface/lighteval/issues/365
closed
[ "feature" ]
2024-10-17T12:43:45Z
2024-10-24T10:12:54Z
null
dxlong2000
huggingface/diffusers
9,700
Flux inversion
The current img2img does not work very well. [RF Inversion](https://rf-inversion.github.io/) provides an inversion method for real-image editing with Flux; can we implement it using diffusers? Or how can we use DDIM inversion with Flux?
https://github.com/huggingface/diffusers/issues/9700
closed
[]
2024-10-17T07:03:59Z
2024-12-17T16:00:30Z
8
yuxu915
pytorch/pytorch
138,179
How to resolve the libfmt.a conflict in React Native.
### 🚀 The feature, motivation and pitch I want to develop a React Native module that primarily integrates LibTorch and includes some methods for loading models and making predictions. I created the module using `npx create-expo-module` and then proceeded with the development. When I run `pod install` in iOS, i...
https://github.com/pytorch/pytorch/issues/138179
closed
[ "triage review" ]
2024-10-17T06:37:21Z
2024-10-21T17:35:00Z
null
wangyujiaoflag
pytorch/xla
8,270
Clarify that torch_xla2 is only recommended for inference
## 📚 Documentation <!-- A clear and concise description of what content is an issue. --> My understanding is that torch_xla2 is only recommended for inference. Address this in the [README](https://github.com/pytorch/xla/tree/master/experimental/torch_xla2)
https://github.com/pytorch/xla/issues/8270
closed
[ "question", "documentation" ]
2024-10-17T04:53:36Z
2025-02-27T13:08:45Z
null
cloudchrischan
huggingface/diffusers
9,698
Unable to Retrieve Intermediate Gradients with CogVideoXPipeline
### Describe the bug When generating videos using the CogVideoXPipeline model, we need to access the gradients of intermediate tensors. However, we do not require additional training or parameter updates for the model. We tried using register_forward_hook to capture the gradients, but this approach failed because t...
https://github.com/huggingface/diffusers/issues/9698
closed
[ "bug" ]
2024-10-17T04:30:56Z
2024-10-27T10:24:41Z
4
lovelyczli
huggingface/diffusers
9,697
train_text_to_image_sdxl training effect is very poor
I use DeepSpeed for training with train_text_to_image_sdxl.py. 1. The dataset contains 231 samples. 2. DeepSpeed JSON: ![screenshot](https://github.com/user-attachments/assets/f82ad033-d786-4fe4-9264-3b6236304170) 3. Training script: ![screenshot](https://github.com/user-attachments/assets/ae5a6207-dbc8-...
https://github.com/huggingface/diffusers/issues/9697
closed
[]
2024-10-17T03:40:17Z
2024-10-17T08:32:44Z
2
wzhiyuan2016
huggingface/finetrainers
41
cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value
During both I2V and t2V training, sometimes I encountered the error ``` [rank1]: File "/root/projects/cogvideox-factory/training/cogvideox_text_to_video_lora.py", line 762, in main [rank1]: "gradient_norm_before_clip": gradient_norm_before_clip, [rank1]: ^^^^^^^^^^^^^^^^^^^...
https://github.com/huggingface/finetrainers/issues/41
closed
[]
2024-10-16T18:34:19Z
2024-12-06T08:09:46Z
null
Yuancheng-Xu
huggingface/finetrainers
40
How to load the fine-tuned I2V model's LoRA module
I have successfully fine-tuned an I2V model (locally, without pushing to HF) and would like to load it for inference. I use the following code suggested in the readme ``` model_name = "THUDM/CogVideoX-5b-I2V" pipe = CogVideoXImageToVideoPipeline.from_pretrained( model_name, torch_dtype=torch.bfloat16 ).to("...
https://github.com/huggingface/finetrainers/issues/40
closed
[]
2024-10-16T17:25:21Z
2024-12-03T03:01:23Z
null
Yuancheng-Xu
pytorch/pytorch
138,073
`export()` fails for `full((n,), v)` but succeeds for `ones((n,)) * v` where `v` is dynamic
### 🐛 Describe the bug When using `torch.full((n,), v)` to create a tensor with a dynamic value, one receives a `Pending unbacked symbols` error. A simple workaround is to use `torch.ones((n,)) * v`, but unless I'm missing something the former should work just as well. Below is a minimal example to reproduce the...
https://github.com/pytorch/pytorch/issues/138073
closed
[ "oncall: pt2", "module: dynamic shapes", "module: dynamo", "oncall: export" ]
2024-10-16T13:29:52Z
2025-03-26T17:56:33Z
null
kwikwag
huggingface/transformers.js
975
Supporting Multiple Pipelines?
### Question First of all, thank you so much for creating transformers.js! This is a fantastic library, and I had lots of fun building with it! I have a question regarding using pipelines API: Would it be possible to start multiple pipelines? For example, instead of using just one pipeline to run inference, can we ...
https://github.com/huggingface/transformers.js/issues/975
closed
[ "question" ]
2024-10-16T08:06:44Z
2024-10-21T15:58:20Z
null
kelayamatoz
huggingface/chat-ui
1,525
Standardize Chat Prompt Templates to Use Jinja Format
## Describe your feature request Currently, the `chatPromptTemplate` for each model that can be set in env uses **Handlebars** format. However, the `chat_prompt` in the actual model's `tokenizer_config.json` uses **Jinja** format. This inconsistency is causing significant inconvenience. Since **Jinja** is widely use...
https://github.com/huggingface/chat-ui/issues/1525
open
[ "enhancement" ]
2024-10-16T05:26:12Z
2024-11-20T00:44:16Z
8
calycekr
pytorch/torchtitan
620
Is there a way to offload training memory to DRAM (using FSDP2?) for training Llama3-8B with torchtitan?
I am training Llama3-8B using 2 RTX A6000 Ada 48GB GPUs, but got OOM. Is there a way to offload training memory to DRAM (using FSDP2?) when training Llama3-8B with torchtitan? Thanks! ***Error message: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 112.00 MiB. GPU 0 has a total capacity of 47.48 GiB of whi...
https://github.com/pytorch/torchtitan/issues/620
closed
[ "question" ]
2024-10-15T19:54:17Z
2024-10-28T22:27:50Z
null
0781532
pytorch/serve
3,348
Getting started guide client samples broken ?
### 🐛 Describe the bug Following the getting started guide: https://github.com/pytorch/serve/blob/master/docs/getting_started.md I get the following error messages when trying to run the client examples. Am I doing something wrong? ### Error logs ``` serve$ python -m grpc_tools.protoc --proto_path=frontend/serv...
https://github.com/pytorch/serve/issues/3348
open
[]
2024-10-15T16:47:42Z
2024-12-26T04:00:44Z
1
nikste
huggingface/alignment-handbook
201
Full parameter fine-tuning keeps consuming system RAM and lead to crash
I am using alignment handbook to perform a full parameter fine-tuning of llama3 models with Deepspeed stage 2 on my own dataset which is relatively large (400k+ records). The training was performed on a slurm cluster with two nodes (each has 4 H100 GPUs). I have noticed that during the training, the system memory ut...
https://github.com/huggingface/alignment-handbook/issues/201
closed
[]
2024-10-15T15:04:18Z
2024-10-17T18:56:53Z
2
xiyang-aads-lilly
huggingface/chat-ui
1,522
Add example prompt field to tools
## Describe your feature request This lets the user specify a prompt that would call the tool. It can be shown as a demo if you're not sure how to use a tool. We should show it somewhere in the UI so the user can easily start a conversation from that demo. It can also be used for validating that a tool works...
https://github.com/huggingface/chat-ui/issues/1522
open
[ "enhancement", "front", "back", "tools" ]
2024-10-15T12:42:42Z
2024-10-15T12:42:43Z
0
nsarrazin
pytorch/torchtitan
619
Question about torch.compile has better throughput with 128-GPUs than 8-GPUs
Thank you for publishing the paper. I hope you can answer the following question: normally, training speed declines as the number of GPUs increases. However, in the paper, with torch.compile, the speed with 128 GPUs is better than that with 8 GPUs. ![compile](https://github.com/user-a...
https://github.com/pytorch/torchtitan/issues/619
closed
[ "question" ]
2024-10-15T09:14:25Z
2024-11-19T21:37:23Z
null
dz1iang
huggingface/optimum
2,060
Support int8 tinyllama tflite export.
### Feature request A tflite exporter for decoder-only LLMs such as TinyLlama. ### Motivation Some platforms only support full int8 ops, so only full int8 tflite models can be deployed. Is there a support plan? Looking forward to your reply, thank you. ### Your contribution no
https://github.com/huggingface/optimum/issues/2060
closed
[ "feature-request", "Stale" ]
2024-10-15T03:25:54Z
2024-12-09T02:11:36Z
1
hayyaw
huggingface/diffusers
9,673
high cpu usage when loading multiple loras at once.
### Describe the bug Hi, I was building a synthesis system using Celery and diffusers, and I found that the CPU usage of the program goes high when loading LoRAs. It is okay when I use just one worker, but it becomes a problem when using 8 workers at once. It happens when a LoRA is loaded for the first time, and I think it is because of p...
https://github.com/huggingface/diffusers/issues/9673
closed
[ "bug" ]
2024-10-15T01:49:37Z
2024-10-15T05:07:40Z
5
gudwns1215
huggingface/datasets
7,226
Add R as a How to use from the Polars (R) Library as an option
### Feature request The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add: ## Add Polars (R) option The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has Hugging Face functionality as well. ```r library(polars) ...
https://github.com/huggingface/datasets/issues/7226
open
[ "enhancement" ]
2024-10-14T19:56:07Z
2024-10-14T19:57:13Z
null
ran-codes
huggingface/lerobot
472
How to resume training with a higher offline steps than initial set up?
### System Info ```Shell - `lerobot` version: unknown - Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.25.2 - Dataset version: 3.0.1 - Numpy version: 1.26.4 - PyTorch version (GPU?): 2.4.1 (True) - Cuda version: 11080 - Using GPU in script?: <fill ...
https://github.com/huggingface/lerobot/issues/472
closed
[]
2024-10-13T19:28:04Z
2024-10-22T05:51:42Z
null
Takuzenn