Dataset schema (column: type, value range):

- repo: string, 147 distinct values
- number: int64, 1 to 172k
- title: string, length 2 to 476
- body: string, length 0 to 5k
- url: string, length 39 to 70
- state: string, 2 distinct values
- labels: list, length 0 to 9
- created_at: timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18
- updated_at: timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39
- comments: int64, 0 to 58
- user: string, length 2 to 28
huggingface/chat-ui
417
CodeLlama Instruct Configuration
Hello Guys, Could you guide me in the right direction to get the configuration of the Code Llama Instruct model right? I have this config so far: ``` { "name": "Code Llama", "endpoints": [{"url": "http://127.0.0.1:8080"}], "description": "Programming Assistant", "userMessageToken": "[I...
https://github.com/huggingface/chat-ui/issues/417
open
[ "support", "models" ]
2023-08-28T13:42:09Z
2023-09-13T18:17:50Z
9
schauppi
huggingface/transformers.js
265
Unexpected token
I added this code to my React project. ``` import { pipeline } from "@xenova/transformers"; async function sentimentAnalysis() { // Allocate a pipeline for sentiment-analysis let pipe = await pipeline("sentiment-analysis"); let out = await pipe("I love transformers!"); console.log(out); } sentim...
https://github.com/huggingface/transformers.js/issues/265
closed
[ "question" ]
2023-08-28T13:34:42Z
2023-08-28T16:00:10Z
null
patrickinminneapolis
huggingface/diffusers
4,814
How to add more weight to the text prompt in ControlNet?
Hi, I want to know if there is a quick way of adding more weight to the text prompt in ControlNet during inference. If so, which parameter needs to be changed? Thanks,
https://github.com/huggingface/diffusers/issues/4814
closed
[ "stale" ]
2023-08-28T13:05:16Z
2023-10-30T15:07:45Z
null
miquel-espinosa
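A hedged note on the question above: in diffusers pipelines the usual knob for prompt influence is `guidance_scale` (classifier-free guidance), while `controlnet_conditioning_scale` separately weights the control image. The arithmetic behind `guidance_scale`, sketched with plain floats standing in for tensors:

```python
def cfg_combine(uncond_pred: float, cond_pred: float, guidance_scale: float) -> float:
    """Classifier-free guidance: push the noise prediction toward the
    text-conditioned direction. A larger guidance_scale weights the text
    prompt more heavily at every denoising step."""
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

baseline = cfg_combine(0.2, 0.5, 1.0)   # 0.5: prompt has baseline influence
stronger = cfg_combine(0.2, 0.5, 7.5)   # 2.45: prompt direction amplified
```

So a call such as `pipe(prompt, image=control_image, guidance_scale=7.5)` strengthens the text prompt relative to the default; treat the exact argument set as an assumption to verify against the installed diffusers version.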
huggingface/autotrain-advanced
239
how to start without "pip install autotrain-advanced"
Dear, Thanks for your work. After installing through `pip`, running **`autotrain llm --train --project_name my-llm --model luodian/llama-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft`** can achieve fine-tuning on your own data. I...
https://github.com/huggingface/autotrain-advanced/issues/239
closed
[]
2023-08-28T10:02:37Z
2023-12-18T15:30:42Z
null
RedBlack888
huggingface/datasets
6,186
Feature request: add code example of multi-GPU processing
### Feature request It would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu Currently the docs have a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here"; however, it didn't work f...
https://github.com/huggingface/datasets/issues/6186
closed
[ "documentation", "enhancement" ]
2023-08-28T10:00:59Z
2024-10-07T09:39:51Z
18
NielsRogge
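A minimal sketch of the requested example, under the assumption that `Dataset.map(..., with_rank=True, num_proc=<gpu count>)` hands each worker process its rank (the `embed_batch` body and its column names are hypothetical); only the rank-to-device bookkeeping is shown, and the model call itself is elided:

```python
def device_for_rank(rank: int, num_gpus: int) -> str:
    """Map a dataset worker process rank onto a CUDA device string.

    With Dataset.map(..., with_rank=True, num_proc=num_gpus), each worker
    receives its rank and should pin its model to exactly one GPU.
    """
    if num_gpus <= 0:
        raise ValueError("num_gpus must be positive")
    return f"cuda:{rank % num_gpus}"

def embed_batch(batch, rank, num_gpus=4):
    # Hypothetical worker body: pick this worker's GPU, run the model there
    # (elided), and return the new column for the mapped dataset.
    device = device_for_rank(rank, num_gpus)
    batch["device_used"] = [device] * len(batch["text"])
    return batch
```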
huggingface/autotrain-advanced
238
How to Train Consecutively Using Checkpoints
Hi, I've been using your project and it's been great. I'm a complete beginner in the field of AI, so sorry for such a basic question. Is there a way to train consecutively with checkpoints? Thank you!
https://github.com/huggingface/autotrain-advanced/issues/238
closed
[]
2023-08-28T08:31:30Z
2023-12-18T15:30:42Z
null
YOUNGASUNG
huggingface/transformers.js
264
[Question] TypeScript rewrite
<!-- QUESTION GOES HERE --> Hi Joshua. I found your idea is extremely exciting. I am a frontend developer who has worked on TypeScript professionally for three years. Would you mind me doing a TypeScript re-write, so this npm package can have a better DX. If I successfully transform the codebase into TypeScript and p...
https://github.com/huggingface/transformers.js/issues/264
open
[ "question" ]
2023-08-28T08:29:06Z
2024-04-27T12:05:24Z
null
Lantianyou
huggingface/text-generation-inference
934
How to use a fine-tuned model in text-generation-inference
Hi Team, I fine-tuned the Llama 2 13B model and merged it using the merge_and_upload() functionality. How can I use this merged model with text-generation-inference? **The following command gives an error** ![image](https://github.com/huggingface/text-generation-inference/assets/7765864/22e51673-4a4f-47ba-9b06-158...
https://github.com/huggingface/text-generation-inference/issues/934
closed
[]
2023-08-28T07:36:25Z
2023-08-28T08:53:28Z
null
chintanshrinath
huggingface/peft
869
How to correctly use Prefix Tuning?
### System Info peft 0.5.0 transformers 4.32.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [ ] My own task or dataset (give details below) ### Reproduct...
https://github.com/huggingface/peft/issues/869
closed
[]
2023-08-27T18:03:06Z
2024-11-05T09:49:01Z
null
Vincent-Li-9701
huggingface/transformers
25,783
How to re-tokenize the training set in each epoch?
I have a special tokenizer which can tokenize a sentence based on some probability distribution. For example, 'I like green apple' -> '[I],[like],[green],[apple]' (30%) or '[I],[like],[green apple]' (70%). Now, in the training part, I want the Trainer to re-tokenize the dataset in each epoch. How can I do so?
https://github.com/huggingface/transformers/issues/25783
closed
[]
2023-08-27T16:23:25Z
2023-09-01T13:01:43Z
null
tic-top
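One hedged reading of the question above: the tokenization must stay lazy so it reruns on every pass, e.g. via `datasets.Dataset.set_transform`, which applies a function at access time rather than once up front (verify the hook against the installed datasets version). A toy version of the probabilistic tokenizer the issue describes, with the 70/30 merge rule from its example:

```python
import random

def stochastic_tokenize(sentence: str, rng: random.Random):
    """Toy tokenizer: merges 'green apple' into a single token 70% of the
    time, mirroring the probabilistic segmentation described in the issue."""
    words = sentence.split()
    tokens, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) == ("green", "apple") \
                and rng.random() < 0.7:
            tokens.append("green apple")
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

# Different passes over the data can yield different segmentations:
epoch0 = stochastic_tokenize("I like green apple", random.Random(0))  # unmerged
epoch1 = stochastic_tokenize("I like green apple", random.Random(1))  # merged
```

Wrapping such a function in `dataset.set_transform(...)` means every epoch the Trainer runs re-samples the segmentation, since the transform executes each time a row is read.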
pytorch/rl
1,473
[Feature Request] How to create a compound actor?
## Motivation I created an environment with a compound action space: a list of continuous values (robot joint angles) and a boolean value (suction gripper on or off). In [the PPO tutorial](https://pytorch.org/rl/tutorials/coding_ppo.html) the policy_module is a ProbabilisticActor which takes "loc" and "scale" inp...
https://github.com/pytorch/rl/issues/1473
closed
[ "enhancement" ]
2023-08-27T15:49:38Z
2023-11-03T17:54:54Z
null
hersh
huggingface/optimum
1,318
Is it possible to compile pipeline (with tokenizer) to ONNX Runtime?
### Feature request Is it possible to compile the entire pipeline, tokenizer and transformer, to run with ONNX Runtime? My goal is to remove the `transformers` dependency entirely for runtime, to reduce serverless cold start. ### Motivation I could not find any examples, and could not make this work, so I wonder if ...
https://github.com/huggingface/optimum/issues/1318
open
[ "feature-request", "onnxruntime" ]
2023-08-26T17:57:52Z
2023-08-28T07:58:13Z
1
j-adamczyk
huggingface/trl
695
Reward is getting lower and lower with each epoch, What can be the issue in training?
Hello, I am trying to optimize a T5 fine-tuned model for a text generation task. At the moment, I am using the BLEU score (between two texts) as a reward function. Before the optimization with PPO, the model is able to produce an average BLEU score of 35%; however with PPO, after each epoch, the reward keeps reducing. What...
https://github.com/huggingface/trl/issues/695
closed
[]
2023-08-26T00:22:04Z
2023-11-01T15:06:14Z
null
sakinafatima
huggingface/dataset-viewer
1,733
Add API fuzzer to the tests?
Tools exist, see https://openapi.tools/
https://github.com/huggingface/dataset-viewer/issues/1733
closed
[ "question", "tests" ]
2023-08-25T21:44:10Z
2023-10-04T15:04:16Z
null
severo
huggingface/diffusers
4,778
[Discussion] How to allow for more dynamic prompt_embed scaling/weighting/fusion?
We have a couple of issues and requests for the community that ask for the possibility to **dynamically** change certain knobs of Stable Diffusion that are applied at **every denoising step**. - 1. **Prompt Fusion**. as stated [here](https://github.com/huggingface/diffusers/issues/4496). To implement prompt fusion ...
https://github.com/huggingface/diffusers/issues/4778
closed
[ "stale" ]
2023-08-25T10:03:17Z
2023-11-09T21:42:39Z
null
patrickvonplaten
huggingface/transformers.js
260
[Question] CDN download for use in a worker
Is there a way to get this to work inside a worker: ```html <script type="module"> import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.3'; </script> ``` I noticed you do this: ```js import { pipeline, env } from "@xenova/transformers"; ``` I'm trying to avoid any node modu...
https://github.com/huggingface/transformers.js/issues/260
closed
[ "question" ]
2023-08-24T18:24:51Z
2023-08-29T13:57:19Z
null
quantuminformation
huggingface/notebooks
428
How to load a fine-tuned IDEFICS model for inference?
Hi, recently I fine-tuned the IDEFICS model with PEFT, but I am not able to load the model back. Is there any way to load the model with PEFT for inference?
https://github.com/huggingface/notebooks/issues/428
open
[]
2023-08-24T13:39:22Z
2024-04-25T10:39:55Z
null
imrankh46
huggingface/peft
857
How to load a fine-tuned IDEFICS model with PEFT for inference?
### Feature request Request for the IDEFICS model. ### Motivation I fine-tuned IDEFICS on a custom dataset, but when I load it, it shows an error. ### Your contribution Add a class like AutoPeftModelforVisionTextToText(), to easily load the model.
https://github.com/huggingface/peft/issues/857
closed
[]
2023-08-24T12:34:44Z
2023-09-01T15:46:50Z
null
imrankh46
huggingface/datasets
6,176
how to limit the size of memory mapped file?
### Describe the bug Huggingface datasets use memory-mapped files to map large datasets into memory for fast access. However, it seems like huggingface will occupy all the memory for memory-mapped files, which makes a troublesome situation since our cluster will distribute only a small portion of memory to me (once it's over ...
https://github.com/huggingface/datasets/issues/6176
open
[]
2023-08-24T05:33:45Z
2023-10-11T06:00:10Z
null
williamium3000
huggingface/autotrain-advanced
225
How to run inference with the model
When I launch **autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft** I have this output ![autoTrainDoubt](https://github.com/huggingface/autotrain-advanced/assets/30750249/ac8...
https://github.com/huggingface/autotrain-advanced/issues/225
closed
[]
2023-08-23T20:24:23Z
2023-12-18T15:30:40Z
null
amgomezdev
huggingface/autotrain-advanced
223
How to use captions with Dreambooth?
I'm trying to train an SDXL model with Dreambooth using captions for each image (I have found that this made quite a difference when training for style with the 1.5 model). How can I achieve that using autotrain? If I understand [this line](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/train...
https://github.com/huggingface/autotrain-advanced/issues/223
closed
[]
2023-08-23T15:32:16Z
2023-12-18T15:30:39Z
null
MaxGfeller
huggingface/trl
677
how to run reward_trainer.py
ValueError: Some specified arguments are not used by the HfArgumentParser: ['-f', '/Users/samittan/Library/Jupyter/runtime/kernel-32045810-5e16-48f4-8d44-c7a7f975f8a4.json']
https://github.com/huggingface/trl/issues/677
closed
[]
2023-08-23T09:39:52Z
2023-11-02T15:05:32Z
null
samitTAN
huggingface/chat-ui
412
preprompt not being injected for Llama 2
1. When I alter the preprompt for a Llama 2 type model, it appears to have no impact. It's as though the preprompt is not there. Sample config for .env.local: ``` MODELS=`[ { "name": "Trelis/Llama-2-7b-chat-hf-function-calling", "datasetName": "Trelis/function_calling_extended", "descrip...
https://github.com/huggingface/chat-ui/issues/412
closed
[ "support", "models" ]
2023-08-23T09:15:24Z
2023-09-18T12:48:07Z
7
RonanKMcGovern
huggingface/unity-api
15
How to download the model locally and call it via the API
Because my internet connection is not very good, I would like to download the model to my local machine and use the Hugging Face API for calling. How can I achieve this?
https://github.com/huggingface/unity-api/issues/15
closed
[]
2023-08-23T08:08:40Z
2023-11-08T10:26:34Z
null
haldon98
huggingface/evaluate
485
How to use `SubTask` with metrics that require valid `config_name`
## Issue Currently there does not seem to be a way to define the `config_name` for a metric for a `SubTask` inside an `evaluate.EvaluationSuite`. ## Version evaluate version: 0.4.0 transformers version 4.32.0 Python version Python 3.10.6 ## Example For example, consider the following `EvaluationSuite...
https://github.com/huggingface/evaluate/issues/485
open
[]
2023-08-22T23:15:43Z
2023-08-23T16:38:18Z
null
tybrs
huggingface/diffusers
4,716
How to handle SDXL long prompt
### Describe the bug I am unable to use embeds prompt in order to handle prompt that is longer than 77 tokens. ### Reproduction ```python import itertools import os.path import random import string import time import typing as typ import torch from diffusers import StableDiffusionXLPipeline from tqdm impo...
https://github.com/huggingface/diffusers/issues/4716
closed
[ "bug" ]
2023-08-22T16:28:25Z
2023-08-27T02:46:18Z
null
elcolie
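A hedged sketch of the standard workaround for the 77-token CLIP limit: encode the prompt in windows and concatenate the per-window embeddings into the `prompt_embeds` argument (libraries such as compel automate this). The windowing step itself is plain list arithmetic; the BOS/EOS ids below are CLIP's usual values but should be read as illustrative:

```python
def window_token_ids(token_ids, max_len=77, bos=49406, eos=49407):
    """Split a long token sequence into CLIP-sized windows, re-adding
    BOS/EOS so each window is a valid max_len-token input on its own."""
    body = max_len - 2  # room left after BOS and EOS
    windows = []
    for start in range(0, max(len(token_ids), 1), body):
        chunk = token_ids[start:start + body]
        windows.append([bos] + chunk + [eos])
    return windows
```

Each window then goes through the text encoder and the hidden states are concatenated along the sequence axis before being passed as `prompt_embeds`; check the exact pipeline arguments against the diffusers docs for the installed version.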
huggingface/candle
547
How to turn off automatic translation for whisper
When I input Chinese wav file , whisper outputs the English translation ``` ls@LeeeSes-MacBook-Air ~/r/candle (main)> cargo run --release --features accelerate --example whisper -- --model small --language zh --input /Users/ls/Downloads/output.wav Finished release [optimized] target(s) in 0.38s Running `ta...
https://github.com/huggingface/candle/issues/547
closed
[]
2023-08-22T11:16:45Z
2023-08-22T18:52:40Z
null
LeeeSe
huggingface/trl
674
How to load the model and the checkpoint after training the model?
I trained my model using the code in sft_trainer.py. And I saved the checkpoint and the model in the same dir. But I don't know how to load the model with the checkpoint. Or I just want to know whether `trainer.save_model(script_args.output_dir)` means I have saved a trained model, not just a checkpoint? I try many w...
https://github.com/huggingface/trl/issues/674
closed
[]
2023-08-22T10:31:01Z
2023-11-27T21:34:30Z
null
ccwdb
huggingface/text-generation-inference
899
How to use multiple GPU cards with the text-generation-launcher tool?
### System Info text-generation-launcher 1.0.0 how to use multi gpu cards? ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [X] An officially supported command - [ ] My own modifications ### Reproduction CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --model-id falcon-40b-instruct --sha...
https://github.com/huggingface/text-generation-inference/issues/899
closed
[]
2023-08-22T10:09:17Z
2023-08-22T10:13:06Z
null
luefei
huggingface/chat-ui
411
Chat-ui crashes TGI?
Hey! When I deploy TGI Endpoint locally and test it with the following cli request: `curl 127.0.0.1:8080/generate_stream \ -X POST \ -d '{"inputs":"def calculate_fibonacci(n:str):","parameters":{"max_new_tokens":100}}' \ -H 'Content-Type: application/json'` It works without any problem. Even lo...
https://github.com/huggingface/chat-ui/issues/411
open
[]
2023-08-22T08:48:02Z
2023-08-23T06:45:26Z
0
schauppi
huggingface/accelerate
1,870
[Question] How to optimize two loss alternately with gradient accumulation?
I want to update a model by optimizing two loss alternately with gradient accumulation like this ```python # Suppose gradient_accumulation is set to 2. optimizer = optim(unet.parameters()) with accelerator.accumulate(unet): outputs = unet(input) loss1 = loss_func1(outputs) loss1.backward() opt...
https://github.com/huggingface/accelerate/issues/1870
closed
[]
2023-08-21T12:49:19Z
2023-10-24T15:06:33Z
null
hkunzhe
huggingface/candle
538
How to disable openssl-sys being included?
I would like to stop openssl-sys from being included in my project when using candle, I'm not sure how to do this. I tried adding the below to my Cargo.toml but it didn't change anything. The reason I want to do it is because I get an error when trying to compile my library to aarch64-linux-android saying that pkg-conf...
https://github.com/huggingface/candle/issues/538
closed
[]
2023-08-21T10:47:26Z
2023-08-21T20:38:57Z
null
soupslurpr
pytorch/pytorch
107,580
Doc is unclear on how to install pytorch with Cuda via pip
### 📚 The doc issue ![image](https://github.com/pytorch/pytorch/assets/35759490/17b506aa-ff3a-40cf-baac-63bb66c486ac) I've been looking on how to install torch with CUDA via pip for almost one day and the doc is absolutely not helping on how to do so. ### Suggest a potential alternative/fix Explain clearly how t...
https://github.com/pytorch/pytorch/issues/107580
open
[ "triaged", "topic: docs" ]
2023-08-21T09:57:56Z
2023-08-22T08:42:08Z
null
MidKnightXI
huggingface/optimum
1,298
Support BetterTransformer for the Baichuan LLM model
### Feature request is it possible to support Baichuan model with BetterTransformer? https://huggingface.co/baichuan-inc/Baichuan-13B-Chat ### Motivation A very popular Chinese and English large language model. ### Your contribution hope you can achieve it. Thanks.
https://github.com/huggingface/optimum/issues/1298
closed
[ "feature-request", "bettertransformer", "Stale" ]
2023-08-21T08:18:16Z
2025-05-04T02:17:22Z
1
BobLiu20
huggingface/candle
533
How to convert token to text?
Hello, thank you for this ML library in Rust. Sorry if this is a noob question, I'm new to machine learning and this is my first time trying to use a text generation model. I'm using the latest git version. In the quantized llama example, how would I convert a token to a string? I see the print_token function but I wan...
https://github.com/huggingface/candle/issues/533
closed
[]
2023-08-21T06:36:08Z
2023-08-21T07:51:37Z
null
soupslurpr
huggingface/safetensors
333
Slow load weight values from a HF model on a big-endian machine with the latest code
### System Info Python: 3.10 PyTorch: the latest main branch (i.e. 2.0.1+) safetensors: 0.3.3 Platform: s390x (big-endian) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Reproduction I executed the following code using 0.3.1 and 0.3.3, and w/o safetensors. ```...
https://github.com/huggingface/safetensors/issues/333
closed
[ "Stale" ]
2023-08-20T18:19:44Z
2023-12-12T01:48:51Z
9
kiszk
huggingface/chat-ui
409
Deploy Chat UI Spaces Docker template with a PEFT adapter
I tried to accomplish this, but the container failed to launch the chat-ui app, as it seems to assume the model would be a non-adapted model. Is there a way to make it work?
https://github.com/huggingface/chat-ui/issues/409
closed
[ "bug", "back" ]
2023-08-20T05:26:50Z
2023-09-11T09:37:29Z
4
lrtherond
huggingface/datasets
6,163
Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
### Describe the bug I am getting the following error while I am trying to upload the CSV sheet to train a model. My CSV sheet content is exactly same as shown in the example CSV file in the Auto Train page. Attaching screenshot of error for reference. I have also tried converting the index of the answer that are inte...
https://github.com/huggingface/datasets/issues/6163
open
[]
2023-08-19T11:34:40Z
2025-07-22T12:04:46Z
2
shishirCTC
huggingface/sentence-transformers
2,278
How to set the no. of epochs for fine-tuning SBERT?
Hello, I am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. There is no loss value posted by the `fit` function from the package. Any idea how to know if the model is overfitting or underfitting the dataset after each epoch? This could help me in deciding the appropriate no. of ep...
https://github.com/huggingface/sentence-transformers/issues/2278
open
[]
2023-08-18T18:14:05Z
2024-01-29T17:00:13Z
null
power-puff-gg
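For the question above: in the sentence-transformers v2 API the epoch count is the `epochs` argument of `model.fit(...)`, and passing an `evaluator` yields a per-epoch validation score even though `fit` logs no training loss (hedged: check against the installed version). Watching that score for overfitting is generic bookkeeping:

```python
def best_epoch(val_scores, patience=2):
    """Given per-epoch validation scores (higher is better), return
    (best_epoch_index, epoch_at_which_to_stop). Simple early-stopping
    bookkeeping: stop once `patience` epochs pass without improvement."""
    best_i, best = 0, float("-inf")
    for i, s in enumerate(val_scores):
        if s > best:
            best_i, best = i, s
        elif i - best_i >= patience:
            return best_i, i  # no improvement for `patience` epochs
    return best_i, len(val_scores) - 1
```

For example, `best_epoch([0.70, 0.74, 0.75, 0.74, 0.73, 0.72])` flags index 2 as the peak; once the evaluator score declines past it, the model is overfitting and a smaller `epochs` value is appropriate.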
huggingface/setfit
409
model_head.pkl not found on HuggingFace Hub
I got the message: "model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference." Is there something missing, or is it normal?
https://github.com/huggingface/setfit/issues/409
closed
[ "question" ]
2023-08-18T07:52:20Z
2023-11-24T14:20:51Z
null
andysingal
huggingface/autotrain-advanced
216
How to do inference after training llama2
i trained model using this command ``` autotrain llm --train --project_name 'llama2-indo-testing' \ --model meta-llama/Llama-2-7b-hf \ --data_path data/ \ --text_column text \ --use_peft \ --use_int4 \ --learning_rate 2e-4 \ --train_batch_size 2 \ --num_train_epochs 3 \ --...
https://github.com/huggingface/autotrain-advanced/issues/216
closed
[]
2023-08-18T04:36:37Z
2023-12-18T15:30:38Z
null
muhammadfhadli1453
huggingface/diffusers
4,662
How to call a different scheduler when training a model from repo
I notice that the settings in train_dreambooth_lora_sdxl.py and the scheduler config from the repo seem to conflict. In the .py the noise scheduler is DDPM but whenever training starts it seems to still indicate that I am using the repo config scheduler, ie. EulerDiscreteScheduler. It used to be you could specify sched...
https://github.com/huggingface/diffusers/issues/4662
closed
[]
2023-08-17T21:40:10Z
2023-08-18T04:18:11Z
null
jmaccall316
huggingface/transformers
25,576
How can I make a PR for AutoTokenizer to adapt RWKV World
### Feature request Usually we use our own tokenizer with the transformer pipeline, like this https://github.com/xiaol/Huggingface-RWKV-World/blob/fca236afd5f2815b0dbe6c7ce3c92e51526e2e14/generate_hf_cfg.py#L79C1-L79C1 So far we have a lot of models using a new tokenizer; using the pipeline with AutoTokenizer is critica...
https://github.com/huggingface/transformers/issues/25576
closed
[]
2023-08-17T16:36:44Z
2023-09-25T08:02:43Z
null
xiaol
huggingface/accelerate
1,854
How to further accelerate training with 24 cards for 1.3b+ models using accelerate?
I found that when using DeepSpeed Zero (2 or 3) to train 1.3 billion and larger models (such as llama-7b or gpt-neo-1.3b), the training time for 8 * 32G V100 is almost the same as 24 * 32G V100 (I guess it's because of the additional communication overhead introduced by DeepSpeed). Is there any way to further accelerat...
https://github.com/huggingface/accelerate/issues/1854
closed
[]
2023-08-17T15:01:09Z
2023-09-24T15:05:52Z
null
Micheallei
huggingface/datasets
6,156
Why not use self._epoch as seed to shuffle in distributed training with IterableDataset
### Describe the bug Currently, distributed training with `IterableDataset` needs to pass fixed seed to shuffle to keep each node use the same seed to avoid overlapping. https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177 My question ...
https://github.com/huggingface/datasets/issues/6156
closed
[]
2023-08-17T10:58:20Z
2023-08-17T14:33:15Z
3
npuichigo
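A plain-Python sketch of the behavior the issue above is asking about (an illustration, not the library's actual code): combining a fixed seed with the epoch counter keeps every node on the same permutation while still reshuffling between epochs. In 🤗 Datasets this is what `IterableDataset.set_epoch(epoch)` feeds into the shuffle, so a hard-coded seed alone is not required:

```python
import random

def epoch_shuffle(items, seed, epoch):
    """Deterministic per-epoch shuffle: every node using the same
    (seed, epoch) pair derives the same permutation, so distributed shards
    stay aligned, while the order still changes from epoch to epoch."""
    shuffled = list(items)
    random.Random(seed + epoch).shuffle(shuffled)
    return shuffled
```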
huggingface/diffusers
4,643
When I load a ControlNet model, where is the inference code?
I have read the ControlNet code in diffusers/models/controlnet.py, but when I load a ControlNet weight, where is the inference code? Thanks
https://github.com/huggingface/diffusers/issues/4643
closed
[]
2023-08-17T02:50:59Z
2023-08-17T04:55:28Z
null
henbucuoshanghai
huggingface/dataset-viewer
1,689
Handle breaking change in google dependency?
See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616 Should we downgrade the dependency, or fix the datasets?
https://github.com/huggingface/dataset-viewer/issues/1689
closed
[ "question", "dependencies", "P2" ]
2023-08-16T14:31:28Z
2024-02-06T14:59:59Z
null
severo
huggingface/optimum
1,286
Support BetterTransformer for the GeneFormer model
### Feature request is it possible to support GeneFormer model with BetterTransformer? https://huggingface.co/ctheodoris/Geneformer ### Motivation It's a new paper with an active community in the Hugging Face repository. The training and inference speed is not fast enough. ### Your contribution Nothing at this ti...
https://github.com/huggingface/optimum/issues/1286
closed
[ "feature-request", "bettertransformer", "Stale" ]
2023-08-16T03:32:48Z
2025-05-07T02:13:16Z
1
seyedmirnezami
pytorch/torchx
753
Feature: Support for Multiple NodeSelectors and Tolerations in TorchX for Kubernetes
## Description <!-- concise description of the feature/enhancement --> I’m currently working with TorchX in conjunction with Volcano scheduling for my training jobs on an Amazon EKS cluster. I’ve also integrated Karpenter autoscaler for effective node scaling. Additionally, I’m using managed node groups with labele...
https://github.com/meta-pytorch/torchx/issues/753
open
[]
2023-08-15T21:55:30Z
2023-08-15T22:02:33Z
0
vara-bonthu
pytorch/pytorch
107,238
How to export GNN with dict inputs correctly?
## Problem description I am having an issue when exporting of PyTorch GNN model to ONNX. Here is my export code: ``` torch.onnx.export( model=model, args=(x_dict, edge_index_dict, edge_attr_dict, {}), f=save_path, verbose=False, input_names=["x_dict", "edge_index_dict", "edge_attr_dict"]...
https://github.com/pytorch/pytorch/issues/107238
closed
[ "module: onnx", "triaged" ]
2023-08-15T15:43:12Z
2024-03-27T21:47:06Z
null
emnigma
huggingface/diffusers
4,618
How to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0 ?
I want to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0 I downloaded dreamshaperXL10_alpha2Xl10.safetensors file and tried to use : pipe = StableDiffusionXLControlNetPipeline.from_pretrained( './dreamshaperXL10_alpha2Xl10.safetensors', controlnet=controlnet, use_safetensors=True, to...
https://github.com/huggingface/diffusers/issues/4618
closed
[]
2023-08-15T13:44:54Z
2023-08-22T01:31:37Z
null
arnold408
pytorch/pytorch
107,225
Is pytorch version 1.10.2 still maintained? What is the official EOM(End of Maintenance) date?
### 🐛 Describe the bug Is pytorch version 1.10.2 still maintained? What is the official EOM(End of Maintenance) date? ### Versions pytorch v1.10.2 cc @seemethere @malfet @svekars @carljparker
https://github.com/pytorch/pytorch/issues/107225
closed
[ "module: binaries", "module: docs", "oncall: releng", "triaged" ]
2023-08-15T12:36:25Z
2023-08-15T18:49:57Z
null
reBiocoder
pytorch/benchmark
1,825
how to run torchbenchmark in dynamo mode
Hi, 1. I want to test benchmark in dynamo mode, how can I run test_bench.py script? 2. When I add code: `self.model = torch.compile(self.model)` in BERT_pytorch __init__.py, then run: `pytest test_bench.py -k "test_train[BERT_pytorch-cuda-eager]" --ignore_machine_config --benchmark-autosave`, it raises below...
https://github.com/pytorch/benchmark/issues/1825
closed
[]
2023-08-15T12:12:20Z
2023-08-16T05:46:53Z
null
Godlovecui
huggingface/peft
826
What is alpha? alpha is not in the paper.
### Feature request https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py#L57 This alpha is not in the paper: https://arxiv.org/abs/2106.09685 Where can I learn about this alpha? Thank you! ### Motivation See title. ### Your contribution See title.
https://github.com/huggingface/peft/issues/826
closed
[]
2023-08-15T09:47:58Z
2023-09-23T15:03:19Z
null
XuJianzhi
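For reference on the question above: the LoRA paper does discuss this constant in Section 4.1, where the low-rank update ΔW = BA is scaled by α/r, and `lora_alpha` in peft is that α (hedged from memory of the paper and the peft source). A one-line sketch of the scaling, with a plain float standing in for the B @ A matrix product:

```python
def lora_delta(update, lora_alpha: float, r: int):
    """LoRA applies W + (alpha / r) * (B @ A); `update` stands in for B @ A.
    alpha / r is the scaling that the `lora_alpha` hyperparameter controls,
    so doubling alpha doubles the strength of the adapter."""
    return (lora_alpha / r) * update

# alpha == r means the raw low-rank update is applied unscaled:
assert lora_delta(1.0, lora_alpha=8, r=8) == 1.0
```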
huggingface/optimum
1,285
Merge patch into autogptq
### Feature request Currently, there is a patch to get GPTQ quantization working: ``` # !pip install -q git+https://github.com/fxmarty/AutoGPTQ.git@patch-act-order-exllama ``` Is there a plan to try and merge that into the autogptq repo? ### Motivation autogptq is slow to install. This is easily solved by usin...
https://github.com/huggingface/optimum/issues/1285
closed
[]
2023-08-14T16:24:14Z
2023-08-23T17:17:46Z
5
RonanKMcGovern
pytorch/pytorch
107,146
[libtorch C++] How to make a libtorch model train and infer in a distributed setup? Please show me a tutorial or example
### 🐛 Describe the bug Hi, for libtorch I found the distributed package, but I don't know how to declare the distributed parameters to make a libtorch model train and infer on distributed machines. We need your team's help. Please show me one example of distributed model-training code, thanks. ### Versions libtorch 2.0
https://github.com/pytorch/pytorch/issues/107146
closed
[]
2023-08-14T15:54:56Z
2023-08-14T18:34:09Z
null
mullerhai
huggingface/candle
443
What is the minimal requirements of Intel MKL version?
Hello, Thanks for the great work! I've got an error while compiling with the `-features mkl` option. For example `cargo install --git https://github.com/huggingface/candle.git candle-examples --examples bert -F mkl` The error said ```bash = note: /usr/bin/ld: /workspaces/Kuberian/searcher/target/debug/deps/...
https://github.com/huggingface/candle/issues/443
closed
[]
2023-08-14T14:09:01Z
2024-02-03T16:43:34Z
null
iwanhae
huggingface/pytorch-image-models
1,917
how to change SqueezeExcite in efficientnet
I want to create EfficientNet networks using timm where SqueezeExcite contains three parts ['Conv2d','SiLU','Conv2d'], but it currently contains four parts ['Conv2d','SiLU','Conv2d','sigmoid']. How should I modify it? Thank you
https://github.com/huggingface/pytorch-image-models/issues/1917
closed
[ "enhancement" ]
2023-08-14T11:45:05Z
2023-08-14T14:13:26Z
null
Yang-Changhui
huggingface/setfit
408
No tutorial or guideline for Few-shot learning on multiclass text classification
I just want to use SBERT for few-shot multiclass text classification, however I couldn't see any tutorial or explanation for it. Can you explain which "multi_target_strategy" and loss function I should use for multi-class text classification?
https://github.com/huggingface/setfit/issues/408
open
[ "documentation", "question" ]
2023-08-14T09:02:18Z
2023-10-03T20:29:25Z
null
ByUnal
huggingface/diffusers
4,594
latents.requires_grad is false in my custom pipeline no matter what.
Hi, in my quest to make a flexible pipeline that can easily add new features instead of creating a pipeline for every variation, I made the following: ``` class StableDiffusionRubberPipeline(StableDiffusionPipeline): call_funcs=[] def __init__( self, vae: AutoencoderKL, text_enc...
https://github.com/huggingface/diffusers/issues/4594
closed
[]
2023-08-13T15:02:22Z
2023-08-14T12:11:36Z
null
alexblattner
huggingface/datasets
6,153
custom load dataset to hub
### System Info Kaggle notebook. I transformed the dataset: ``` dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt") ``` to formatted_dataset: ``` Dataset({ features: ['message_tree_id', 'message_tree_text'], num_rows: 33143 }) ``` but would like to know how to upload it to the hub ### ...
https://github.com/huggingface/datasets/issues/6153
closed
[]
2023-08-13T04:42:22Z
2023-11-21T11:50:28Z
5
andysingal
huggingface/chat-ui
398
meta-llama/Llama-2-7b-chat-hf requires a pro subscription?
I ran the instructions to run locally, and ran into this. I've been working on my own ui, and thought I'd give this a shot, and if that's the route huggingface is going, I find that very disappointing. I was expecting the model to be hosted locally and routed through fastapi or something
https://github.com/huggingface/chat-ui/issues/398
closed
[]
2023-08-12T03:56:55Z
2023-08-12T04:03:11Z
1
thistleknot
huggingface/chat-ui
397
Dynamically adjust `max_new_tokens`
Hi, I am running a 4096 context length model behind TGI interface. My primary use case is summarization wherein some of my requests can be quite large. I have set `truncate` to 4000 and that leaves `max_new_tokens` to be at most 4096-4000=96. So, even if my input length is not 4000 tokens long, say it is only ...
https://github.com/huggingface/chat-ui/issues/397
open
[ "question", "back" ]
2023-08-11T16:37:10Z
2023-09-18T12:49:49Z
null
abhinavkulkarni
huggingface/chat-ui
396
Long chat history
How do you manage a long chat history? Do you truncate the history at some point and call the API only with the most recent messages?
https://github.com/huggingface/chat-ui/issues/396
closed
[ "question" ]
2023-08-11T15:52:43Z
2023-09-18T12:50:07Z
null
keidev
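A hedged sketch of one common strategy for the question above (not necessarily what chat-ui actually does): keep the system prompt plus the longest suffix of recent messages that fits a token budget. Word counting stands in for real tokenization here, and the message shape is an assumption:

```python
def truncate_history(messages, budget,
                     count_tokens=lambda m: len(m["content"].split())):
    """Keep the first (system) message plus the longest suffix of recent
    messages that fits `budget`. Token counting is approximated by words;
    a real deployment would use the model's tokenizer instead."""
    if not messages:
        return []
    system, rest = messages[0], messages[1:]
    budget -= count_tokens(system)
    kept = []
    for msg in reversed(rest):  # walk back from the newest message
        cost = count_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [system] + list(reversed(kept))
```

The API is then called with only the truncated list, so the oldest turns silently drop out once the context window fills up.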
huggingface/trl
638
How many and what kind of GPUs are needed to run the examples?
For every script or project in the example directory, could you please tell us how many and what kind of GPUs are needed to run the experiments? Thanks a lot.
https://github.com/huggingface/trl/issues/638
closed
[]
2023-08-11T14:12:34Z
2023-09-11T08:22:33Z
null
Wallace-222
huggingface/chat-ui
395
Errors out every time I try to add a new model
I'm currently having a huge issue. I'm trying to easily add models to the chat ui. I have made a folder and added a specific model to that folder, but I'm unable to actually get to use that model. I'm not sure what I'm doing wrong; I've stared at the docs for a few hours, re-reading, and also looked it up on YouTube but...
https://github.com/huggingface/chat-ui/issues/395
closed
[ "support" ]
2023-08-11T12:55:03Z
2023-09-11T09:35:55Z
3
Dom-Cogan
huggingface/dataset-viewer
1,662
Should we change 500 to another status code when the error comes from the dataset?
See #1661 for example. Same for the "retry later" error: is 500 the most appropriate status code?
https://github.com/huggingface/dataset-viewer/issues/1662
open
[ "question", "api", "P2" ]
2023-08-10T15:57:03Z
2023-08-14T15:36:27Z
null
severo
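One convention worth weighing for the question above (hypothetical here, not necessarily what dataset-viewer adopts) is to reserve 5xx for the service's own failures and report dataset-caused errors as a 4xx, since the user can fix those, while transient "retry later" failures map naturally to 503:

```python
class DatasetError(Exception):
    """Error caused by the dataset contents (user-fixable)."""

class RetryableError(Exception):
    """Transient error; the client should retry later."""

def status_code_for(error: Exception) -> int:
    """Pick an HTTP status code based on where the error originated.

    422 Unprocessable Content for dataset-side problems, 503 Service
    Unavailable (ideally with a Retry-After header) for transient
    failures, and 500 for genuine server bugs.
    """
    if isinstance(error, DatasetError):
        return 422
    if isinstance(error, RetryableError):
        return 503
    return 500
```

The payoff is that monitoring on 5xx rates then only fires for real service problems, not for malformed user datasets.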
huggingface/datasets
6,139
Offline dataset viewer
### Feature request The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the Hub. Is there a way to create a dataset viewer offline? I.e. to run code that will open some kind of HTML or something t...
https://github.com/huggingface/datasets/issues/6139
closed
[ "enhancement", "dataset-viewer" ]
2023-08-10T11:30:00Z
2024-09-24T18:36:35Z
7
yuvalkirstain
huggingface/text-generation-inference
807
How to create a NCCL group on Kubernetes?
I am deploying text-generation-inference on EKS with each node having 1 NVIDIA A10G GPU. How should I create a group such that a model like llama-2-13b-chat is able to use GPUs across nodes for inference?
https://github.com/huggingface/text-generation-inference/issues/807
closed
[ "Stale" ]
2023-08-10T09:29:59Z
2024-04-17T01:45:28Z
null
rsaxena-rajat
pytorch/kineto
799
pytorch.profiler cannot profile aten:mm on GPU
I use pytorch.profiler to profile a matmul program on GPU, and it seems the profiler does not record aten::mm correctly. There are stats in the GPU Kernel view, <img width="2118" alt="image" src="https://github.com/pytorch/kineto/assets/11534916/dc126d48-1517-4af2-9200-8fd37aeaa6a4"> but no GPU kernel stats in the Trace view. ...
https://github.com/pytorch/kineto/issues/799
closed
[ "question", "plugin" ]
2023-08-10T08:13:15Z
2024-04-23T15:50:55Z
null
scse-l
huggingface/chat-ui
394
Internal server error: Unexpected token ] in JSON at position 1090
1:58:23 AM [vite] Error when evaluating SSR module /src/lib/server/models.ts: |- SyntaxError: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks...
https://github.com/huggingface/chat-ui/issues/394
closed
[ "support" ]
2023-08-10T02:01:49Z
2023-09-11T09:36:29Z
2
Ichigo3766
pytorch/xla
5,424
How can I use torch_xla fsdp with AMP on GPU?
## ❓ Questions and Help Hello, how can I use torch_xla FSDP with AMP on GPU? Does torch_xla FSDP support AMP? I've read the following code carefully. Can I forcibly fuse them together? test/test_train_mp_imagenet_fsdp.py test/test_train_mp_imagenet_amp.py Thanks.
https://github.com/pytorch/xla/issues/5424
closed
[ "question", "distributed" ]
2023-08-09T08:21:40Z
2025-04-29T13:58:58Z
null
Pluto1944
huggingface/trl
627
how to use Reward model?
How do I use a reward model in the RLHF PPO stage? Could you provide an example? Thank you very much.
https://github.com/huggingface/trl/issues/627
closed
[]
2023-08-09T02:52:23Z
2023-08-12T02:04:17Z
null
zhuxiaosheng
huggingface/transformers.js
243
QW
Hi Joshua, how are you doing? I hope everything's good. I just wanted to ask if you know anybody who needs help or has issues with their Node.js backend code or their servers; it would be a great pleasure to help.
https://github.com/huggingface/transformers.js/issues/243
closed
[ "question", "off-topic" ]
2023-08-08T21:46:13Z
2023-08-09T19:55:55Z
null
jedLahrim
huggingface/peft
808
What is the correct way to apply LoRA on a custom model (not models on HuggingFace)?
Hi, most models in the examples are `transformers` pretrained models. However, I'm using a custom model and applying LoRA to it: ``` model = MyPytorchModel() model = PeftModel(model, peft_config) ======= training... ======== model.save_pretrained(save_path) ``` Then, I reload my custom model and merge the LoRA weights: ...
https://github.com/huggingface/peft/issues/808
closed
[]
2023-08-08T17:10:36Z
2025-08-01T21:14:25Z
null
DtYXs
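Independent of PEFT's API, what a correct merge has to produce is fixed by the LoRA math: the merged weight is W + (alpha/r)·BA. A from-scratch NumPy sketch (not PEFT code; shapes and values are made up) showing that the merged matrix reproduces the adapter forward pass exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4.0          # hidden size, LoRA rank, scaling numerator

W = rng.normal(size=(d, d))      # frozen base weight
A = rng.normal(size=(r, d))      # trainable down-projection
B = rng.normal(size=(d, r))      # trainable up-projection (pretend trained;
                                 # real LoRA initialises B to zero)

def lora_forward(x):
    """Base layer output plus the low-rank update, scaled by alpha / r."""
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

def merge():
    """Fold the adapter into the base weight for inference."""
    return W + (B @ A) * (alpha / r)

x = rng.normal(size=(3, d))
# The merged weight must reproduce the adapter forward pass exactly.
assert np.allclose(lora_forward(x), x @ merge().T)
```

Checking this identity on your own reloaded model is a quick way to confirm the adapter was actually merged, whatever wrapper class produced the weights.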
huggingface/diffusers
4,533
How to debug custom pipeline locally ?
Hi, I build diffusers from source, and I am using ControlNet. However, diffusers seems not to load the custom pipeline from ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` as I expected. Instead, it seems to download from the hub and cache a new ```stable_diffusion_controlnet_img2img.py`...
https://github.com/huggingface/diffusers/issues/4533
closed
[]
2023-08-08T15:34:40Z
2023-08-09T12:17:42Z
null
pansanity666
huggingface/setfit
405
how to set the device id
How do I run multiple training runs on different GPU devices? I don't see any argument which allows me to set this. Thank you!
https://github.com/huggingface/setfit/issues/405
open
[]
2023-08-08T08:25:36Z
2023-08-08T08:25:36Z
null
vahuja4
pytorch/android-demo-app
331
What is the IValue type? Is it a Tensor?
What is the difference between IValue and Tensor? Could you please share some references? Thanks.
https://github.com/pytorch/android-demo-app/issues/331
open
[]
2023-08-08T00:30:45Z
2023-08-08T00:30:45Z
null
NeighborhoodCoding
huggingface/transformers.js
239
[Question] Adding Custom or Unused Token
Is it possible to add a custom range as a token? For example, for a price list of $100-$200, can we add a custom entry like this to the vocab list? vocab list: nice hello __$100-$200__ fish ...
https://github.com/huggingface/transformers.js/issues/239
closed
[ "question" ]
2023-08-07T18:32:20Z
2023-08-07T20:38:15Z
null
hadminh
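At the tokenizer level this comes down to appending one vocab entry and growing the embedding matrix by one row, which is what `tokenizer.add_tokens` plus `model.resize_token_embeddings` do together in the Python `transformers` library. A toy NumPy sketch of those mechanics (illustrative, not transformers.js code):

```python
import numpy as np

def add_token(vocab, emb, token):
    """Append a token id and one freshly initialised embedding row.

    Existing rows are untouched, mirroring what tokenizer.add_tokens +
    model.resize_token_embeddings do together in transformers.
    """
    if token in vocab:
        return vocab, emb, vocab[token]
    new_id = len(vocab)
    vocab = {**vocab, token: new_id}
    new_row = np.random.default_rng(new_id).normal(size=(1, emb.shape[1]))
    return vocab, np.vstack([emb, new_row]), new_id

vocab = {"nice": 0, "hello": 1, "fish": 2}
emb = np.zeros((len(vocab), 4))              # (vocab size, hidden dim)
vocab, emb, tid = add_token(vocab, emb, "$100-$200")
```

The new row starts untrained, which is why added tokens normally need some fine-tuning before they carry useful meaning.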
huggingface/chat-ui
390
Can I hook it up to a retrieval system for a document chatbot?
I want to use the instructor-xl text embedding model and use FAISS to create and retrieve from a vector store. Sort of a chatbot for documents or a domain specific chatbot. Any ideas on how I can do it?
https://github.com/huggingface/chat-ui/issues/390
open
[]
2023-08-07T15:22:10Z
2024-02-22T12:55:41Z
9
adarshxs
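The retrieval half of that pipeline is small enough to sketch without FAISS: normalise the document embeddings once, and a dot product then ranks by cosine similarity (the same thing `faiss.IndexFlatIP` gives you on normalised vectors). The embedding values below are made up for illustration:

```python
import numpy as np

def build_index(doc_embeddings):
    """Normalise rows so a dot product equals cosine similarity."""
    norms = np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    return doc_embeddings / norms

def retrieve(index, query_embedding, k=2):
    """Return the indices of the k most similar documents."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q
    return np.argsort(-scores)[:k].tolist()

docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
index = build_index(docs)
top = retrieve(index, np.array([1.0, 0.05]), k=2)
```

For a real document chatbot, rows of `docs` would come from an embedding model such as instructor-xl, and the retrieved chunks would be stuffed into the chat model's prompt.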
huggingface/diffusers
4,507
How to train stable-diffusion-xl-base-1.0 without lora?
Hi, I want to train `stable-diffusion-xl-base-1.0` without LoRA; how can I do this? I can run `train_text_to_image_lora_sdxl.py`, but `train_text_to_image.py` with `MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"` will raise an error: ``` diffusers/models/unet_2d_condition.py:836 in forward ...
https://github.com/huggingface/diffusers/issues/4507
closed
[]
2023-08-07T10:38:24Z
2023-08-14T07:25:49Z
null
KimmiShi
huggingface/text-generation-inference
782
What is the correct parameter combination for using dynamic RoPE scaling ?
Hi Team, First of all thanks for the awesome piece of software !! I want to use `upstage/Llama-2-70b-instruct-v2` model with `--max-input-length=8192 --max-total-tokens=10240` which originally supports `max_position_embeddings=4096`. I tried running the following command : ``` docker run -it --rm --gpus all...
https://github.com/huggingface/text-generation-inference/issues/782
closed
[]
2023-08-07T05:58:14Z
2023-09-06T13:59:36Z
null
hrushikesh198
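For reference, the "dynamic" RoPE variant only rescales once the sequence outgrows the trained window, so short requests are unaffected. A sketch of the base-rescaling formula as implemented (at the time of writing) in transformers' dynamic NTK scaling; treat the exact exponent as an assumption to verify against the version you deploy:

```python
def dynamic_ntk_base(seq_len: int,
                     max_position_embeddings: int = 4096,
                     base: float = 10000.0,
                     dim: int = 128,
                     factor: float = 2.0) -> float:
    """Rescaled rotary base under dynamic NTK scaling.

    Below the trained context length the base is untouched; beyond it,
    the base grows so positional frequencies stretch to cover the
    longer sequence.
    """
    if seq_len <= max_position_embeddings:
        return base
    scale = (factor * seq_len / max_position_embeddings) - (factor - 1)
    return base * scale ** (dim / (dim - 2))
```

This is why combining `--max-input-length=8192` with a 4096-trained model needs a rope-scaling setting: without it, positions past 4096 were simply never seen during training.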
huggingface/transformers.js
238
[Question] Can you list all available models using tranformers.js?
Hey 👋 I was wondering if it's possible to list available models using the `transformers.js` package? e.g. > pipeline.getAvailableModels()
https://github.com/huggingface/transformers.js/issues/238
closed
[ "question" ]
2023-08-07T01:53:35Z
2023-08-13T23:27:55Z
null
sambowenhughes
huggingface/chat-ui
389
Inject an assistant message at the beginning of the chat
Hey, is it possible to start a conversation with an assistant message showing up as the first message in the chat?
https://github.com/huggingface/chat-ui/issues/389
closed
[ "enhancement", "question" ]
2023-08-06T17:25:25Z
2023-09-18T12:52:16Z
null
matankley
huggingface/diffusers
4,494
How to convert a diffuser pipeline of XL to checkpoint or safetensors
I need to fine-tune the stable diffusion UNet or something like that, and then convert the pipeline into a ckpt for webui usage. Previously I used `scripts/convert_diffusers_to_original_stable_diffusion.py` for the conversion, but currently it cannot convert the XL pipeline correctly, and the webui may raise bugs. Thanks i...
https://github.com/huggingface/diffusers/issues/4494
closed
[ "stale", "contributions-welcome" ]
2023-08-06T13:06:54Z
2023-11-06T04:42:19Z
null
FeiiYin
huggingface/chat-ui
388
Is it down?
It doesn't load for me, and neither does your website.
https://github.com/huggingface/chat-ui/issues/388
closed
[]
2023-08-06T08:54:47Z
2023-08-08T06:05:48Z
6
BenutzerEinsZweiDrei
huggingface/transformers.js
237
[Question] Ipynb for ONNX conversion?
Could you please share the code you're using to convert models to onnx? I know you say in your cards you're using Optimum, but when I try to do it myself, I get much larger onnx files (talking about disk space here) and I don't know what I'm doing wrong.
https://github.com/huggingface/transformers.js/issues/237
closed
[ "question" ]
2023-08-06T08:45:19Z
2023-08-06T09:17:02Z
null
Mihaiii
huggingface/transformers.js
233
[Docs] Mention demo (GitHub pages) in Readme
I love your old demo page on GitHub Pages (https://xenova.github.io/transformers.js/), as one can easily play with the models and copy code if needed. Is there any reason it's no longer mentioned (or less visible) in the Readme? (Sorry, I added the bug label accidentally; it should be question instead.)
https://github.com/huggingface/transformers.js/issues/233
closed
[ "question" ]
2023-08-04T10:53:48Z
2023-12-06T15:01:38Z
null
do-me
pytorch/text
2,197
Does DataLoader(shuffle=True) really shuffle DBpedia dataset correctly?
According to [the docs][1], DBpedia dataset has 14 classes (labels) and 40000 texts for each class. Hence, if I create batches using `DataLoader(shuffle=True)` as follows: ```python import torchtext.datasets as d from torch.utils.data.dataloader import DataLoader train = DataLoader( d.DBpedia(split="train"...
https://github.com/pytorch/text/issues/2197
open
[]
2023-08-04T10:34:52Z
2023-08-04T10:37:18Z
0
fujidaiti
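The concern above is testable without downloading anything: simulate a class-sorted dataset and compare one batch with and without shuffling. (This only exercises the index-shuffling idea behind `DataLoader(shuffle=True)` on a map-style dataset; DBpedia in torchtext is an iterable datapipe, where shuffling works through a buffer instead, so verify on the real dataset too.)

```python
import random

# Simulate a class-sorted dataset like DBpedia: 14 classes, all samples
# of class 0 first, then class 1, and so on.
labels = [c for c in range(14) for _ in range(40000)]

def first_batch(labels, batch_size=64, shuffle=True, seed=0):
    """Mimic a DataLoader: optionally shuffle indices, take one batch."""
    indices = list(range(len(labels)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    return [labels[i] for i in indices[:batch_size]]

unshuffled = first_batch(labels, shuffle=False)   # all one class
shuffled = first_batch(labels, shuffle=True)      # mixed classes
```

If the first batch from the real pipeline still shows a single label, the shuffle is not being applied where you think it is.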
pytorch/text
2,196
torchtext.datasets - requests.exceptions.ConnectionError
## 🐛 Bug **Description of the bug** When I try to use Multi30k dataset, I get this error: ``` requests.exceptions.ConnectionError: This exception is thrown by __iter__ of HTTPReaderIterDataPipe(skip_on_error=False, source_datapipe=OnDiskCacheHolderIterDataPipe, timeout=None) ``` **To Reproduce** ``` ...
https://github.com/pytorch/text/issues/2196
open
[]
2023-08-04T09:25:28Z
2024-01-11T07:53:51Z
2
afurkank
huggingface/datasets
6,120
Lookahead streaming support?
### Feature request From what I understand, streaming dataset currently pulls the data, and process the data as it is requested. This can introduce significant latency delays when data is loaded into the training process, needing to wait for each segment. While the delays might be dataset specific (or even mappi...
https://github.com/huggingface/datasets/issues/6120
open
[ "enhancement" ]
2023-08-04T04:01:52Z
2023-08-17T17:48:42Z
1
PicoCreator
huggingface/diffusers
4,459
How to convert a picture to a text embedding, without training an image model like Textual Inversion
CLIP text: tokens -> text_embedding -> text_features. CLIP image: img -> img_embedding -> img_features. How to invert without training every time: img -> text_embedding?
https://github.com/huggingface/diffusers/issues/4459
closed
[ "stale" ]
2023-08-04T01:46:25Z
2023-09-12T15:03:45Z
null
yanchaoguo
huggingface/datasets
6,116
[Docs] The "Process" how-to guide lacks description of `select_columns` function
### Feature request The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the gui...
https://github.com/huggingface/datasets/issues/6116
closed
[ "enhancement" ]
2023-08-03T13:45:10Z
2023-08-16T10:02:53Z
null
unifyh
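For readers of the same guide, the semantics are easy to pin down: `select_columns` is the complement of `remove_columns`, often clearer than listing every column you want to drop. A plain-Python sketch of the behaviour (illustrative, not the datasets implementation):

```python
def select_columns(rows, column_names):
    """Keep only the named columns, in the given order.

    Raises on unknown names, like the datasets method does, rather
    than silently returning partial rows.
    """
    missing = set(column_names) - set(rows[0]) if rows else set()
    if missing:
        raise KeyError(f"unknown columns: {sorted(missing)}")
    return [{name: row[name] for name in column_names} for row in rows]

table = [{"text": "hi", "label": 1, "idx": 0},
         {"text": "yo", "label": 0, "idx": 1}]
subset = select_columns(table, ["text", "label"])
```

On a real `datasets.Dataset` the call would be `ds.select_columns(["text", "label"])`.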
pytorch/TensorRT
2,167
❓ [Question] Is an INT8 calibrator specific to a given model, or just to a dataset?
## ❓ Question Is an INT8 calibrator specific to a given model, or just to a dataset? INT8 calibrators can be cached to accelerate further use, which is nice. However, it's not clear from the documentation whether the cached calibrator can only be used to calibrate the model it was used for during TensorRT conversion...
https://github.com/pytorch/TensorRT/issues/2167
closed
[ "question" ]
2023-08-03T11:38:16Z
2023-08-15T19:53:12Z
null
laclouis5
huggingface/diffusers
4,453
How to convert diffusers SDXL lora into safetensors that works with AUTO1111 webui
### Describe the bug I trained a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py I get great results when using the output .bin with the diffusers inference code. How can I convert the .bin to .safetensors that can be loa...
https://github.com/huggingface/diffusers/issues/4453
closed
[ "bug", "stale" ]
2023-08-03T11:23:25Z
2023-09-12T15:03:46Z
null
wangqyqq
huggingface/text-generation-inference
765
How to benchmark a warmed local model by docker
### System Info Using `docker run` to connect to a local model, and it worked: `docker run --rm --name tgi --runtime=nvidia --gpus all -p 5001:5001 -v data/nfs/gdiist/model:/data k8s-master:5000/text-generation-inference:0.9.3 --model-id /data/llama-7b-hf --hostname 0.0.0.0 --port 5001 --dtype float16 ` ``` 2...
https://github.com/huggingface/text-generation-inference/issues/765
closed
[]
2023-08-03T09:28:07Z
2023-10-16T01:50:10Z
null
Laych7
huggingface/diffusers
4,448
Outpainting results from diffusers' StableDiffusionControlNetPipeline is much worse than those from A1111 webui. How to improve?
I am trying to outpaint some human images (mainly the lower-body part) with SD 1.5 conditioned on ControlNet's inpainting and openpose. I have been using A1111 webui with ControlNet extension and it has been working quite well: Here are my settings in the webui: <img width="774" alt="Screenshot 2023-08-03 at 15 08 30...
https://github.com/huggingface/diffusers/issues/4448
closed
[]
2023-08-03T07:19:12Z
2023-08-30T05:35:03Z
null
xiyichen
huggingface/transformers
25,280
How to download files from HF spaces
### System Info google colab ### Who can help? @sanchit-gandhi @rock ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproductio...
https://github.com/huggingface/transformers/issues/25280
closed
[]
2023-08-03T07:02:03Z
2023-09-11T08:02:40Z
null
andysingal
huggingface/diffusers
4,445
How to fine-tune a LoRA model?
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] If I have a model from Civitai, how can I fine-tune it on SD 1.5 and SDXL? **Describe the solution you'd like** A clear and concise description of what you w...
https://github.com/huggingface/diffusers/issues/4445
closed
[ "stale" ]
2023-08-03T01:55:15Z
2023-09-12T15:03:49Z
null
kelisiya
pytorch/torchx
749
Passing additional build arguments to Dockerfile.torchx
## ❓ Questions and Help ### Please note that this issue tracker is not a help form and this issue will be closed. Before submitting, please ensure you have gone through our [documentation](https://pytorch.org/torchx). ### Question Use case: My team uses torchx to submit the job to remote scheduler such ...
https://github.com/meta-pytorch/torchx/issues/749
open
[]
2023-08-02T20:05:02Z
2023-10-04T22:35:48Z
4
anjali-chadha