| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 490 | Is it possible to implement sentence splitting? | ### Question
Can this library be used to implement sentence splitting, possibly with tokenizers? | https://github.com/huggingface/transformers.js/issues/490 | closed | [
"question"
] | 2023-12-30T01:17:55Z | 2024-02-01T01:51:52Z | null | devfacet |
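Transformers.js tokenizers operate on subwords rather than sentences, so sentence splitting is usually done with a small rule-based pass before tokenization. A minimal sketch of such a pass (a regex heuristic, not part of the library's API; it will mis-handle abbreviations like "e.g.", which dedicated splitters such as pysbd account for):

```python
import re

def split_sentences(text: str) -> list[str]:
    # Heuristic: break after ., !, or ? when followed by whitespace
    # and an uppercase letter. Good enough for clean prose; not robust
    # to abbreviations, quotes, or decimal numbers.
    parts = re.split(r"(?<=[.!?])\s+(?=[A-Z])", text.strip())
    return [p for p in parts if p]

sentences = split_sentences("Tokenizers split subwords. Sentences need rules! Right?")
```

Each resulting sentence can then be fed to the tokenizer individually.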
huggingface/transformers.js | 486 | Output different from sentence transformers | ### Question
Hello, I'm not sure if I'm doing something wrong, but the pooled outputs from sentence-transformers and this library seem to be different.
The results are the same if I use `pooling: 'none'` in JS and `output_value='token_embeddings'` in Python.
I've seen some other similar issues, but this seems to be a ... | https://github.com/huggingface/transformers.js/issues/486 | closed | [
"question"
] | 2023-12-29T10:15:07Z | 2024-01-02T12:20:17Z | null | leodalcin |
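The usual cause of this mismatch is the pooling step: sentence-transformers applies attention-mask-aware mean pooling (plus optional L2 normalization) on top of the raw token embeddings, which is why the outputs agree only when pooling is disabled on both sides. A dependency-free sketch of that pooling step (pure Python for illustration; real code would use tensor ops):

```python
def mean_pool(token_embeddings, attention_mask):
    """Mask-aware mean pooling: average only the non-padding token
    vectors, mirroring what sentence-transformers' pooling layer does
    (modulo tensor ops and optional normalization)."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:  # skip padding positions
            count += 1
            for i, value in enumerate(vec):
                sums[i] += value
    return [s / max(count, 1) for s in sums]

# Two real tokens and one padding token that must not affect the mean:
pooled = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
```

If the JS side pools over padding tokens too, or skips normalization, batched results will drift from the Python ones.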
huggingface/trl | 1,155 | What is the best way to run inference with LoRA in the PEFT approach? | Here is the SFTTrainer method I used for fine-tuning Mistral:
```
trainer = SFTTrainer(
model=peft_model,
train_dataset=data,
peft_config=peft_config,
dataset_text_field=" column name",
max_seq_length=3000,
tokenizer=tokenizer,
args=training_arguments,
packing=packing,
)
traine... | https://github.com/huggingface/trl/issues/1155 | closed | [] | 2023-12-29T09:51:23Z | 2024-02-10T15:05:12Z | null | pradeepdev-1995 |
huggingface/peft | 1,310 | What is the best way to run inference with LoRA in the PEFT approach? | ### Feature request
What is the best way to run inference with LoRA in the PEFT approach?
### Motivation
What is the best way to run inference with LoRA in the PEFT approach?
### Your contribution
Here is the SFTTrainer method I used for fine-tuning Mistral:
```
trainer = SFTTrainer(
model=peft_mode... | https://github.com/huggingface/peft/issues/1310 | closed | [] | 2023-12-29T09:49:55Z | 2024-01-02T15:31:23Z | null | pradeepdev-1995 |
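For inference from a saved LoRA adapter, the two common paths are loading the adapter on top of the base model, or merging it into the base weights first. A sketch under the assumption that the adapter directory comes from an SFTTrainer run like the one above (requires `peft`; `adapter_dir` and `merge` are placeholder names):

```python
def load_lora_for_inference(adapter_dir: str, merge: bool = True):
    """Sketch of the two usual inference paths for a PEFT/LoRA adapter.

    Assumes `peft` and `transformers` are installed and `adapter_dir`
    holds the adapter saved by the trainer (adapter_config.json etc.).
    """
    from peft import AutoPeftModelForCausalLM  # deferred: heavy import

    model = AutoPeftModelForCausalLM.from_pretrained(adapter_dir)
    if merge:
        # Fold the LoRA deltas into the base weights; afterwards the
        # model behaves like a plain transformers model (faster inference).
        model = model.merge_and_unload()
    return model
```

Merging removes the adapter indirection, so generation runs at plain base-model speed; keeping the adapter separate is preferable when you need to swap adapters at runtime.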
huggingface/datasets | 6,542 | Datasets : wikipedia 20220301.en error | ### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1.I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurre... | https://github.com/huggingface/datasets/issues/6542 | closed | [] | 2023-12-29T08:34:51Z | 2024-01-02T13:21:06Z | 2 | ppx666 |
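A common resolution for this error (hedged, since dataset availability changes over time): most configs of the script-based `wikipedia` dataset required Apache Beam preprocessing and could not be downloaded directly, while ready-made dumps were later published under the `wikimedia/wikipedia` dataset on the Hub. A sketch of the alternative call; the snapshot name is an assumption to verify on the Hub:

```python
def load_english_wikipedia():
    # The legacy script-based "wikipedia" dataset needed Apache Beam to
    # build most configs; preprocessed dumps live in the
    # `wikimedia/wikipedia` Hub dataset (snapshot names like
    # "20231101.en" -- check the Hub for the dates actually available).
    from datasets import load_dataset  # deferred: requires `datasets`

    return load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
```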
huggingface/diffusers | 6,384 | How to map A1111 reference_only parameters into diffusers? | Thanks for the community to implement the reference_only functionality in A1111, but how can the parameters correspond to each other? I have tried to reproduce the effect of webui in the diffusers library, but I can't seem to do it. I'm using the StableDiffusionReferencePipeline community pipeline.
My questions are... | https://github.com/huggingface/diffusers/issues/6384 | closed | [
"stale"
] | 2023-12-29T08:16:15Z | 2024-01-28T15:29:43Z | null | Logos23333 |
huggingface/peft | 1,308 | How to check the gradients of lora layers when training a peft model | ### Feature request
when I trained a lora model like this
```python
model = get_peft_model(model, lora_config)
training(model,data)
```
How can I check the gradients of lora layers from a `peft` model ?
### Motivation
check gradients of lora layers from peft model during training
### Your contribution
ni | https://github.com/huggingface/peft/issues/1308 | closed | [] | 2023-12-29T04:26:10Z | 2024-01-05T04:55:41Z | null | stardusts-hj |
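After `loss.backward()`, LoRA gradients can be inspected by filtering `named_parameters()` for names containing "lora". A small helper sketch (duck-typed for illustration; with PyTorch, `param.grad` is a tensor and `.norm()` returns a tensor you can `float()`):

```python
def lora_grad_report(model):
    """Map each LoRA parameter name to its gradient norm, or None when
    no gradient reached it (e.g. a frozen or unused branch). Call this
    right after loss.backward() and before optimizer.step()."""
    report = {}
    for name, param in model.named_parameters():
        if "lora" in name:
            report[name] = None if param.grad is None else float(param.grad.norm())
    return report
```

A LoRA layer whose entry stays `None` (or 0.0) across steps is not receiving any gradient and is worth investigating.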
pytorch/tutorials | 2,724 | 💡 Request - Tutorials for Holistic Trace Analysis | ### 🚀 Describe the improvement or the new tutorial
Add tutorials explaining how to use features in Holistic Trace Analysis.
### Existing tutorials on this topic
None
### Additional context
HTA eases the profiling of distributed jobs in PyTorch. In order to introduce HTA to the PyTorch community, it would be beneficia... | https://github.com/pytorch/tutorials/issues/2724 | closed | [] | 2023-12-28T21:56:27Z | 2024-01-02T23:03:08Z | 0 | anupambhatnagar |
huggingface/transformers.js | 484 | TypeScript Pipeline Types for different models? | ### Question
Is there a suggested way to get types for the different models? Right now after I create a pipeline, like one of the following:
```
const segmenter = await pipeline('image-segmentation', 'Xenova/face-parsing');
// or
const extractor = await pipeline(`feature-extraction`, `Xenova/UAE-Large-V1`, {
... | https://github.com/huggingface/transformers.js/issues/484 | closed | [
"question"
] | 2023-12-28T21:16:05Z | 2024-01-02T15:08:47Z | null | wesbos |
huggingface/optimum-neuron | 395 | How to use generate() with inputs_embeds | I hope this is the right place to ask this question. Let me know if I need to move to another repo.
Currently I'm using `NeuronModelForCausalLM`.
I have a use case where I need to be able to do the following:
1. Generate embedding tokens
2. Modify embedding tokens
3. Run inference from modified embedding tok... | https://github.com/huggingface/optimum-neuron/issues/395 | closed | [
"Stale"
] | 2023-12-28T18:28:28Z | 2024-10-31T08:04:57Z | null | liechtym |
huggingface/transformers.js | 483 | Unrecognized token '<' when running | ### Question
I downloaded the react translation example. When I start the app everything seems to render fine, but as soon as I press translate, nothing happens and I get this error in the console on the browser:
`Unhandled Promise Rejection: SyntaxError: JSON Parse error: Unrecognized token '<'`
I've gotten th... | https://github.com/huggingface/transformers.js/issues/483 | closed | [
"question"
] | 2023-12-28T14:44:50Z | 2023-12-28T20:35:02Z | null | philg-204 |
huggingface/transformers.js | 482 | How to get the same output as the Python library for the ResNet model? | ### Question
Hi,
I am trying to translate a python script to use it in my node server. Currently, I spawn a process to execute the python code, but I would like to improve response time by using the transformers.js version.
My problem is that I don't have the same output with the two codes.
The python output... | https://github.com/huggingface/transformers.js/issues/482 | closed | [
"question"
] | 2023-12-28T11:38:20Z | 2024-01-10T15:04:22Z | null | Spoutnik97 |
huggingface/diffusers | 6,370 | How to use diffusers LoRA in AUTOMATIC1111 | Thanks for your great work. I used train_text_to_image_lora_sdxl.py to train on my custom dataset, got these outputs, and the results are good. But I want to use the LoRA weights in AUTOMATIC1111: I moved pytorch_lora_weights into the AUTOMATIC1111 lora folder but get the error report `AssertionError: conver... | https://github.com/huggingface/diffusers/issues/6370 | closed | [] | 2023-12-28T06:17:19Z | 2024-01-02T13:38:26Z | null | chongxian |
huggingface/computer-vision-course | 163 | How to include "What you'll learn" section for this course? | Hello everyone,
Our PR for Fundamentals of Computer Vision was merged a few days back. After that, one thing we still need to acknowledge based on your [feedback](https://github.com/johko/computer-vision-course/issues/38#issuecomment-1764502604) on our chapter outline is building a demo using Gradio to give learners ... | https://github.com/huggingface/computer-vision-course/issues/163 | closed | [] | 2023-12-27T12:41:26Z | 2024-04-26T13:36:59Z | null | seshupavan |
huggingface/transformers | 28,260 | How to set pad_token of Llava for batched generation and training? | Hello, @younesbelkada I'm trying to use Llava for batched generation, using the default pad_token. here is the script:
```python
import json
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer
from torch.utils.data import Dataset,DataLoader
import torch
impor... | https://github.com/huggingface/transformers/issues/28260 | closed | [] | 2023-12-27T12:17:02Z | 2024-02-05T02:43:32Z | null | TideDra |
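A frequently used workaround for decoder-only models that ship without a pad token (this is a community convention, not an official Llava recipe) is to reuse the EOS token as padding and switch to left padding, so that generation continues from real tokens rather than from pad tokens. Sketch:

```python
def prepare_processor_for_batches(processor):
    """Give the tokenizer a pad token and left-pad for batched
    generation. Workaround pattern, not an official recipe."""
    tok = processor.tokenizer
    if tok.pad_token is None:
        tok.pad_token = tok.eos_token  # reuse EOS as padding
    tok.padding_side = "left"          # right-padding corrupts generation
    return processor
```

For training, right padding plus masking the pad positions out of the labels is the more common setup.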
huggingface/transformers | 28,259 | How to add new merge rules in AutoTokenizer | ### Model description
I'm training a new tokenizer from Llama 2; however, it seems that the BPE tokenizer trainer clears the original "vocab" and "merges" dicts, and the training result is highly biased on my own dataset (about 6M C functions), with some ugly tokens.
I wonder that is it possible to train a tokenizer from llama2 with... | https://github.com/huggingface/transformers/issues/28259 | open | [
"New model"
] | 2023-12-27T12:15:26Z | 2023-12-27T12:15:26Z | null | Sandspeare |
huggingface/accelerate | 2,289 | [QUESTION] why stage3_gather_16bit_weights_on_model_save is set to false no matter what value of it in deepspeed config | [`accelerator._prepare_deepspeed()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L1464C13-L1464C82) looks to force the `stage3_gather_16bit_weights_on_model_save` to `false`, which should raise an exception in [`accelerator.get_state_dict()`](htt... | https://github.com/huggingface/accelerate/issues/2289 | closed | [] | 2023-12-27T10:04:28Z | 2024-01-05T06:59:16Z | null | LaniakeaS |
huggingface/diffusers | 6,352 | How to choose the save precision for a LoRA file in training | I'm confused about my LoRA precision (fp16, bf16, float) and whether I can choose the precision of my LoRA weights. I searched the params of the **StableDiffusionXLPipeline.save_lora_weights** function used to save LoRA in the SDXL text2img training script and didn't find a param like 'save_precision' or similar.
anyone ca... | https://github.com/huggingface/diffusers/issues/6352 | closed | [] | 2023-12-27T09:02:47Z | 2023-12-28T08:21:29Z | null | DoctorTar |
huggingface/transformers.js | 481 | Why do certain models not load? | ### Question
I was keen to try:
https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0
I tried:
```ts
import {
AutoModelForCausalLM,
AutoTokenizer,
} from '@xenova/transformers';
const autoTokenizer = await AutoTokenizer.from_pretrained(
'Upstage/SOLAR-10.7B-Instruct-v1.0',
);
const model ... | https://github.com/huggingface/transformers.js/issues/481 | open | [
"question"
] | 2023-12-27T01:44:52Z | 2024-05-10T18:21:57Z | null | adaboese |
pytorch/TensorRT | 2,558 | How to set the input when compiling model for non-image input? | Hi, I have trained a model whose input is a set of 3D points with a shape `Nx3`, N is not a fixed number. In this case, how to set the input during compiling my model?
For image, the input shape is like this:
```
inputs = [torch.randn((1, 3, 224, 224)).to("cuda").half()]
```
What if for my case? Thank you!
``... | https://github.com/pytorch/TensorRT/issues/2558 | open | [
"question"
] | 2023-12-26T12:34:22Z | 2023-12-27T18:20:31Z | null | DeepDuke |
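Torch-TensorRT handles variable input sizes through `torch_tensorrt.Input` with min/opt/max shapes, so a varying N can be declared as a dynamic first dimension. A sketch; the concrete bounds below are placeholders you should set from your own data distribution:

```python
def dynamic_pointcloud_input():
    # Deferred imports: requires torch and torch_tensorrt at runtime.
    import torch
    import torch_tensorrt

    # N (the number of 3D points) varies, so make dim 0 dynamic; the
    # engine is optimized for opt_shape and valid between min and max.
    return torch_tensorrt.Input(
        min_shape=(1, 3),
        opt_shape=(4096, 3),      # placeholder "typical" size
        max_shape=(65536, 3),     # placeholder upper bound
        dtype=torch.half,
    )
```

Compilation then takes this spec instead of a concrete tensor, e.g. `torch_tensorrt.compile(model, inputs=[dynamic_pointcloud_input()], enabled_precisions={torch.half})`.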
huggingface/peft | 1,298 | [Question] What is the main difference between "modules_to_save" and "target_modules"? | Hi, in my work I need to add some special token to LLAMA, so I need to train the parameter of ["embed_tokens", "lm_head"] for both layers, what confuses me is that should I add this parameter to LoraConfig's "modules_to_save " or "target_modules"? Looking forward to your reply! | https://github.com/huggingface/peft/issues/1298 | closed | [] | 2023-12-26T07:37:05Z | 2024-02-03T15:03:27Z | null | SatireY |
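The short distinction: `target_modules` get LoRA adapter matrices injected (the original weight stays frozen and only the low-rank update trains), while `modules_to_save` are copied and trained in full. After resizing embeddings for new special tokens, the embedding and output head belong in `modules_to_save`, since a low-rank update cannot learn entirely new rows. A config sketch (requires `peft`; `r`, `lora_alpha`, and the module names are placeholders to match your model):

```python
def make_lora_config_for_new_tokens():
    """LoraConfig sketch for a model with newly added special tokens:
    attention projections get LoRA adapters, embeddings and the LM head
    are duplicated and trained fully."""
    from peft import LoraConfig  # deferred: requires `peft`

    return LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],          # adapted via LoRA
        modules_to_save=["embed_tokens", "lm_head"],  # fully trainable copies
        task_type="CAUSAL_LM",
    )
```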
huggingface/datasets | 6,534 | How to configure multiple folders in the same zip package | How should I write the "config" in the README when all the data, such as train and test, is in a zip file?
The train folder and test folder are in data.zip | https://github.com/huggingface/datasets/issues/6534 | open | [] | 2023-12-26T03:56:20Z | 2023-12-26T06:31:16Z | null | d710055071 |
pytorch/xla | 6,234 | How to determine which input parameters in an HLO graph are the model's weights | ## ❓ Questions and Help
How can I determine which input parameters in an HLO graph are the model's weights (that is, the parameters saved by the model and updated during training)? Is there a good way to determine this in the C++ torch_xla source code?
for example: (one model of only linear Op)
... | https://github.com/pytorch/xla/issues/6234 | closed | [] | 2023-12-25T09:22:28Z | 2024-01-24T06:22:24Z | null | ckfgihub |
pytorch/TensorRT | 2,557 | ❓ [Question] A10 performance drops significantly | ## ❓ Question
<!-- Your question -->
I converted the GFPGAN model (https://github.com/TencentARC/GFPGAN) with torch_tensorrt, and I found torch_tensorrt is twice as fast as torch on a 3070. But on one A10 server, torch_tensorrt and torch are close; on another A10 server, torch_tensorrt is even twice as slow as torc... | https://github.com/pytorch/TensorRT/issues/2557 | open | [
"question"
] | 2023-12-25T08:54:43Z | 2024-01-05T02:12:17Z | null | ArtemisZGL |
huggingface/trl | 1,140 | How to further fine-tune with new data from a previous adapter? | Hi all, I have a question about fine-tuning. Currently I use SFTTrainer for fine-tuning the Llama2-7b-chat model and save it in adapter format. The question is: in case I want to further fine-tune with new data starting from the previous adapter, how should I do it? Normally I fine-tune further by merging the adapter with the base model before f... | https://github.com/huggingface/trl/issues/1140 | closed | [] | 2023-12-25T04:19:34Z | 2024-02-01T15:05:24Z | null | SiraHaruethaipree |
huggingface/optimum | 1,613 | Convert opus translation to onnx and run inference from it | To convert I use this snippet
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.models.marian import MarianOnnxConfig
import onnxruntime as ort
model_ckpt = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
ref_model = AutoModelForSeq2SeqLM.from_... | https://github.com/huggingface/optimum/issues/1613 | closed | [] | 2023-12-25T04:04:47Z | 2025-04-29T01:45:20Z | 5 | x4080 |
huggingface/chat-ui | 658 | chat-ui do not support TGI http url when deploy publicly | hi @nsarrazin, the chat-ui works well locally
~~~
# .env.local
endpoints: [{"type":"tgi","url":"http://127.0.0.1:8080/generate_stream"}]
~~~
but if deploy it in public, when chat from the external brower, get the 403 error:
~~~
403
You don't have access to this conversation. If someone gave you this link, ask... | https://github.com/huggingface/chat-ui/issues/658 | closed | [] | 2023-12-25T03:08:10Z | 2024-04-25T16:27:52Z | 1 | walkacross |
huggingface/transformers.js | 475 | How to use your own models | ### Question
Hey I really appreciate your work here!
I'm very interested in setting up a perfect RAG pipeline / flow and therefore I need a good document extraction with table-transformers and layout detection.
Example :
https://github.com/deepdoctection/deepdoctection
Where I'd use
https://huggingface.c... | https://github.com/huggingface/transformers.js/issues/475 | closed | [
"question"
] | 2023-12-24T21:38:02Z | 2024-05-15T09:32:26Z | null | DomEscobar |
huggingface/datasets | 6,530 | Impossible to save a mapped dataset to disk | ### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After... | https://github.com/huggingface/datasets/issues/6530 | open | [] | 2023-12-23T15:18:27Z | 2023-12-24T09:40:30Z | 1 | kopyl |
huggingface/sentence-transformers | 2,392 | util.paraphrase_mining returning scores only above 0.98 | Hey,
I'm using util.paraphrase_mining (sentence-transformers v2.2.2) to get similarity scores (cosine) in a corpus of ~20k texts with the encoder model being all-MiniLM-L6-v2 and with the parameters query_chunk_size=500, corpus_chunk_size=1000, top_k=500000, max_pairs=5000000.
The returned list of triplets contain s... | https://github.com/huggingface/sentence-transformers/issues/2392 | closed | [
"question"
] | 2023-12-23T13:00:27Z | 2024-01-29T14:20:33Z | null | sinangokce |
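When mined scores look implausibly high, a quick sanity check is recomputing cosine similarity on a handful of pairs by hand and comparing against the returned triplets. A dependency-free sketch of the brute-force equivalent of `util.paraphrase_mining` (fine for a few hundred vectors; the library's chunked version is what scales to 20k texts):

```python
import math
from itertools import combinations

def cosine(a, b):
    # Plain cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mine_pairs(embeddings, max_pairs=5):
    # Score every pair and return the highest-scoring (score, i, j)
    # triplets, mirroring paraphrase_mining's output shape.
    scored = [
        (cosine(embeddings[i], embeddings[j]), i, j)
        for i, j in combinations(range(len(embeddings)), 2)
    ]
    return sorted(scored, reverse=True)[:max_pairs]
```

If hand-computed scores for random pairs are far below 0.98, the issue is likely in the mining parameters or in duplicated inputs rather than in the model.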
huggingface/chat-ui | 656 | Web Search failed with "Invalid URL" | 
Why is this happening? It seems to happen regardless of whether I have USE_LOCAL_WEBSEARCH set to true or false.
```
SERPAPI_KEY=<my key>
USE_LOCAL_WEBSEARCH=true
MODELS=`[
{
"name": "mistralai/Mix... | https://github.com/huggingface/chat-ui/issues/656 | closed | [] | 2023-12-22T19:19:34Z | 2024-01-09T05:45:13Z | 5 | gururise |
huggingface/chat-ui | 655 | Generation failed (Module.summarize) when using TogetherAI openai compatible endpoint | TogetherAI offers an [OpenAI compatible endpoint](https://docs.together.ai/docs/openai-api-compatibility). When using this endpoint with the model setup as follows:
```
MODELS=`[
{
"name": "mistralai/Mixtral-8x7b-Instruct-v0.1",
"displayName": "Mixtral-8x7b",
"endpoints" : [{
"ty... | https://github.com/huggingface/chat-ui/issues/655 | open | [] | 2023-12-22T17:34:59Z | 2024-01-23T05:14:26Z | 1 | gururise |
huggingface/datasets | 6,529 | Impossible to only download a test split | I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.
Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed b... | https://github.com/huggingface/datasets/issues/6529 | open | [] | 2023-12-22T16:56:32Z | 2024-02-02T00:05:04Z | 2 | ysig |
huggingface/transformers.js | 470 | How to convert a model with a .pt extension | ### Question
I'm new to this area; I'm wondering how to convert a model with a .pt extension? Thanks a lot | https://github.com/huggingface/transformers.js/issues/470 | open | [
"question"
] | 2023-12-22T10:20:16Z | 2023-12-23T20:46:37Z | null | Bzayyz |
huggingface/transformers.js | 469 | How to convert a model with a .pt extension | ### Question
I'm new to this area; I'm wondering how to convert a model with a .pt extension? Thanks a lot | https://github.com/huggingface/transformers.js/issues/469 | closed | [
"question"
] | 2023-12-22T10:20:05Z | 2023-12-22T10:20:54Z | null | Bzayyz |
pytorch/tutorials | 2,721 | [BUG] - <title>RuntimeError: CUDA error: an illegal memory access was encountered using vmap and model ensembling call for cuda system | ### Add Link
https://pytorch.org/tutorials/intermediate/ensembling.html
https://pytorch.org/docs/stable/notes/extending.func.html#defining-the-vmap-staticmethod
### Describe the bug
### 🐛 Describe the bug
I want to use **vmap** to vectorize the **ensemble models** inherited from torch.autograd.Function. And tor... | https://github.com/pytorch/tutorials/issues/2721 | open | [
"bug",
"core"
] | 2023-12-22T09:26:03Z | 2024-01-04T08:27:38Z | 2 | wuyingxiong |
huggingface/chat-ui | 650 | chat-ui docker image fails to connect to the mongo docker container | step 1: build the chat-ui image
~~~
docker build -t chat-ui -f ./Dockerfile.local .
~~~
step 2:
~~~
# bind the 27016
docker run -d -p 27016:27017 --name mongo-chatui mongo:latest
~~~
step 3: run a container
~~~
# add a .env.local config
MONGODB_URL=mongodb://localhost:27016
HF_TOKEN=<your access tok... | https://github.com/huggingface/chat-ui/issues/650 | open | [
"support",
"docker"
] | 2023-12-22T08:34:52Z | 2025-05-25T20:37:17Z | 6 | walkacross |
huggingface/chat-ui | 649 | Formatting is incorrect when using LiteLLM (Together.ai) | I'm using Mixtral-7b-Instruct-v0.1 via [LiteLLM](https://github.com/BerriAI/litellm) to provide a OpenAI compatible API to together.ai where the model is hosted.
Everything works fine, including streaming; however, the formatting is messed up as shown. Any ideas why?
 | https://github.com/huggingface/candle/issues/1463 | open | [] | 2023-12-21T18:42:38Z | 2024-01-01T11:56:29Z | null | tyfeng1997 |
pytorch/audio | 3,720 | Can't install some of the libraries | Hello, i have a problem while installing some of the libraries because i can't install module fcntl. Is there any solution because on one windows pc works but on my main it doesn't. That module is linux dependent. | https://github.com/pytorch/audio/issues/3720 | open | [] | 2023-12-21T13:58:55Z | 2023-12-21T13:58:55Z | 0 | Toplica001 |
pytorch/audio | 3,719 | streamreader add_video_stream doesn't seem to accept any filter_desc options | ### 🐛 Describe the bug
I'm using the following options in my streamreader:
```
vr.add_video_stream(
frames_per_chunk=decode_size,
decoder=codec,
decoder_option={"threads": "0", "gpu": "0"},
hw_accel='cuda',
... | https://github.com/pytorch/audio/issues/3719 | open | [] | 2023-12-21T09:58:03Z | 2023-12-28T07:46:49Z | 1 | caspersmit-sa |
huggingface/transformers | 28,179 | How to fine-tune facebook/esm2_t33_650M_UR50D | ### System Info
How to fine-tune facebook/esm2_t33_650M_UR50D? It's too big and model.half() couldn't work. Besides, I always met the error: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`. Is it possible that the model in the hug... | https://github.com/huggingface/transformers/issues/28179 | closed | [] | 2023-12-21T09:50:27Z | 2024-01-30T08:03:39Z | null | Admire7494 |
huggingface/alignment-handbook | 81 | Why do we use a lower batch size when comparing SFT LoRA with SFT full fine-tuning? | https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_lora.yaml
| https://github.com/huggingface/alignment-handbook/issues/81 | closed | [] | 2023-12-20T21:09:33Z | 2024-01-07T21:03:14Z | 2 | shamanez |
huggingface/trl | 1,115 | How to prepare multi-turn dialogue dataset for dpo? | the single-turn dialogue dataset is like:
dpo_dataset_dict = {
"prompt": [
"hello",
"how are you",
"What is your name?",
"What is your name?",
"Which is the best programming language?",
"Which is the best programming language?",
"Which is the best pro... | https://github.com/huggingface/trl/issues/1115 | closed | [
"🏋 DPO"
] | 2023-12-20T09:14:45Z | 2024-10-03T14:12:48Z | null | chloefresh |
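For multi-turn data, a common approach (a convention, not an official TRL schema) is to fold all earlier turns into the `prompt` string so that `prompt + chosen` and `prompt + rejected` each read as one complete conversation ending in the competing final replies. Sketch; the `Human:/Assistant:` template below is a placeholder for whatever chat template your base model was trained with:

```python
def build_dpo_prompt(history, user_message):
    """Fold earlier (user, assistant) turns into the DPO `prompt` field
    so prompt + chosen (or rejected) reads as one full conversation."""
    lines = []
    for user_turn, assistant_turn in history:
        lines.append(f"Human: {user_turn}")
        lines.append(f"Assistant: {assistant_turn}")
    lines.append(f"Human: {user_message}")
    lines.append("Assistant:")
    return "\n".join(lines)

example = {
    "prompt": build_dpo_prompt([("hello", "hi there")], "how are you"),
    "chosen": " I'm fine, thanks for asking.",
    "rejected": " leave me alone",
}
```

Only the final assistant reply is compared; everything before it is shared context in `prompt`.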
huggingface/transformers | 28,155 | What is the minimum GPU memory required to run the Mixtral-8x7B model? | I mean the model that just came out: mistralai/Mixtral-8x7B-Instruct-v0.1. It looks like a lot of parameter files; what is the minimum NVIDIA graphics card video memory required? | https://github.com/huggingface/transformers/issues/28155 | closed | [] | 2023-12-20T01:54:45Z | 2024-01-28T08:04:44Z | null | zysNLP |
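A back-of-the-envelope estimate helps here: Mixtral-8x7B has roughly 46.7B total parameters, and all eight experts must stay resident even though only two are active per token, so the weights alone cost parameter count times bytes per parameter. Sketch (weights only; the KV cache and activations add several GiB on top):

```python
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """GiB needed just to hold the weights at a given precision."""
    return n_params * bytes_per_param / 2**30

MIXTRAL_PARAMS = 46.7e9  # approximate total parameter count

for label, bpp in [("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_memory_gib(MIXTRAL_PARAMS, bpp):.0f} GiB")
```

So fp16 needs roughly 87 GiB (multi-GPU territory), while 4-bit quantization brings the weights down to about 22 GiB, which is why a single 24 GB card is the practical floor for quantized inference.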
huggingface/dataset-viewer | 2,218 | JobManagerCrashedError jobs are never retried | Currently, we have 7768 jobs with error_code `JobManagerCrashedError`. Some of them are crashes set by the zombie killer.
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.aggregate([{$match:{error_code:"JobManagerCrashedError","details.copied_from_artifact":{$exists:false}}}... | https://github.com/huggingface/dataset-viewer/issues/2218 | closed | [
"question"
] | 2023-12-19T15:22:30Z | 2024-01-09T20:32:58Z | null | AndreaFrancis |
pytorch/benchmark | 2,094 | How to get the memory test job | The paper (https://arxiv.org/pdf/2304.14226.pdf) says TorchBench can do memory tests, but I can't find any memory test jobs at
https://github.com/pytorch/benchmark/actions | https://github.com/pytorch/benchmark/issues/2094 | closed | [] | 2023-12-19T14:18:29Z | 2023-12-20T01:59:46Z | null | GuWei007 |
pytorch/TensorRT | 2,551 | ❓ [Question] Error regarding the operation of pytorch_quantization:/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found | ## ❓ Question
<!-- Your question -->
When I run finetune_qat.py for vgg I get the error:
```
python finetune_qat.py
Traceback (most recent call last):
File "/home/incar/tms/source/tensortclassicify/finetune_qat.py", line 16, in <module>
from pytorch_quantization import nn as quant_nn
File "/home... | https://github.com/pytorch/TensorRT/issues/2551 | open | [
"question"
] | 2023-12-19T10:16:49Z | 2024-02-16T02:29:47Z | null | tms2003 |
huggingface/optimum | 1,608 | XENOVA conversion issues | ### System Info
```shell
using the requirements.txt in Xenova for environment.
https://github.com/xenova/transformers.js/blob/main/scripts/requirements.txt
```
### Who can help?
@xenova
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An off... | https://github.com/huggingface/optimum/issues/1608 | closed | [
"bug"
] | 2023-12-19T02:11:58Z | 2023-12-19T04:54:00Z | 3 | gidzr |
pytorch/torchx | 802 | Why can't tracker entrypoint be specified in .torchxconfig | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
The [documentation](https://pytorch.org/torchx/main/tracker.html#user-job-co... | https://github.com/meta-pytorch/torchx/issues/802 | open | [] | 2023-12-18T21:26:02Z | 2023-12-19T17:39:10Z | 2 | clumsy |
huggingface/safetensors | 409 | Doesn't work with versions of torch where "meta" dtype is not supported. | ### System Info
This is on my mac where I was just testing the interface. It seems like this could easily be fixed.
```
...
>>> from safetensors.torch import save_file
>>> x
{'a': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])}
>>> x['a'].device
device(type='cpu')
>>> save_file(x, filename='foo')
Traceback... | https://github.com/huggingface/safetensors/issues/409 | closed | [
"Stale"
] | 2023-12-18T15:51:28Z | 2024-01-23T01:49:25Z | null | danpovey |
huggingface/candle | 1,457 | How to manually quantize a phi-2 model, starting from safetensors files | Hi,
I have fine-tuned a phi-2 model using LoRA.
I merged the adapter with the base model to get a trained one.
I now have a bunch of safetensors files.
How is it possible to convert these files into a GGUF file (the llama.cpp converter does not support phi)?
In other words, how is it possible to achieve the same as: mo... | https://github.com/huggingface/candle/issues/1457 | closed | [] | 2023-12-18T15:14:37Z | 2023-12-18T15:58:12Z | null | ghost |
huggingface/optimum | 1,605 | Static Quantization - Token classification | Hi,
I am following the code [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for doing static quantization on my token classification model.
The inference time for quantized model(static) is almost the same as non quantized one. I have tried dynamic q... | https://github.com/huggingface/optimum/issues/1605 | open | [
"quantization"
] | 2023-12-18T13:31:33Z | 2024-10-09T09:21:22Z | 0 | akshay-babbar |
huggingface/diffusers | 6,211 | [Examples] When will training scripts for text-to-video be supported in diffusers? | I want to train SVD in diffusers; can you support this feature in examples?
Thanks for your contributions. | https://github.com/huggingface/diffusers/issues/6211 | closed | [
"stale"
] | 2023-12-18T08:26:57Z | 2024-01-26T15:05:32Z | null | jiaxiangc |
huggingface/optimum | 1,604 | Table Transformer to ONNX | ### Feature request
Hi all,
I am trying to convert the Table Transformer model from transformers (pretrained) to ONNX. The error reads something like "'table-transformer' is not a supported format".
Is there any way to convert table-transformer (TATR) to ONNX model. Any help would be cherished.
Thanks.
### Motivation
M... | https://github.com/huggingface/optimum/issues/1604 | closed | [
"feature-request",
"onnx"
] | 2023-12-18T07:18:21Z | 2024-02-28T08:52:49Z | 3 | balajiChundi |
huggingface/safetensors | 407 | Does safetensors save the model's hierarchical structure? Is it similar to ONNX? | If safetensors saves the model's hierarchical structure, how can one access this structure? Is it possible to read it directly like with ONNX? Can I directly load a model from safetensors?
If the hierarchical structure of the model is not preserved, does it mean that the original model must be read from config.json? | https://github.com/huggingface/safetensors/issues/407 | closed | [
"Stale"
] | 2023-12-17T15:04:55Z | 2024-02-24T01:45:09Z | 3 | ZDragonX |
huggingface/datasets | 6,507 | Where is glue_metric.py? | > @Frankie123421 what was the resolution to this?
Use glue_metric.py instead of glue.py in load_metric.
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
| https://github.com/huggingface/datasets/issues/6507 | closed | [] | 2023-12-17T09:58:25Z | 2023-12-18T11:42:49Z | null | Mcccccc1024 |
huggingface/peft | 1,278 | How to add trainable parameters? (bugs in 'modules_to_save') | ### System Info
Hi,
How can I train other weights in the model, rather than keeping them frozen, during LoRA training?
### Who can help?
@BenjaminBossan Hi, I find you are active recently so I @ you here..
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially su... | https://github.com/huggingface/peft/issues/1278 | closed | [] | 2023-12-17T05:34:09Z | 2024-01-29T15:03:39Z | null | shawnricecake |
pytorch/audio | 3,717 | AV-HuBERT integration with torchaudio.pipelines.Wav2Vec2FABundle | ### 🚀 The feature
How would someone go about configuring AV-HuBERT to work with `torchaudio.pipelines.Wav2Vec2FABundle`? It currently only supports [MMS_FA](https://pytorch.org/audio/stable/pipelines.html#pertrained-models)
### Motivation, pitch
Currently the `torchaudio.pipelines.Wav2Vec2FABundle` forced aligner o... | https://github.com/pytorch/audio/issues/3717 | open | [] | 2023-12-16T01:04:05Z | 2023-12-16T01:04:05Z | 0 | bejjani |
huggingface/accelerate | 2,262 | When I trained with two processes, the parameter gradients could not be shared and I ended up with two different models. How can I solve this? | When I trained with two processes, the parameter gradients could not be shared and I ended up with two different models. Did anyone meet this problem before? How can I solve it? | https://github.com/huggingface/accelerate/issues/2262 | closed | [] | 2023-12-15T13:48:34Z | 2024-06-11T12:26:07Z | null | zypsjtu |
huggingface/datasets | 6,501 | OverflowError: value too large to convert to int32_t | ### Describe the bug

### Steps to reproduce the bug
just loading datasets
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3... | https://github.com/huggingface/datasets/issues/6501 | open | [] | 2023-12-15T10:10:21Z | 2025-06-27T04:27:14Z | 1 | zhangfan-algo |
pytorch/kineto | 851 | In Overview page, time unit error | Time unit error

| https://github.com/pytorch/kineto/issues/851 | closed | [
"question"
] | 2023-12-15T04:15:45Z | 2024-04-23T15:23:24Z | null | Aiuan |
huggingface/diffusers | 6,178 | How to train Stable Diffusion with DDPM? | I want to train Stable Diffusion with DDPM, but I can't find the code in this project. I found a lot of training code elsewhere on the internet, but most of it is distillation code on pre-trained models, not the original DDPM training code. I also tried to implement the original training code myself, but I couldn't get... | https://github.com/huggingface/diffusers/issues/6178 | closed | [] | 2023-12-15T02:43:07Z | 2023-12-15T02:54:06Z | null | MenSanYan |
huggingface/dataset-viewer | 2,208 | Add a collection with datasets infos | While working on enabling private datasets (#39) under conditions (isPro, isEnterprise), I thought we missed a place where we control the access to the dataset.
I think the first step in the DAG, instead of dataset-config-names, should be more about the dataset characteristics: if it's private or public, maybe if it... | https://github.com/huggingface/dataset-viewer/issues/2208 | closed | [
"question",
"refactoring / architecture",
"P2"
] | 2023-12-14T13:59:42Z | 2024-01-11T14:30:03Z | null | severo |
huggingface/dataset-viewer | 2,207 | Backfill job processes datasets with disabled viewer? | If I read the code correctly, the backfill cronjob does not check if the dataset viewer is disabled (`viewer: false` in the README).
If we want to implement the dataset viewer for private datasets, under conditions (isPro, isEnterprise), we will have to check these conditions before adding jobs. | https://github.com/huggingface/dataset-viewer/issues/2207 | closed | [
"bug",
"question",
"P2"
] | 2023-12-14T13:01:53Z | 2024-02-06T16:03:10Z | null | severo |
huggingface/huggingface_hub | 1,907 | How to fix "VBox(children=(HTML(value='<center> <img..." error? When trying login() | ### Describe the bug
Hello. I am doing like below but it doesn't show enter token panel as supposed to be
What could be the reason?

Pip freeze is as below
```
alembic @ file:///home/conda/feedstoc... | https://github.com/huggingface/huggingface_hub/issues/1907 | closed | [
"bug"
] | 2023-12-14T11:45:44Z | 2025-03-15T08:03:44Z | null | FurkanGozukara |
huggingface/unity-api | 17 | Android support | Great repo! My question is - does it work on Android?
I did some research but couldn't find much - except for some comments on [YouTube](https://www.youtube.com/watch?v=Ngmb7l7tO0I) that speech recognition doesn't really work on Android ("_when i export to an a Android Device the text always is "you", no matter what... | https://github.com/huggingface/unity-api/issues/17 | open | [
"question"
] | 2023-12-14T11:15:56Z | 2024-01-18T10:56:45Z | null | dogadogan |
huggingface/alignment-handbook | 76 | can we inference with lora adapter after running the SFT ? | I trained the model using SFT on a custom dataset using lora config, which produced a Lora adapter, can we infer with it like having a base model and this adapter on top of it, or merge it ? | https://github.com/huggingface/alignment-handbook/issues/76 | closed | [] | 2023-12-14T10:55:20Z | 2023-12-28T07:14:29Z | 2 | Tejaswi-kashyap-006 |
huggingface/accelerate | 2,251 | when a tensor is generated from some_func(A.shape) (where A is a tensor), the generated tensor locates in cpu, not A's device | how to solve it ? I have tried tensor.to(A.device) and tensor.to(accelerator.device), but it seems not to work. | https://github.com/huggingface/accelerate/issues/2251 | closed | [] | 2023-12-14T09:18:15Z | 2023-12-14T14:38:17Z | null | weizhenhuan |
pytorch/serve | 2,853 | Torchserve Error: number of batch response mismatched | ### 🐛 Describe the bug
We deployed NER Model with n1-standard-8 machine without GPU with below config properties. when we kept batch size as 1, it is taking more time to process the simultaneous requests. when we try to increase the batch size, we are getting below error. (we tried with different batch size like 16... | https://github.com/pytorch/serve/issues/2853 | closed | [
"triaged"
] | 2023-12-14T08:33:11Z | 2024-01-18T20:11:46Z | 9 | rajeshmore1 |
pytorch/TensorRT | 2,541 | ❓ [Question] Is it possible to export unet's tensorrt engine as a file in stable diffusion? | ## ❓ Question
Hello. I am currently trying to infer the stable diffusion XL inpaint model using your package.
model link : https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1
I referred to your example code and modified it as follows.
```python
import torch
from diffusers import AutoP... | https://github.com/pytorch/TensorRT/issues/2541 | open | [
"question"
] | 2023-12-14T08:13:19Z | 2023-12-15T22:48:48Z | null | 0-chan-kor |
huggingface/peft | 1,265 | When generate outputs, how to get the probility of the outputs? Is there any param to let the model output probility ? | ### Feature request
xx
### Motivation
xx
### Your contribution
xx | https://github.com/huggingface/peft/issues/1265 | closed | [] | 2023-12-14T08:05:34Z | 2023-12-14T10:37:19Z | null | ShawnALiu |
huggingface/transformers | 28,025 | How to combine two pretrained model in huggingface transformers? | ### Feature request
I want to combine two pretrained model(LLAMA and BERT) in a new python class. More specific,The way I've tried is to define a new class c that inherits llama and load bert in c's \_\_init\_\_ function.

huggingface/chat-ui | 631 |  | If you go to https://... | https://github.com/huggingface/chat-ui/issues/631 | open | [
"enhancement"
] | 2023-12-13T10:50:19Z | 2023-12-14T14:26:31Z | 4 | patchie |
huggingface/optimum | 1,592 | Can optimum.bettertransformer supports LLAVA model? | ### System Info
```shell
Local NVIDIA env:
(llava) xuyang@nobisuke:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
Python=3.10.4
Torch... | https://github.com/huggingface/optimum/issues/1592 | closed | [
"bug"
] | 2023-12-13T09:08:35Z | 2023-12-13T12:37:13Z | 1 | xiaovhua |
huggingface/blog | 1,702 | How to introduce new alphabets in Whisper fine-tuning | Dear @sanchit-gandhi,
I was following your tutorial, [Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper), to fine-tune Whisper with a dataset in the Amharic language. Amharic is used in Whisper training as speech-translation only, [Amharic audio -> corresponding... | https://github.com/huggingface/blog/issues/1702 | open | [] | 2023-12-13T02:47:31Z | 2024-10-02T02:16:12Z | null | mequanent |
huggingface/chat-ui | 629 | Unable to use Azure AD for OpenID signin | Azure AD does not return the `picture` claim for the `profile` scope which results in a Zod validation error and authentication failing with `HTTP 500`:
```
chat-ui-chat-ui-1 | 21:07:21 28|index | ZodError: [
chat-ui-chat-ui-1 | 21:07:21 28|index | {
chat-ui-chat-ui-1 | 21:07:21 28|index | "code": "inval... | https://github.com/huggingface/chat-ui/issues/629 | closed | [
"support"
] | 2023-12-12T21:22:19Z | 2024-02-19T09:39:51Z | 8 | zacps |
huggingface/chat-ui | 628 | isModelsModalOpen is not defined in ChatIntroduction.svelte probably after recent update ? | Hi getting this error after updating to the latest version :
Am Running :
{
'chat-ui': '0.6.0',
npm: '10.2.4',
node: '21.3.0',
acorn: '8.11.2',
ada: '2.7.4',
ares: '1.20.1',
base64: '0.5.1',
brotli: '1.0.9',
cjs_module_lexer: '1.2.2',
cldr: '44.0',
icu: '74.1',
llhttp: '9.1.3',
... | https://github.com/huggingface/chat-ui/issues/628 | closed | [
"support"
] | 2023-12-12T18:49:31Z | 2023-12-24T07:40:42Z | 7 | DrShivang |
huggingface/autotrain-advanced | 389 | How to disable default used --multi_gpu ? | File "/app/env/lib/python3.10/site-packages/accelerate/commands/launch.py", line 822, in _validate_launch_command
raise ValueError("You need to use at least 2 processes to use `--multi_gpu`.")
ValueError: You need to use at least 2 processes to use `--multi_gpu`.
How to disable this from the default provided... | https://github.com/huggingface/autotrain-advanced/issues/389 | closed | [] | 2023-12-12T13:32:03Z | 2023-12-15T09:21:52Z | null | FiveTechSoft |
huggingface/chat-ui | 627 | Rlhf data collection feature | Is it possible to add a way to generate multiple drafts for a given input. And then based on what the user picks save that data so that it can be used for rlhf? | https://github.com/huggingface/chat-ui/issues/627 | open | [
"enhancement",
"front",
"back"
] | 2023-12-12T13:29:06Z | 2023-12-14T08:53:14Z | 0 | nivibilla |
huggingface/transformers | 27,974 | how to replace the existing token in a tokenizer | ### Feature request
I have a tokenizer which have lots of preserved tokens like bellow:
```
'<reserved_7>': 100,
'<reserved_8>': 101,
'<reserved_9>': 102,
'<reserved_10>': 103,
'<reserved_11>': 104,
'<reserved_12>': 105,
'<reserved_13>': 106,
'<reserved_14>': 107,
```
I want to replace the '<reser... | https://github.com/huggingface/transformers/issues/27974 | closed | [] | 2023-12-12T12:59:53Z | 2025-05-05T19:18:29Z | null | muziyongshixin |
pytorch/TensorRT | 2,530 | ❓ [Question] The stable diffusion example doesn't work | ## ❓ Question
<!-- Your question -->
## What you have already tried
https://github.com/pytorch/TensorRT/blob/main/examples/dynamo/torch_compile_stable_diffusion.py
I tried executing the above Python code, but conversion to TensorRT failed as shown below.
```bash
WARNING:torch_tensorrt.dynamo.backend.backe... | https://github.com/pytorch/TensorRT/issues/2530 | closed | [
"question"
] | 2023-12-12T10:35:01Z | 2024-10-25T10:30:09Z | null | 0-chan-kor |
huggingface/chat-ui | 623 | ChatUI with Docker - Permissions Issue | I'm trying to use the ChatUI space with Docker. I have a private, custom model which I've trained.
I want to access it in a private space using Docker ChatUI
I seem to be running into permissions errors.
Things I've tried:
Following the instructions set out here: https://huggingface.co/blog/Llama2-for-non-engin... | https://github.com/huggingface/chat-ui/issues/623 | open | [
"support"
] | 2023-12-12T08:10:31Z | 2023-12-28T13:58:22Z | 1 | aidansys17 |
huggingface/text-generation-inference | 1,332 | How can I set log output to local file | ### Feature request
I want to set the TGI log to file instead of stdout.
### Motivation
I want to set the TGI log to file instead of stdout.
### Your contribution
how can I use params in command of env variables to set log output to file. | https://github.com/huggingface/text-generation-inference/issues/1332 | closed | [
"Stale"
] | 2023-12-12T07:54:26Z | 2024-01-18T01:46:56Z | null | soulseen |
pytorch/serve | 2,849 | Broken pipe on big response tensors | ### 🐛 Describe the bug
We have a model which essentially does image segmentation of sorts.
The output tensor is of this size: `[batch, 920, 920]`, fp32.
I keep getting broken pipe errors in this:
From my debugging, it essentially fails after I return this tensor from my `postprocess` method in base h... | https://github.com/pytorch/serve/issues/2849 | open | [
"triaged"
] | 2023-12-12T07:30:27Z | 2023-12-29T11:17:16Z | 3 | hariom-qure |
huggingface/alignment-handbook | 74 | A question about the SFTTrainer (also a theoretical question about SFT in general) | I have a general question about Supervised Fine Tuning (SFT) for Dialogue applications.
Should the SFT process use the same LM objective (next-token prediction) that is used in pre-training a language model?
The "Dialogue" task is predicting "assistant" tokens, right? Shouldn't the objective be predicting only th... | https://github.com/huggingface/alignment-handbook/issues/74 | open | [] | 2023-12-12T06:54:02Z | 2024-01-22T14:34:15Z | 3 | PradeepKadubandi |
huggingface/transformers.js | 453 | Summarization Parameters not working | ### Question
I've tried several of the supported summarization models with the code used in the browser extension example.
The only one I get any results from in a reasonable time is t5-small.
My problem with it is that despite any parameters I try to pass in the result is always same length.
I've traced thro... | https://github.com/huggingface/transformers.js/issues/453 | open | [
"question"
] | 2023-12-12T06:21:52Z | 2023-12-19T21:52:32Z | null | kwlayman |
huggingface/safetensors | 400 | torch.nn.Module named_parameters() seem to be failing for safetensors | ### System Info
safetensors==0.4.1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Reproduction
Noticed this issue with the new Mixtral model
https://github.com/vllm-project/vllm/issues/2020
Is there any way to fix this with safetensors?
### Expected behavior
Load the m... | https://github.com/huggingface/safetensors/issues/400 | closed | [
"Stale"
] | 2023-12-11T18:54:06Z | 2024-01-17T01:48:50Z | 1 | 0-hero |
huggingface/optimum | 1,583 | Add support for Chatglm2 & qwen onnx models | ### Feature request
Need to export ChatGLM2 & Qwen models to onnx using hf optimum.
ChatGLM2: model-card-> [https://huggingface.co/THUDM/chatglm2-6b](https://github.com/huggingface/optimum/issues/url)
Qwen: model-card-> [https://huggingface.co/Qwen/Qwen-7B-Chat](https://github.com/huggingface/optimum/issues/url)
... | https://github.com/huggingface/optimum/issues/1583 | closed | [] | 2023-12-11T15:22:59Z | 2024-04-24T10:21:48Z | 4 | manishghop |
huggingface/peft | 1,247 | How to save parameters in prompt_encoder layers in p-tuning? | I want to resume training from checkpoint in p-tuning, but the model only save parameters in prompt_embeddings.
<img width="370" alt="image" src="https://github.com/huggingface/peft/assets/58416622/a085224f-32f2-409c-9a51-77c7438bc6a2">
| https://github.com/huggingface/peft/issues/1247 | closed | [] | 2023-12-11T02:44:59Z | 2024-01-19T15:03:32Z | null | lyt719 |
huggingface/optimum-benchmark | 102 | How to evaluate a model that already exists locally and hasn't been uploaded yet, "model=?" | 
i really want to know how to load qwen model, thank you very much | https://github.com/huggingface/optimum-benchmark/issues/102 | closed | [] | 2023-12-10T08:35:59Z | 2024-01-11T08:18:17Z | null | WCSY-YG |
huggingface/transformers | 27,928 | [Question] What is the main difference between "AutoModelForCasualLM" and "PeftModelForCausalLM"? | I also wrote it down in peft repo. However this issue is also related to transformers. So i write my question here again.
issue is here in peft(https://github.com/huggingface/peft/issues/1245)
Hello, Sorry for naive question.
I noticed that the``model.generate()`` function performed differently when inferrence rig... | https://github.com/huggingface/transformers/issues/27928 | closed | [] | 2023-12-10T03:10:36Z | 2024-02-01T00:49:07Z | null | daehuikim |
huggingface/peft | 1,245 | [Question] What is the main difference between "AutoModelForCasualLM" and "PeftModelForCausalLM"? | Because This is is related to "transformers". Therefore I wrote this question in transformers repo either.
issue is here in transformers(https://github.com/huggingface/transformers/issues/27928)
Hello, Sorry for naive question.
I noticed that the``model.generate()`` function performed differently when inferrence r... | https://github.com/huggingface/peft/issues/1245 | closed | [] | 2023-12-10T03:08:54Z | 2023-12-11T11:15:25Z | null | daehuikim |
pytorch/serve | 2,841 | Not able to get the data for inference when using custom handler | I team, I have created my own custom handler by referencing to the base-handler and the vision-handler. What I am observing is that, when I pass data to the model for inference, the data is not reaching to the hosted model endpoint.
The exact error I am getting is:
```
2023-12-09T20:08:03,580 [INFO ] W-9000-vit_... | https://github.com/pytorch/serve/issues/2841 | closed | [
"triaged_wait",
"support"
] | 2023-12-09T20:10:19Z | 2023-12-23T17:13:36Z | 2 | yogendra-yatnalkar |
huggingface/diffusers | 6,113 | How to use the models from sd_control_collection hf repo in diffusers | How to load/convert the models at https://huggingface.co/lllyasviel/sd_control_collection/tree/main with diffusers?
```
>>> pipe = diffusers.StableDiffusionPipeline.from_single_file("diffusers_xl_canny_full.safetensors")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubunt... | https://github.com/huggingface/diffusers/issues/6113 | closed | [] | 2023-12-09T14:11:26Z | 2024-06-11T18:22:03Z | null | anilsathyan7 |
pytorch/TensorRT | 2,525 | ❓[Question] The only valid use of a module is looking up an attribute but found... | ## ❓ Question
<!-- Your question -->
Hello, I have a torch scripted model that I am trying to compile with TensorRT:
```py
import cv2
import numpy as np
import torch
from torchvision.transforms import ToTensor
import torch_tensorrt
if __name__ == "__main__":
# Load the pre-trained model
model = ... | https://github.com/pytorch/TensorRT/issues/2525 | closed | [
"question",
"component: lowering"
] | 2023-12-08T23:09:04Z | 2024-06-11T18:33:42Z | null | edmuthiah |