Dataset schema (column, dtype, range):

    repo        string, 147 distinct values
    number      int64, 1 to 172k
    title       string, length 2 to 476
    body        string, length 0 to 5k
    url         string, length 39 to 70
    state       string, 2 distinct values
    labels      list, length 0 to 9
    created_at  timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18
    updated_at  timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39
    comments    int64, 0 to 58
    user        string, length 2 to 28
huggingface/chat-ui
297
Is there a way to deploy without the HF token?
I'm trying to use chat-ui with my own endpoints and I would like to know if I can get rid of the HF_ACCESS_TOKEN variable and also allow running every model I want. I tried to modify the TS in modelEndpoint.ts and model.ts, but I can't figure out how to run it independently of HF (I want it offline). Here are the parts I...
https://github.com/huggingface/chat-ui/issues/297
closed
[ "support" ]
2023-06-14T12:11:04Z
2023-06-15T09:52:39Z
2
samichaignonmejai
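The issue above asks how to point chat-ui at a custom endpoint. As a hedged sketch only (the field names follow the `MODELS` snippets quoted in other issues in this list; the URL is a placeholder, and whether `HF_ACCESS_TOKEN` can be dropped entirely depends on the chat-ui version), a custom-endpoint `.env` entry looks roughly like:

```
MODELS=`[
  {
    "name": "mymodel",
    "endpoints": [{ "url": "http://127.0.0.1:8080" }]
  }
]`
```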
huggingface/chat-ui
296
Issue when deploying model: Error in 'stream': 'stream' is not supported for this model
I'm trying to use bigscience/bloom-560m with chat-ui. I already have an API for the model and it's working well; same for chat-ui when I use my HF token. But I get the following error message when I send a request to my bloom-560m API from chat-ui: ``` Could not parse last message {"error":["Error in `stream`: ...
https://github.com/huggingface/chat-ui/issues/296
closed
[ "support", "models" ]
2023-06-14T09:04:07Z
2023-06-19T10:57:01Z
2
samichaignonmejai
huggingface/datasets
5,951
What is the right way to use the discofuse dataset?
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6) **Below is my understanding. Is it correct :question: :question:** The **columns/features from the `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** ar...
https://github.com/huggingface/datasets/issues/5951
closed
[]
2023-06-14T08:38:39Z
2023-06-14T13:25:06Z
null
akesh1235
huggingface/chat-ui
295
Facing issue for using custom model deployed locally on flask
I have a chat model which responds on ``` @app.route("/get") #function for the bot response def get_bot_response(): userText = request.args.get('msg') data = T.getResponse(userText) return str(data) ``` I'm not sure about the configuration but I have added `MODELS=[{"name": "mymodel", "endpoints"...
https://github.com/huggingface/chat-ui/issues/295
closed
[ "support" ]
2023-06-14T08:20:41Z
2023-07-24T10:53:41Z
6
awsum0225
pytorch/data
1,184
Roadmap for mixed chain of multithread and multiprocessing pipelines?
### 🚀 The feature [pypeln](https://cgarciae.github.io/pypeln/#mixed-pipelines) has a nice feature to chain pipelines which may run on different kind of workers including process, thread or asyncio. ```python data = ( range(10) | pl.process.map(slow_add1, workers=3, maxsize=4) | pl.thread.filter(slo...
https://github.com/meta-pytorch/data/issues/1184
open
[]
2023-06-14T07:12:36Z
2023-06-15T17:32:46Z
2
npuichigo
pytorch/serve
2,412
How to identify "full" torchserve instances on Google Kubernetes Engine
We're currently trying to deploy torchserve at scale on Kubernetes. We have highly fluctuating requests: basically, every 5 minutes some requests come in with nothing in between, and sometimes there'll be huge spikes. Therefore we want small pods that scale aggressively as soon as load comes in. Here come the issues...
https://github.com/pytorch/serve/issues/2412
open
[ "triaged", "kubernetes" ]
2023-06-13T20:06:20Z
2023-06-26T17:16:00Z
null
tsteffek
huggingface/optimum
1,106
Onnxruntime support for multiple modalities model types
### Feature request Add support for layout and multi-modal models (e.g. LayoutLM, LayoutLMv3, LILT) to the ORTModels. ### Motivation ORTModels allow interacting with onnxruntime models in the same way as the transformers API, which is very convenient, as optimum is part of the huggingface ecosystem and the compatib...
https://github.com/huggingface/optimum/issues/1106
open
[ "feature-request", "onnxruntime" ]
2023-06-13T14:30:10Z
2023-06-14T11:10:49Z
0
mariababich
huggingface/optimum
1,105
IO Binding for ONNX Non-CUDAExecutionProviders
### Feature request When using use_io_binding=True with TensorrtExecutionProvider, a warning appears : ``` No need to enable IO Binding if the provider used is not CUDAExecutionProvider. IO Binding will be turned off. ``` I don't understand the reason for this, as data movement optimization should also work f...
https://github.com/huggingface/optimum/issues/1105
open
[ "help wanted", "onnxruntime" ]
2023-06-13T14:11:31Z
2023-09-26T11:47:17Z
5
cyang49
pytorch/pytorch
103,506
How to add testing capabilities for third party devices
### 🚀 The feature, motivation and pitch The current community test cases are all CPU- and CUDA-based; there is no way to cover third-party devices. For example, many test cases use the @onlycuda decorator. Any suggestions for improvements for the privateuse1 device? ### Alternatives _No response_ ### Additi...
https://github.com/pytorch/pytorch/issues/103506
closed
[ "triaged", "module: third_party", "module: testing" ]
2023-06-13T12:37:13Z
2023-06-26T17:07:54Z
null
Bin1024
huggingface/datasets
5,946
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
### Describe the bug (truncated rich traceback) in <cell line: 1>:1 → /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train ...
https://github.com/huggingface/datasets/issues/5946
open
[]
2023-06-13T07:34:15Z
2023-07-14T12:04:48Z
6
syngokhan
huggingface/safetensors
273
Issue with Loading Model in safetensors Format
### System Info - `transformers` version: 4.30.1 - Platform: macOS-13.4-arm64-arm-64bit - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (...
https://github.com/huggingface/safetensors/issues/273
closed
[ "Stale" ]
2023-06-12T21:25:33Z
2024-03-08T13:28:30Z
11
yachty66
pytorch/data
1,181
Does Collator need to exist?
### 📚 The doc issue Docs for [Collator](https://pytorch.org/data/0.6/generated/torchdata.datapipes.iter.Collator.html#torchdata.datapipes.iter.Collator) leave a lot of questions. > Collates samples from DataPipe to Tensor(s) by a custom collate function What does collate mean in this context? What is the coll...
https://github.com/meta-pytorch/data/issues/1181
open
[]
2023-06-12T15:02:52Z
2023-07-18T00:38:02Z
1
lendle
huggingface/transformers.js
144
Question-Answer Examples
Can you please send us an example of question answering?
https://github.com/huggingface/transformers.js/issues/144
closed
[ "question" ]
2023-06-09T21:54:37Z
2023-06-09T22:59:17Z
null
Zenyker
huggingface/optimum
1,095
Installation issue on Openvino NNcf
### System Info ```shell LINUX WSL 2 Distributor ID: Ubuntu Description: Ubuntu 20.04.6 LTS Release: 20.04 Codename: focal OPTIMUM Name: optimum Version: 1.8.6 Summary: Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party lib...
https://github.com/huggingface/optimum/issues/1095
closed
[ "bug" ]
2023-06-09T09:55:45Z
2024-01-05T11:10:06Z
5
DebayanChakraborty
pytorch/tutorials
2,453
💡 [REQUEST] - Add ABI=1 compilation instruction to README
### 🚀 Describe the improvement or the new tutorial Under certain usage circumstances, PyTorch needs to have the C++11 ABI enabled. Currently there are no docs in the README introducing how to get it enabled. Link https://github.com/pytorch/pytorch/pull/95177 to enable this request. ### Existing tutorials on this topi...
https://github.com/pytorch/tutorials/issues/2453
closed
[]
2023-06-09T07:53:48Z
2023-06-15T07:13:34Z
1
jingxu10
huggingface/transformers.js
140
[Question] OrtRun error code 6 with a longer string for question-answering
Why do I keep running into an OrtRun error code 6 with a longer string for question-answering task: `const result = await model(question, context, { padding: true, truncation: true, }); ` Error: ` models.js:158 An error occurred during model execution: "Error: failed to call OrtRun...
https://github.com/huggingface/transformers.js/issues/140
closed
[ "bug", "question" ]
2023-06-09T04:07:28Z
2023-07-11T11:07:26Z
null
iamfiscus
huggingface/datasets
5,931
`datasets.map` not reusing cached copy by default
### Describe the bug When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the operation is applied again and the cached copy is not picked up. Is there any way to pick the cached copy instead of processing it again? The only solution I could think of was...
https://github.com/huggingface/datasets/issues/5931
closed
[]
2023-06-07T09:03:33Z
2023-06-21T16:15:40Z
1
bhavitvyamalik
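The caching behavior in the issue above hinges on `map` fingerprinting its transform. The toy sketch below illustrates the idea only — it is an assumption about the mechanism, not the real `datasets` implementation, which also hashes the dataset state and arguments — by keying a cache on a hash of the mapped function's bytecode.

```python
import hashlib

# Toy cache keyed by a fingerprint of the mapped function's bytecode.
# Illustrates why calling an identical function again can hit the cache.
_cache = {}

def cached_map(fn, data):
    key = hashlib.sha256(fn.__code__.co_code).hexdigest()
    if key in _cache:
        return _cache[key], True   # cached copy reused
    result = [fn(x) for x in data]
    _cache[key] = result
    return result, False

def double(x):
    return 2 * x

out1, hit1 = cached_map(double, [1, 2, 3])
out2, hit2 = cached_map(double, [1, 2, 3])
print(out2, hit1, hit2)  # [2, 4, 6] False True
```

A real fingerprint must also cover the input data; this sketch deliberately omits that to stay short.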
huggingface/chat-ui
282
OpenID login
How can I get the providerURL, client ID, and client token to create an Azure OpenID login?
https://github.com/huggingface/chat-ui/issues/282
closed
[ "support" ]
2023-06-06T10:45:46Z
2023-06-19T09:38:34Z
1
sankethgadadinni
pytorch/tutorials
2,435
How can we contribute with videos
How can we contribute videos to PyTorch on GitHub? The video will likely be long; is contributing a link enough, or should I submit it another way?
https://github.com/pytorch/tutorials/issues/2435
closed
[ "question" ]
2023-06-06T09:09:59Z
2023-06-12T16:19:56Z
null
Killpit
huggingface/transformers.js
137
[Question] Failed to fetch onnx model when to use AutoModel.from_pretrained
**The code here:** ``` import { AutoModel, AutoTokenizer } from '@xenova/transformers'; const modelPath = 'Xenova/distilgpt2' let tokenizer = await AutoTokenizer.from_pretrained(modelPath); // **successful to fetch model** let model = await AutoModel.from_pretrained(modelPath); // **failed to fetch model** ...
https://github.com/huggingface/transformers.js/issues/137
closed
[ "question" ]
2023-06-06T02:03:41Z
2023-06-20T13:24:37Z
null
peter-up
huggingface/transformers.js
136
[Question] Using CLIP for simple image-text similarity
I'm trying to get a simple image-text similarity thing working with CLIP, and I'm not sure how to do it, or whether it's currently supported with Transformers.js outside of the zero-shot image classification pipeline. Is there a code example somewhere to get me started? Here's what I have so far: ```js import { ...
https://github.com/huggingface/transformers.js/issues/136
closed
[ "question" ]
2023-06-05T14:24:56Z
2023-06-06T13:35:45Z
null
josephrocca
pytorch/pytorch
102,966
how to workaround the error "don't have an op for vulkan_prepack::create_linear_context" ?
### 🐛 Describe the bug I have a modified resnet-50 network, which I want to run on android using vulkan backend. The custom build of pytorch with USE_VULKAN=1 works fine, but I got the error message "We don't have an op for vulkan_prepack::create_linear_context but it isn't a special case." during "optimize_for_mo...
https://github.com/pytorch/pytorch/issues/102966
open
[ "module: build", "triaged", "module: vulkan", "ciflow/periodic" ]
2023-06-05T09:53:28Z
2023-09-12T00:19:52Z
null
ldfandian
huggingface/diffusers
3,669
General question: what are the steps to debug if the image produced is just wrong?
I have a LoRA (LyCORIS) that I have tested with A1111's webui, and I'm pretty happy with the result. When I tried to use it with `diffusers` it just gives me a corrupted image. The LoRA brings some desired effects (like a white background), but the overall image is just not right. I have included some personal code to use l...
https://github.com/huggingface/diffusers/issues/3669
closed
[ "stale" ]
2023-06-05T01:44:49Z
2023-07-13T15:03:51Z
null
wangdong2023
pytorch/pytorch
102,939
Not sure what is wrong
### 🐛 Describe the bug It was working the last time I ran it; I ran an update and now I'm getting this when trying to train a LoRA ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDet...
https://github.com/pytorch/pytorch/issues/102939
closed
[]
2023-06-04T23:13:41Z
2023-06-05T15:28:14Z
null
NeVeREire
huggingface/chat-ui
275
web search hallucination and prompt results
Hello, great job building the web search module. Just a few things I noticed using it for the past hours. 1- It does connect to the web perfectly. 2- It tends to take only the first page of results and does not contextualize the data enough; it tries to mix it with the model's data and ends up destroying the final output. So maybe ...
https://github.com/huggingface/chat-ui/issues/275
open
[]
2023-06-02T23:09:11Z
2023-06-05T08:36:41Z
1
Billyroot
huggingface/peft
537
Where is the PeftModel weights stored?
## Expected behavior I am going to check whether the model (mt0-xxl [13B](https://huggingface.co/bigscience/mt0-xxl)) weights have been updated. Could you tell me how to check the original model's weights before using peft? And how to check the loaded LoRA module weights when using peft? ## script modified from [this...
https://github.com/huggingface/peft/issues/537
closed
[]
2023-06-02T09:10:09Z
2023-07-10T15:03:40Z
null
dsj96
pytorch/data
1,177
What is the right way to serialize DataLoader2 so that a pipeline with shuffle can resume from the right place?
### 🐛 Describe the bug I tried all these versions; the only one that worked was the last, but it's too hacky. Is there a better way? ```py dp = IterableWrapper(list(range(20))) dp = dp.shuffle() items = [] rs = InProcessReadingService() dl = DataLoader2(dp, reading_service=rs) ...
https://github.com/meta-pytorch/data/issues/1177
open
[]
2023-06-02T06:52:14Z
2023-06-08T17:31:18Z
2
zhengwy888
huggingface/chat-ui
273
Documentation about how to configure custom model endpoints is missing
It seems it has been removed in https://github.com/huggingface/chat-ui/commit/fae93d9fc3be9a39d8efd9ab9993dea13f0ae844.
https://github.com/huggingface/chat-ui/issues/273
closed
[ "documentation" ]
2023-06-01T19:37:44Z
2023-06-19T08:59:15Z
4
djmaze
pytorch/pytorch
102,718
How to support AMD GPU on Mac
### 🚀 The feature, motivation and pitch My computer is running macOS, with an Intel i9-9900K CPU and an AMD RX 6600 XT GPU. Can I build PyTorch to support this GPU? ### Alternatives _No response_ ### Additional context _No response_
https://github.com/pytorch/pytorch/issues/102718
closed
[]
2023-06-01T09:03:42Z
2024-06-21T14:05:02Z
null
Aiden-Dong
pytorch/benchmark
1,707
How to execute with docker?
I'm using ARG BASE_IMAGE=ghcr.io/pytorch/torchbench:latest, but I am having problems with this container. Or should I use ghcr.io/pytorch:pytorch-nightly or [ghcr.io/pytorch:pytorch-nightly](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)?
https://github.com/pytorch/benchmark/issues/1707
closed
[]
2023-06-01T07:43:32Z
2023-06-13T03:31:41Z
null
johnnynunez
huggingface/optimum
1,078
[SAM] Split encoder and mask decoder into separate .onnx files
### Feature request Currently, exporting SAM models with optimum results in a single .onnx file (https://huggingface.co/Xenova/sam-vit-base/tree/main/onnx). It would be great if we could add an option to separate the encoder and decoder into separate onnx files (like traditional seq2seq models). Example SAM expor...
https://github.com/huggingface/optimum/issues/1078
closed
[]
2023-05-31T10:47:19Z
2023-08-24T16:05:39Z
8
xenova
pytorch/data
1,175
Mux with MPRS causes operations after sharding_round_robin_dispatcher to run on the same worker
### 📚 The doc issue This doesn't seem to be mentioned in the docs, but if you have two datapipes that use `sharding_round_robin_dispatcher` and then `mux` them together: 1. Any steps between `sharding_round_robin_dispatcher` and `mux` will take place on the same worker process. 2. Only the steps after the `mux` wil...
https://github.com/meta-pytorch/data/issues/1175
open
[]
2023-05-30T20:36:43Z
2023-05-31T07:48:21Z
3
JohnHBrock
pytorch/data
1,174
Support for proper Distributed & Multiprocessing Sharding
### 🚀 The feature In MPI-based training, each process is independent from each other. Each training process might want to speed up dataloading using multiprocessing (MP). This requires data sharding to take place on two levels: A. On a distributed level, usually resulting in big(ger) shards. B. On a MP level la...
https://github.com/meta-pytorch/data/issues/1174
open
[]
2023-05-30T16:33:59Z
2023-05-30T16:40:35Z
0
sehoffmann
pytorch/tutorials
2,355
💡 [REQUEST] - Write a tutorial about how to leverage AMX with PyTorch on the 4th Gen of Xeon
### 🚀 Describe the improvement or the new tutorial The 4th Generation Intel® Xeon® Scalable Processor platform is a unique, scalable platform optimized for accelerating different AI workloads. The new built-in AI acceleration engine, Intel® Advanced Matrix Extensions (AMX), is able to accelerate a variety of AI In...
https://github.com/pytorch/tutorials/issues/2355
closed
[ "docathon-h1-2023", "advanced", "intel" ]
2023-05-30T03:02:23Z
2023-11-02T19:30:05Z
null
mingfeima
huggingface/diffusers
3,602
What is the default for VAE option?
If "VAE" is not specified for "Stable Diffusion," what is the default applied?
https://github.com/huggingface/diffusers/issues/3602
closed
[]
2023-05-29T15:42:19Z
2023-06-08T10:30:27Z
null
Michi-123
pytorch/android-demo-app
322
I have a Whisper-based model. How can I convert it to fairseq.dict format?
model https://huggingface.co/openai/whisper-large-v2
https://github.com/pytorch/android-demo-app/issues/322
open
[]
2023-05-29T08:52:30Z
2023-05-29T09:00:13Z
null
Roland-Du
huggingface/transformers.js
125
[Question] Why is running a transformer in JS faster than in Python?
I created a repo to test how to use transformers. https://github.com/pitieu/huggingface-transformers I was wondering why running the same models in JavaScript is faster than running them in Python. Is `Xenova/vit-gpt2-image-captioning` somehow optimized compared to `nlpconnect/vit-gpt2-image-captioning`...
https://github.com/huggingface/transformers.js/issues/125
closed
[ "question" ]
2023-05-28T05:23:05Z
2023-07-16T17:21:39Z
null
pitieu
huggingface/safetensors
258
ONNX has just become twice as fast as before. Can SafeTensors also achieve that?
Here are some announcements and technical details. It's nice to see that they are making significant improvements. Could some of that be useful and implemented for SafeTensors? https://devblogs.microsoft.com/directx/dml-stable-diffusion/ https://www.tomshardware.com/news/nvidia-geforce-driver-promises-doubled-stabl...
https://github.com/huggingface/safetensors/issues/258
closed
[]
2023-05-27T12:23:01Z
2023-06-07T09:26:24Z
2
WEBPerformace
huggingface/datasets
5,906
Could you unpin responses version?
### Describe the bug Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to test requirements? This is a testing library and we use it for our tests as well. We do not want to use a very outdated version. ### Steps to reproduce the bug Could not install this librar...
https://github.com/huggingface/datasets/issues/5906
closed
[]
2023-05-26T20:02:14Z
2023-05-30T17:53:31Z
0
kenimou
pytorch/tutorials
2,352
💡 [REQUEST] - Port TorchRL `Pendulum` tutorial from pytorch.org/rl to pytorch.org/tutorials
### 🚀 Describe the improvement or the new tutorial For historical reasons, TorchRL privately hosts a bunch of tutorials. We'd like to bring the most significant ones to pytorch tutorials for more visibility. Here is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/pendulum.py). ...
https://github.com/pytorch/tutorials/issues/2352
closed
[ "medium", "docathon-h2-2023" ]
2023-05-26T19:50:31Z
2023-11-09T20:47:06Z
4
vmoens
pytorch/tutorials
2,351
💡 [REQUEST] - Port TorchRL "Coding a DDPG loss" from pytorch.org/rl to pytorch.org/tutorials
### 🚀 Describe the improvement or the new tutorial For historical reasons, TorchRL privately hosts a bunch of tutorials. We'd like to bring the most significant ones to pytorch tutorials for more visibility. Here is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/coding_ddpg.py)....
https://github.com/pytorch/tutorials/issues/2351
closed
[ "docathon-h1-2023", "medium" ]
2023-05-26T19:45:04Z
2023-06-13T16:15:45Z
2
vmoens
pytorch/tutorials
2,350
~PyTorch Docathon H1 2023~
# 🎉 It's a wrap! 🎉 See our [leaderboard](https://github.com/pytorch/tutorials/blob/main/docathon-leaderboard.md) and [blog post](https://pytorch.org/blog/docathon-h1-2023-wrap-up/). Thank you to everyone who contributed and congrats to the winners! We have a large backlog of issues that we want to address and...
https://github.com/pytorch/tutorials/issues/2350
closed
[ "docathon-h1-2023" ]
2023-05-26T19:09:32Z
2023-06-20T18:59:49Z
14
svekars
pytorch/tutorials
2,349
💡 [REQUEST] - Port TorchRL `Recurrent DQN` tutorial from pytorch.org/rl to pytorch.org/tutorials
### 🚀 Describe the improvement or the new tutorial For historical reasons, TorchRL privately hosts a bunch of tutorials. We'd like to bring the most significant ones to pytorch tutorials for more visibility. Here is the [tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/dqn_with_rnn.p...
https://github.com/pytorch/tutorials/issues/2349
closed
[ "medium", "docathon-h2-2023" ]
2023-05-26T16:27:51Z
2023-11-08T16:40:10Z
4
vmoens
huggingface/datasets
5,905
Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently
### Feature request I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset. ### Motivation I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally...
https://github.com/huggingface/datasets/issues/5905
open
[ "enhancement" ]
2023-05-26T12:33:02Z
2023-06-15T13:34:18Z
1
bruno-hays
pytorch/tutorials
2,347
💡 [REQUEST] - Tutorial on extending TorchX
### 🚀 Describe the improvement or the new tutorial Create a better tutorial showing how to extend torchx. ### Existing tutorials on this topic https://pytorch.org/torchx/latest/custom_components.html ### Additional context _No response_ cc @msaroufim @svekars @carljparker @NicolasHug @kit1980 @subramen
https://github.com/pytorch/tutorials/issues/2347
open
[ "advanced", "module: torchx", "docathon-h2-2023" ]
2023-05-25T22:32:28Z
2023-11-19T17:51:58Z
12
sekyondaMeta
pytorch/tutorials
2,346
💡 [REQUEST] - How to use TorchServe on Vertex
### 🚀 Describe the improvement or the new tutorial Create a tutorial on how to use TorchServe on Vertex AI ### Existing tutorials on this topic _No response_ ### Additional context _No response_ cc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen
https://github.com/pytorch/tutorials/issues/2346
closed
[ "torchserve", "advanced", "docathon-h2-2023" ]
2023-05-25T19:54:42Z
2023-11-15T00:29:15Z
null
sekyondaMeta
pytorch/tutorials
2,345
💡 [REQUEST] - How to use TorchServe on AWS SageMaker
### 🚀 Describe the improvement or the new tutorial Create a tutorial on how to use TorchServe on AWS SageMaker ### Existing tutorials on this topic _No response_ ### Additional context _No response_ cc @msaroufim @agunapal @svekars @carljparker @NicolasHug @kit1980 @subramen
https://github.com/pytorch/tutorials/issues/2345
open
[ "torchserve", "advanced", "docathon-h2-2023" ]
2023-05-25T19:53:36Z
2023-11-09T23:01:20Z
null
sekyondaMeta
pytorch/tutorials
2,341
💡 [REQUEST] - How to use TorchServe Large Model Inference: walk through an example
### 🚀 Describe the improvement or the new tutorial Create a new tutorial showing a walk-through example of TorchServe Large Model Inference ### Additional context You can find some content to use here: https://github.com/pytorch/serve/blob/master/docs/large_model_inference.md https://github.com/pytorch/serv...
https://github.com/pytorch/tutorials/issues/2341
open
[ "torchserve", "advanced", "docathon-h2-2023" ]
2023-05-24T20:39:18Z
2023-11-01T16:48:43Z
null
sekyondaMeta
pytorch/tutorials
2,340
💡 [REQUEST] - How to use TorchServe: Walk through an example
### 🚀 Describe the improvement or the new tutorial We could use an updated tutorial/walk-through example on how to use TorchServe. The closest thing we have is the TorchServe Getting Started page located [here](https://github.com/pytorch/serve/blob/master/docs/getting_started.md). ### Existing tutorials on this to...
https://github.com/pytorch/tutorials/issues/2340
open
[ "torchserve", "advanced", "docathon-h2-2023" ]
2023-05-24T20:20:52Z
2023-11-06T20:14:07Z
null
sekyondaMeta
huggingface/chat-ui
263
[question] Where should we discuss chat-ui roadmap?
Is there a forum to discuss future features? I need to implement some sort of UI component for answer references. Something like perplexity.ai "pills" under the answer. I guess this is useful for others, and I would like to discuss how I should implement such a thing beforehand. - should I use pills? - should I cr...
https://github.com/huggingface/chat-ui/issues/263
closed
[]
2023-05-24T13:17:47Z
2023-05-26T02:22:29Z
1
fredguth
pytorch/xla
5,063
How can I use the flash attention in pytorch/xla GPU mode?
## ❓ Questions and Help Hello, [Flash Attention](https://arxiv.org/abs/2205.14135) is a method to produce tiled and fused kernels such that the tiled parameters can fit onto the device SRAM. May I ask to what degree this technique has been applied to pytorch/XLA? And how do I use the `flash attention` library in...
https://github.com/pytorch/xla/issues/5063
closed
[ "question" ]
2023-05-24T08:42:40Z
2025-04-30T13:04:03Z
null
wbmc
huggingface/optimum
1,069
llama-7b inference reports "Failed to allocate memory for requested buffer of size 180355072"
### System Info ```shell optimum 1.8.5, 32g v100 ``` ### Who can help? @JingyaHuang ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (gi...
https://github.com/huggingface/optimum/issues/1069
closed
[ "bug", "onnxruntime" ]
2023-05-23T09:50:36Z
2023-06-19T05:05:01Z
6
drxmy
huggingface/chat-ui
258
Language change during chat
While writing in German, it answers in English. Before it always used to work... Photo: ![image](https://github.com/huggingface/chat-ui/assets/133012667/822987c2-b7fe-4eb7-9eec-ccbeb2ce8a66)
https://github.com/huggingface/chat-ui/issues/258
closed
[ "support" ]
2023-05-23T08:41:44Z
2023-07-24T11:46:33Z
2
Mbuni21
huggingface/transformers.js
122
[Question] Basic Whisper Inference vs Speed of Demo Site
Hello, I love the library~ thanks for making it! I am trying to use the Whisper inference method displayed on the demo site, but it's running really slowly. It's taking me about 20 seconds to run locally vs a few seconds on the demo site. Is there some magic behind the scenes that I'm missing? I'm just runn...
https://github.com/huggingface/transformers.js/issues/122
closed
[ "question" ]
2023-05-23T05:55:40Z
2023-06-10T22:41:15Z
null
jpg-gamepad
pytorch/tutorials
2,336
💡 [REQUEST] - Write a Tutorial for PyTorch 2.0 Export Quantization Frontend (Quantizer and Annotation API)
### 🚀 Describe the improvement or the new tutorial In PyTorch 2.0, we have a new quantization path that is built on top of the graph captured by torchdynamo.export; see an example flow here: https://github.com/pytorch/pytorch/blob/main/test/quantization/pt2e/test_quantize_pt2e.py#L907. It requires backend developers...
https://github.com/pytorch/tutorials/issues/2336
closed
[ "docathon-h1-2023", "advanced", "intel" ]
2023-05-22T23:14:04Z
2023-06-09T23:16:37Z
2
jerryzh168
pytorch/xla
5,043
graceful shutdown on TPU, the proper way to handle SIGINT / SIGTERM in TPU code (using PJRT runtime)?
## ❓ Questions and Help Hi, I would like to run some cleanup code (writing a final checkpoint, flushing a logger, etc) to run in the process that has `xm.is_master_ordinal() == True`. I am using the pjrt backend. I attempted this: ```python if xm.is_master_ordinal(): signal.signal(signal.SIGINT, my_han...
https://github.com/pytorch/xla/issues/5043
open
[ "question", "needs reproduction" ]
2023-05-22T19:18:43Z
2025-04-30T13:13:59Z
null
hrbigelow
huggingface/datasets
5,880
load_dataset from s3 file system through streaming can't not iterate data
### Describe the bug I have a JSON file on my s3 file system (minio). I can use load_dataset to get the file link, but I can't iterate over it <img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0"> <img width="1144" alt="image" src="https://github.c...
https://github.com/huggingface/datasets/issues/5880
open
[]
2023-05-22T07:40:27Z
2023-05-26T12:52:08Z
4
janineguo
huggingface/chat-ui
256
changing model to 30B in the .env file
Here is the model I am using, which is 12B; I want to change to 30B. The default one: `MODELS=`[ { "name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", "datasetName": "OpenAssistant/oasst1", "description": "A good alternative to ChatGPT", "websiteUrl": "https://open-assistant.io", "userMessage...
https://github.com/huggingface/chat-ui/issues/256
closed
[ "support" ]
2023-05-21T18:30:04Z
2023-06-19T09:34:10Z
5
C0deXG
pytorch/xla
5,039
nightly version/ kaggle tpu
## ❓ Questions and Help Hi, I installed PyTorch/XLA nightly on a Kaggle notebook TPU. It was working fine, but for the past week it keeps giving this error: [FileNotFoundError: [Errno 2] No such file or directory: 'gsutil'] ![Opera Snapshot_2023-05-21_120122_www kaggle com](https://github.com/pytorch/xla/assets/81977280/0d7...
https://github.com/pytorch/xla/issues/5039
open
[ "question" ]
2023-05-21T09:31:40Z
2025-04-30T13:17:50Z
null
dina-fahim103
huggingface/transformers.js
119
[Question] A WebGPU-accelerated ONNX inference run-time
Is it possible to use https://github.com/webonnx/wonnx with transformersjs?
https://github.com/huggingface/transformers.js/issues/119
closed
[ "question" ]
2023-05-21T06:11:20Z
2024-10-18T13:30:07Z
null
ansarizafar
huggingface/chat-ui
255
how to prompt it
How can I prompt this model to act a certain way, e.g. `your food assistant and you will provide the best food assistant`? Because it's all over the place when I run this model :(
https://github.com/huggingface/chat-ui/issues/255
closed
[ "support" ]
2023-05-20T21:41:46Z
2023-06-01T13:00:48Z
1
C0deXG
huggingface/setfit
376
How to get the number of parameters in a SetFitModel object?
The context is that I would like to compare the parameter sizes of different models. Is there a way to count the model parameters of a SetFitModel object? Something like model.count_params() in Keras. Thanks!
https://github.com/huggingface/setfit/issues/376
closed
[ "question" ]
2023-05-19T23:58:53Z
2023-12-05T14:47:55Z
null
yihangit
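The parameter-count question above is usually answered with the standard PyTorch pattern of summing `p.numel()` over `module.parameters()`; assuming a SetFitModel wraps ordinary torch modules, the same call should apply to its underlying body/head. The `FakeParam`/`FakeModule` classes below are hypothetical stand-ins so the sketch runs without torch installed.

```python
def count_params(module, trainable_only=False):
    """Sum element counts over a module's parameters, torch-style."""
    return sum(
        p.numel()
        for p in module.parameters()
        if not trainable_only or p.requires_grad
    )

# Stand-ins mimicking torch's Parameter/Module interfaces (hypothetical).
class FakeParam:
    def __init__(self, n, requires_grad=True):
        self._n, self.requires_grad = n, requires_grad
    def numel(self):
        return self._n

class FakeModule:
    def __init__(self, params):
        self._params = params
    def parameters(self):
        return iter(self._params)

m = FakeModule([FakeParam(768 * 768), FakeParam(768, requires_grad=False)])
print(count_params(m))                       # 590592
print(count_params(m, trainable_only=True))  # 589824
```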
pytorch/examples
1,153
Just get a low accuracy of 75.8 with resnet50 on ImageNet
I train resnet50 on ImageNet with GPUs=8, batchsize=256, learning-rate=0.1, epochs=90, and momentum=0.90. The attained top1 accuracy is 75.80, lower than the reported 76.15. The gap is not marginal on the large-scale ImageNet. Why does the difference exist?
https://github.com/pytorch/examples/issues/1153
open
[]
2023-05-19T22:45:33Z
2023-12-12T04:19:09Z
2
mountain111
huggingface/chat-ui
252
Users can't get past "Start Chatting" modal - ethicsModelAcceptedAt not getting set?
<img width="836" alt="image" src="https://github.com/huggingface/chat-ui/assets/1438064/28a3d7f1-65e4-4b61-a82b-ffc78eb3e074"> let me know what more info you need to debug. just keeps redirecting back to home and never clears the modal.
https://github.com/huggingface/chat-ui/issues/252
open
[ "support", "p2" ]
2023-05-19T19:33:33Z
2024-01-26T08:44:39Z
7
cfregly
pytorch/tutorials
2,326
TorchVision Instance Segmentation Finetuning Tutorial - No module named 'torch._six'
### 🚀 Describe the improvement or the new tutorial The torch._six module was deprecated and removed from PyTorch starting from version 1.7.0. The code is not working because of that. How can I adjust it to make it work? ### Existing tutorials on this topic _No response_ ### Additional context _No response_
https://github.com/pytorch/tutorials/issues/2326
closed
[]
2023-05-19T14:41:15Z
2023-08-04T12:00:23Z
3
weronikawiera
huggingface/optimum
1,061
mpt model support?
### Feature request Can you please add mpt model support to this library? ### Motivation just testing things, and mpt seems to be unsupported by multiple huggingface libraries ### Your contribution im just getting started, im not sure if ill be of any help
https://github.com/huggingface/optimum/issues/1061
closed
[]
2023-05-19T09:28:28Z
2023-07-06T16:37:01Z
7
sail1369
huggingface/datasets
5,875
Why split slicing doesn't behave like list slicing ?
### Describe the bug If I want to get the first 10 samples of my dataset, I can do : ``` ds = datasets.load_dataset('mnist', split='train[:10]') ``` But if I exceed the number of samples in the dataset, an exception is raised : ``` ds = datasets.load_dataset('mnist', split='train[:999999999]') ``` > V...
https://github.com/huggingface/datasets/issues/5875
closed
[ "duplicate" ]
2023-05-19T07:21:10Z
2024-01-31T15:54:18Z
1
astariul
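The contrast the issue describes can be reproduced with plain Python lists, which clamp out-of-range slices silently, while the `train[:N]` split syntax validates the requested range against the split size. The `clamp_split` helper below is hypothetical, not a `datasets` API:

```python
# Plain Python slicing clamps out-of-range bounds instead of raising:
data = list(range(10))
assert data[:999999999] == data  # no error; the full list comes back

# load_dataset(..., split='train[:999999999]') instead validates the
# requested bound. A clamping helper that mimics list semantics
# (hypothetical, for illustration only):
def clamp_split(n_samples: int, requested: int) -> int:
    return min(requested, n_samples)

print(clamp_split(10, 999999999))  # 10
```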
pytorch/pytorch
101,860
How to add/save parameters (metadata) to pytorch model
### 🚀 The feature, motivation and pitch When I'm working on a PyTorch model, it's difficult for me to keep the variables required to run the model. If I could add metadata to my model, I would not need to save those parameters separately. So, does anyone know how to add metadata to a PyTorch model? ### Alternatives _No response_ ...
https://github.com/pytorch/pytorch/issues/101860
closed
[]
2023-05-19T07:20:06Z
2023-05-20T05:03:08Z
null
naseemap47
huggingface/chat-ui
246
Documentation Request - Clarity around login flow outside of HuggingFace context
Could the docs (if not the code) be improved to make it clear how to: - run this without requiring users to authenticate - handle authentication via a 3rd party cloud (Azure, AWS, GCP, etc) - run this with an arbitrary 3rd party model (OpenAI, Rasa, etc) I originally thought this was the purpose of `OPENID_CLIE...
https://github.com/huggingface/chat-ui/issues/246
closed
[ "documentation", "enhancement" ]
2023-05-19T02:57:56Z
2023-06-01T06:26:49Z
3
hack-r
pytorch/xla
5,034
How to recover from 'Exception in device=TPU:0' sickness without terminating session?
I ran all cells in the [mnist-training.ipynb](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/mnist-training.ipynb) colab successfully. However, during execution of the last cell: ```python def _mp_fn(rank, flags): global FLAGS FLAGS = flags torch.set_default_tensor_ty...
https://github.com/pytorch/xla/issues/5034
closed
[]
2023-05-19T01:32:17Z
2023-05-19T19:52:59Z
null
hrbigelow
huggingface/chat-ui
245
Strange DNS Behavior
Apparently some part of this leverages DNS right away when you run it, but it doesn't work on any privacy-respecting DNS resolvers. I can demonstrate this via toggling firewall options, resolv.conf, or packet inspection, but I'm not sure what in the code is related to this or how to fix it.
https://github.com/huggingface/chat-ui/issues/245
closed
[]
2023-05-19T01:19:11Z
2023-05-19T02:53:11Z
1
hack-r
pytorch/examples
1,151
How to run rpc/pipeline /main.py on two physical machines?
I want to run the ResNet on two different machines; how do I run main.py? I changed the code by adding the following: `# on rank 0 dist.init_process_group( backend = "gloo", init_method = 'tcp://172.16.8.196:8864', rank = 0, world_size = 2 ) # on rank 1 dist.init_process_group( backend ...
https://github.com/pytorch/examples/issues/1151
open
[]
2023-05-18T10:54:52Z
2023-05-18T10:54:52Z
null
Unknown-Body
pytorch/examples
1,150
input and output
I really want to know how to format the dataset. I have 30-dimensional variables as input and a 0/1 class as output. How can I put it into the SAC model?
https://github.com/pytorch/examples/issues/1150
open
[]
2023-05-18T10:18:59Z
2023-05-18T10:18:59Z
0
luzi560
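Assuming the goal is batching 30-dimensional inputs with 0/1 labels (the sample count and batch size below are arbitrary demo values, and whether this fits SAC's replay-buffer interface depends on the implementation), a minimal sketch:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Shapes from the issue: 30-dimensional observations, 0/1 labels.
# 64 samples and batch size 16 are arbitrary demo values.
x = torch.randn(64, 30)
y = torch.randint(0, 2, (64,))
dataset = TensorDataset(x, y)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([16, 30]) torch.Size([16])
```

For an actual SAC setup the same tensors would typically go into a replay buffer rather than a DataLoader, but the shaping step is the same.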
pytorch/xla
5,022
torch.distributed.reduce vs torch_xla.core.xla_model.all_reduce
## ❓ Questions and Help I am a bit confused here. Can we use torch_xla.core.xla_model.all_reduce in place of torch.distributed.reduce? If yes: torch.distributed.reduce needs a destination rank, so how do we handle that if we use torch_xla.core.xla_model.all_reduce?
https://github.com/pytorch/xla/issues/5022
closed
[ "question", "distributed" ]
2023-05-17T13:26:02Z
2025-05-05T12:42:24Z
null
RishabhPandit-00
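A pure-Python sketch of the semantic difference, with no real distributed runtime involved: `reduce` delivers the combined result only on the destination rank, while `all_reduce` delivers it on every rank, which is why `xm.all_reduce` takes no destination argument:

```python
# Toy model of per-rank values; sum is the reduction op.
values = {0: 1.0, 1: 2.0, 2: 3.0}

def reduce_sum(per_rank, dst):
    """Like torch.distributed.reduce: only rank `dst` sees the total."""
    total = sum(per_rank.values())
    return {r: (total if r == dst else v) for r, v in per_rank.items()}

def all_reduce_sum(per_rank):
    """Like dist.all_reduce / xm.all_reduce: every rank sees the total."""
    total = sum(per_rank.values())
    return {r: total for r in per_rank}

print(reduce_sum(values, dst=0))  # {0: 6.0, 1: 2.0, 2: 3.0}
print(all_reduce_sum(values))     # {0: 6.0, 1: 6.0, 2: 6.0}
```

So after an `all_reduce`, simply reading the result on the rank that would have been `dst` recovers the `reduce` behaviour (at the cost of broadcasting it everywhere).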
huggingface/optimum
1,057
owlvit is not supported
### Feature request The conversion is supported in transformers[onnx], but not yet supported in optimum. ### Motivation Convert the open-vocabulary model to an ONNX model for faster inference. ### Your contribution If there is a guideline on how to do it, I think I can help
https://github.com/huggingface/optimum/issues/1057
closed
[]
2023-05-17T07:01:39Z
2023-07-12T13:20:52Z
11
darwinharianto
huggingface/datasets
5,870
Behaviour difference between datasets.map and IterableDatasets.map
### Describe the bug All the examples in all the docs mentioned throughout huggingface datasets correspond to datasets object, and not IterableDatasets object. At one point of time, they might have been in sync, but the code for datasets version >=2.9.0 is very different as compared to the docs. I basically need to ...
https://github.com/huggingface/datasets/issues/5870
open
[]
2023-05-16T14:32:57Z
2023-05-16T14:36:05Z
1
llStringll
pytorch/PiPPy
801
How to run the gpt2 example on a single node with four GPU?
I am trying to reproduce the [gpt2 example](https://github.com/pytorch/PiPPy/tree/main/examples/hf/gpt2) in a single node without slurm for some performance metrics, but the code only provides slurm scripts. How should I modify the code to implement this example in a single node?
https://github.com/pytorch/PiPPy/issues/801
open
[]
2023-05-16T11:49:37Z
2023-05-16T11:49:37Z
null
lsder
huggingface/chat-ui
232
Possible performance regression in the production model?
I have been using it for 5 days; it could write simple code for me, but now it can't ;/
https://github.com/huggingface/chat-ui/issues/232
closed
[ "bug", "question" ]
2023-05-16T08:39:19Z
2023-09-11T09:30:26Z
null
overvalue
huggingface/chat-ui
230
Task not found for this model
I tried running code on my local system and updated the model name in the .env file from "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5" to "OpenAssistant/oasst-sft-6-llama-30b-xor" and now for every prompt I am getting "Task not found for this model"
https://github.com/huggingface/chat-ui/issues/230
closed
[ "support" ]
2023-05-16T05:18:25Z
2024-12-13T01:28:06Z
4
newway-anshul
huggingface/datasets
5,868
Is it possible to change a cached file and 're-cache' it instead of re-generating?
### Feature request Hi, I have a huge file cached with `map` (over 500 GB), and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating it, given that `map` takes over 24 hours? ### Motivation For large datasets, I think it is very important because we always f...
https://github.com/huggingface/datasets/issues/5868
closed
[ "enhancement" ]
2023-05-16T03:45:42Z
2023-05-17T11:21:36Z
2
zyh3826
pytorch/TensorRT
1,920
how to convert itensor to pytorch tensor in torch-tensorrt fx mode?
Hi: I'm trying to create engine with custom plugin using torch-tensorrt fx. How do I convert ITensor to torch tensor?
https://github.com/pytorch/TensorRT/issues/1920
closed
[ "No Activity" ]
2023-05-15T11:52:46Z
2023-11-24T00:02:13Z
null
shuyuan-wang
huggingface/chat-ui
225
Special tokens for user and assistant turns?
Hi, I've been checking the example that used `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5` model. This model uses the following tokens to specify the beginning of the user and assistant: ``` "userMessageToken": "<|prompter|>", "assistantMessageToken": "<|assistant|>" ``` I'm trying to run `bigcode/starco...
https://github.com/huggingface/chat-ui/issues/225
closed
[]
2023-05-15T10:32:06Z
2023-05-15T11:06:23Z
3
frandominguezl
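A sketch of how such turn tokens are typically stitched into a prompt. The token values are taken from the issue; `build_prompt` is illustrative, not chat-ui's actual implementation:

```python
# Token values come from the issue config; build_prompt is a
# hypothetical sketch of how chat front-ends assemble turns.
USER_TOKEN = "<|prompter|>"
ASSISTANT_TOKEN = "<|assistant|>"

def build_prompt(turns):
    parts = []
    for role, text in turns:
        token = USER_TOKEN if role == "user" else ASSISTANT_TOKEN
        parts.append(token + text)
    # Trailing assistant token cues the model to produce the next reply.
    return "".join(parts) + ASSISTANT_TOKEN

print(build_prompt([("user", "Hello")]))  # <|prompter|>Hello<|assistant|>
```

Models trained with different templates (e.g. StarCoder) need their own token strings here; using the wrong ones usually degrades output quality rather than raising an error.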
huggingface/chat-ui
218
Support for Contrastive Search?
Context: https://huggingface.co/blog/introducing-csearch Passing only: "penalty_alpha":0.6, "top_k": 4, Does not seem to work, as truncate, and temperature is still required. When passing this: <pre> "parameters": { "temperature": 0.9, "penalty_alpha":0.6, "top_k": 4, "trunca...
https://github.com/huggingface/chat-ui/issues/218
closed
[]
2023-05-13T22:02:37Z
2023-09-18T13:27:20Z
2
PhNyx
huggingface/setfit
374
Resolving confusion between fine-grained classes
My dataset has 131 classes. Some of them are fine-grained, for example: - Flag fraud on the account -> **Open Dispute** - Find out if there is a fraud hold on my debit card ->**Dispute Inquiry** The model is getting confused between such classes. I have roughly 20 samples per class in my dataset and I am using `...
https://github.com/huggingface/setfit/issues/374
closed
[ "question" ]
2023-05-13T10:13:15Z
2023-11-24T15:09:55Z
null
vahuja4
huggingface/transformers.js
108
[Question] Problem when converting an embedding model.
First, I would like to thank everyone for providing and maintaining this library. It makes working with ML in JavaScript a breeze. I was working with the embedding models and tried to convert a multilingual model [("paraphrase-multilingual-MiniLM-L12-v2")](https://huggingface.co/sentence-transformers/paraphrase-mul...
https://github.com/huggingface/transformers.js/issues/108
closed
[ "question" ]
2023-05-13T09:54:12Z
2023-05-15T17:24:16Z
null
falcon027
huggingface/setfit
372
Update Previous Model with New Categories
Is there a way to add categories based on new data? For example - Initially I trained a model with 5 categories and saved the model. I now have new data that I want to feed into the model but this new data has 8 categories. Would I have to start from scratch or can I use the original model I trained? Thank you!
https://github.com/huggingface/setfit/issues/372
closed
[ "question" ]
2023-05-12T21:22:12Z
2023-11-24T15:10:46Z
null
ronils428
huggingface/dataset-viewer
1,174
Add a field, and rename another one, in /opt-in-out-urls
The current response for /opt-in-out-urls is: ``` { "urls_columns": ["url"], "has_urls_columns": true, "num_opt_in_urls": 0, "num_opt_out_urls": 4052, "num_scanned_rows": 12452281, "num_urls": 12452281 } ``` I think we should: - rename `num_urls` into `num_scanned_urls` - add `num_rows` wit...
https://github.com/huggingface/dataset-viewer/issues/1174
closed
[ "question" ]
2023-05-12T13:15:40Z
2023-05-12T13:54:14Z
null
severo
huggingface/chat-ui
207
MongoParseError: Invalid scheme
I tried to run chat-ui on my mac (Intel 2020, MacOS Ventura 13.3.1), and I get the following error: ```bash (base) thibo@mac-M:~/Documents/chat-ui$ npm install added 339 packages, and audited 340 packages in 39s 72 packages are looking for funding run `npm fund` for details found 0 vulnerabilities ...
https://github.com/huggingface/chat-ui/issues/207
closed
[]
2023-05-12T07:32:22Z
2023-05-12T08:26:39Z
1
thiborose
pytorch/pytorch
101,246
Tool for identifying where in eager model an operation is nondeterministic
### 🐛 Describe the bug Let's say you have a model code and when you run it twice you get bitwise different results. Where did it diverge? We can use TorchFunctionMode/TorchDispatchMode to localize where the first divergence occurred. ### Versions master cc @mruberry @kurtamohler
https://github.com/pytorch/pytorch/issues/101246
open
[ "triaged", "module: determinism" ]
2023-05-12T02:50:04Z
2023-05-12T14:21:45Z
null
ezyang
pytorch/TensorRT
1,912
❓ [Question] How to correctly convert model by using torch-tensorrt
## ❓ Question Hi, I am trying to convert a resnet_rmac_fpn model which is used for image retrieval. I am unable to convert it to a TensorRT model using Torch-TensorRT. According to the debug information, some of the operators are not supported by Torch-TensorRT. However, if I export the model to ONNX and then conver...
https://github.com/pytorch/TensorRT/issues/1912
closed
[ "question", "No Activity" ]
2023-05-11T18:40:58Z
2023-08-21T00:02:10Z
null
HtutLynn
huggingface/chat-ui
202
Help wanted: Installing `@huggingface` package from NPM registry
👋🏻 Sorry if I am opening a dumb issue but I was just looking into fixing some UI issues and not entirely sure how to run this project locally. I've created a `.env.local` with: ``` MONGODB_URL= HF_ACCESS_TOKEN=XXX ``` Haven't actually set the `MONGODB_URL` but did create an access token for HF. Running i...
https://github.com/huggingface/chat-ui/issues/202
closed
[]
2023-05-11T17:38:24Z
2023-05-12T11:07:10Z
5
eertmanhidde
huggingface/datasets
5,841
Absurdly slow iteration
### Describe the bug I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment: ```python a=torch.randn(100,224) a=torch.stack([a] * 10000) a.shape # %% ds=Dataset.from_d...
https://github.com/huggingface/datasets/issues/5841
closed
[]
2023-05-11T08:04:09Z
2023-05-15T15:38:13Z
4
fecet
huggingface/optimum
1,046
Make torchvision optional?
### Feature request Currently torchvision is a required dependency https://github.com/huggingface/optimum/blob/22e4fd6de3ac5e7780571570f962947bd8777fd4/setup.py#L20 ### Motivation I only work on text so I don't need vision support ### Your contribution I am sure the change would be more difficult than just "rem...
https://github.com/huggingface/optimum/issues/1046
closed
[]
2023-05-10T10:49:18Z
2023-05-12T23:05:46Z
4
BramVanroy
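A common optional-dependency pattern that such a change would likely use (a sketch, not Optimum's actual code): probe the import once at module load, and fail only when a feature that needs it is actually used.

```python
# Probe the optional dependency once; record the result.
try:
    import torchvision  # noqa: F401
    TORCHVISION_AVAILABLE = True
except ImportError:
    TORCHVISION_AVAILABLE = False

def require_torchvision(feature: str) -> None:
    """Raise a helpful error only when a vision feature is used."""
    if not TORCHVISION_AVAILABLE:
        raise ImportError(
            f"{feature} requires torchvision; install it with "
            "`pip install torchvision`."
        )

print(TORCHVISION_AVAILABLE)
```

Text-only code paths then never touch `require_torchvision`, so the dependency can move from `install_requires` to an extra.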
huggingface/datasets
5,838
Streaming support for `load_from_disk`
### Feature request Support for streaming datasets stored in object stores in `load_from_disk`. ### Motivation The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data ...
https://github.com/huggingface/datasets/issues/5838
closed
[ "enhancement" ]
2023-05-10T06:25:22Z
2024-10-28T14:19:44Z
12
Nilabhra
pytorch/TensorRT
1,898
❓ [Question] Is there any example on how to convert a T5 model that is compatible with huggingface's generate function?
## ❓ Question Is there any example on how to convert a T5 model that is compatible with huggingface's generate function and able to handle dynamic shapes?
https://github.com/pytorch/TensorRT/issues/1898
closed
[ "question", "No Activity" ]
2023-05-09T18:51:06Z
2023-08-20T00:02:15Z
null
dathudeptrai
huggingface/datasets
5,834
Is uint8 supported?
### Describe the bug I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead. While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well. Is there a way ...
https://github.com/huggingface/datasets/issues/5834
closed
[]
2023-05-09T17:31:13Z
2023-05-13T05:04:21Z
5
ryokan0123
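The same default-width behaviour can be seen in NumPy, where Python ints widen to the platform's default integer type unless a dtype is requested; in `datasets` the analogous fix is declaring the column type explicitly (e.g. `Features({"col": Value("uint8")})`). A NumPy-only sketch:

```python
import numpy as np

# Python ints default to the platform's wide integer type:
a = np.array([1, 2, 3])
print(a.dtype)  # typically int64 on Linux

# Requesting uint8 explicitly keeps the narrow type:
b = np.array([1, 2, 3], dtype=np.uint8)
assert b.dtype == np.uint8
assert b.nbytes == 3  # one byte per element, vs 8 each for int64
```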
pytorch/xla
4,994
Different Graph generations
## 🐛 Bug This code snippet is extracted from the AdamW optimizer. For different ranges of learning rate and weight decay, this optimizer generates different graphs. This is causing unexpected compilations while running the application. The fix is ...
https://github.com/pytorch/xla/issues/4994
closed
[ "question", "lowering" ]
2023-05-09T07:18:12Z
2025-05-05T12:57:35Z
null
amithrm
pytorch/pytorch
100,859
How to calculate the MACs after pruning?
### 🚀 The feature, motivation and pitch I use torch.nn.utils.prune as prune to prune the model, then I use torchprofile.profile_macs() to calculate the MACs of the pruned model, but I find that the MACs increase before prune.remove() is called to make the pruning permanent. It is normal because the additional calculation will be weigh...
https://github.com/pytorch/pytorch/issues/100859
closed
[ "oncall: quantization", "triaged" ]
2023-05-08T08:06:34Z
2023-10-05T23:32:18Z
null
machengjie321
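A sketch of why counters can report extra work before `prune.remove()`: while the reparametrization is active, the layer carries both `weight_orig` (a parameter) and `weight_mask` (a buffer), and `weight` is recomputed as their product on every forward. Counting tensors before and after makes this visible:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(8, 4)
n_before = sum(p.numel() for p in layer.parameters())  # 8*4 + 4 = 36

prune.l1_unstructured(layer, name="weight", amount=0.5)
# Reparametrized state: weight_orig stays a parameter, and a
# weight_mask buffer is added; `weight` is recomputed each forward.
n_buffers = sum(b.numel() for b in layer.buffers())    # mask: 8*4 = 32

prune.remove(layer, "weight")  # bake the mask in; make pruning permanent
n_after = sum(p.numel() for p in layer.parameters())

print(n_before, n_buffers, n_after)  # 36 32 36
```

Note that after `remove()` the zeros are stored densely, so dense-MAC profilers still count the full layer; realizing savings needs sparse kernels or structured pruning.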
pytorch/tutorials
2,313
How to calculate the MACs after pruning?
### 🚀 Describe the improvement or the new tutorial I use torch.nn.utils.prune as prune to prune the model, then I use torchprofile.profile_macs() to calculate the MACs of the pruned model, but I find that the MACs increase before prune.remove() is called to make the pruning permanent. It is normal because the additional calculation ...
https://github.com/pytorch/tutorials/issues/2313
open
[ "question" ]
2023-05-08T08:02:31Z
2023-05-26T20:02:13Z
null
machengjie321
pytorch/pytorch
100,827
How to install standalone TorchDynamo with PyTorch 1.x
### 🐛 Describe the bug For many reasons, the environment is not compatible with PyTorch 2.0. For example, Megatron-LM compiles its transformer operators written in C++, which confines it to the torch 1.x C++ extension API; otherwise there are many compile errors. For another example, DeepSpeed implements their distributed t...
https://github.com/pytorch/pytorch/issues/100827
closed
[ "dependency issue", "oncall: pt2" ]
2023-05-07T09:55:43Z
2023-05-07T21:50:41Z
null
2catycm