| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split, and it takes 40 minutes just to download in Colab. I am very short on time. Please help. | https://github.com/huggingface/datasets/issues/4101 | open | [
"enhancement"
] | 2022-04-05T16:00:15Z | 2022-04-06T13:09:01Z | 1 | Nakkhatra |
pytorch/TensorRT | 960 | ❓ [Question] Problem with cudnn dependency when compiling plugins on Windows? | ## ❓ Question
I am trying to compile a Windows DLL for Torch-TensorRT; however, I get the following traceback:
ERROR: C:/users/48698/source/libraries/torch-tensorrt-1.0.0/core/plugins/BUILD:10:11: Compiling core/plugins/register_plugins.cpp failed: undeclared inclusion(s) in rule '//core/pl... | https://github.com/pytorch/TensorRT/issues/960 | closed | [
"question",
"channel: windows"
] | 2022-04-04T00:38:36Z | 2022-09-02T17:51:14Z | null | pepinu |
huggingface/datasets | 4,074 | Error in google/xtreme_s dataset card | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
| https://github.com/huggingface/datasets/issues/4074 | closed | [
"documentation",
"dataset bug"
] | 2022-03-31T18:07:45Z | 2022-04-01T08:12:56Z | 1 | wranai |
pytorch/TensorRT | 947 | How to compile a model for multiple inputs? | 1. My model: out1, out2 = model(input1, input2)
2. How should I set the compile settings? Something like this:
trt_ts_module = torch_tensorrt.compile(torch_script_module,
inputs = [example_tensor, # Provide example tensor for input shape or...
torch_tensorrt.Input( # Specify input object with shape and dtype
... | https://github.com/pytorch/TensorRT/issues/947 | closed | [
"question"
] | 2022-03-31T08:00:41Z | 2022-03-31T20:29:39Z | null | shuaizzZ |
pytorch/data | 339 | Build the nightlies a little earlier | `torchdata` builds the nightlies at 15:00 UTC+0
https://github.com/pytorch/data/blob/198cffe7e65a633509ca36ad744f7c3059ad1190/.github/workflows/nightly_release.yml#L6
and publishes them roughly 30 minutes later. The `torchvision` nightlies are built at 11:00 UTC+0 and also published roughly 30 minutes later.
T... | https://github.com/meta-pytorch/data/issues/339 | closed | [] | 2022-03-29T15:42:24Z | 2022-03-29T19:24:52Z | 5 | pmeier |
pytorch/torchx | 441 | [Req] LSF scheduler support | ## Description
LSF scheduler support
Does the torchx team have a plan to support the LSF scheduler?
Or is there a guide for extension? I would be happy to make a PR.
## Motivation/Background
Thanks for the torchx utils. We can target various schedulers by configuring torchxconfig.
## Detailed Proposal
It would be better to support L... | https://github.com/meta-pytorch/torchx/issues/441 | open | [
"enhancement",
"module: runner",
"scheduler-request"
] | 2022-03-29T04:47:30Z | 2022-10-10T22:27:47Z | 6 | ckddls1321 |
pytorch/data | 335 | [BE] Unify `buffer_size` across datapipes | The `buffer_size` parameter is currently fairly inconsistent across datapipes:
| name | default `buffer_size` | infinite `buffer_size` | warn on infinite |
|--------------------|-------------------------|--------------------------|--------------------|
| Demultiplexer | ... | https://github.com/meta-pytorch/data/issues/335 | open | [
"Better Engineering"
] | 2022-03-28T17:36:32Z | 2022-07-06T18:44:05Z | 8 | pmeier |
huggingface/datasets | 4,041 | Add support for IIIF in datasets | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Inte... | https://github.com/huggingface/datasets/issues/4041 | open | [
"enhancement"
] | 2022-03-28T15:19:25Z | 2022-04-05T18:20:53Z | 1 | davanstrien |
pytorch/vision | 5,686 | Question on segmentation code | ### 🚀 The feature
Hello.
I want to ask you a simple question.
I'm not sure if it's right to post a question in this 'Feature request' category.
In the train.py code in references/segmentation, the get_dataset function sets the COCO dataset classes to 21.
Why is the number of classes 21?
Is it wrong to set the ... | https://github.com/pytorch/vision/issues/5686 | closed | [
"question",
"topic: semantic segmentation"
] | 2022-03-28T06:05:39Z | 2022-03-28T07:29:35Z | null | kcs6568 |
pytorch/torchx | 435 | [torchx/examples] Remove usages of custom components in app/pipeline examples | ## 📚 Documentation
Since we are making TorchX focused on Job launching and less about authoring components and AppDefs, we need to adjust our app and pipeline examples to demonstrate running the applications with the builtin `dist.ddp` and `utils.python` components rather than showing how to author a component for ... | https://github.com/meta-pytorch/torchx/issues/435 | closed | [
"documentation"
] | 2022-03-25T23:34:26Z | 2022-05-25T22:52:40Z | 0 | kiukchung |
huggingface/datasets | 4,027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | ## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`sq... | https://github.com/huggingface/datasets/issues/4027 | closed | [
"bug",
"duplicate"
] | 2022-03-25T16:22:28Z | 2022-04-07T10:29:52Z | 2 | MoritzLaurer |
pytorch/tutorials | 1,872 | Transfer learning tutorial: Loss and Accuracy curves the wrong way | Hey,
I have a question concerning the transfer learning tutorial (https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html).
For a few days, I've been trying to figure out why the validation and training curves are reversed there. By this, I mean that for general neural networks the training curves ... | https://github.com/pytorch/tutorials/issues/1872 | closed | [
"question",
"intro"
] | 2022-03-25T15:23:39Z | 2023-03-06T21:50:25Z | null | AlexanderGeng |
pytorch/pytorch | 74,741 | [FSDP] How to use fsdp in GPT model in Megatron-LM | ### 🚀 The feature, motivation and pitch
Are there any examples, similar to DeepSpeed, that exercise the FSDP functionality of PyTorch? It would be nice to provide the GPT model in Megatron-LM.
### Alternatives
I hope to provide examples of benchmarking DeepSpeed, to facilitate the in-depth use of the fsdp functio... | https://github.com/pytorch/pytorch/issues/74741 | closed | [] | 2022-03-25T08:30:05Z | 2022-03-25T21:12:04Z | null | Baibaifan |
pytorch/text | 1,662 | How to install LTS (0.9.2)? | ## ❓ Questions and Help
**Description**
I've found that my PyTorch version is 1.8.2, so according to https://github.com/pytorch/text/#installation , the torchtext version is 0.9.2:

But as I use `conda... | https://github.com/pytorch/text/issues/1662 | closed | [] | 2022-03-25T08:12:03Z | 2024-03-11T00:55:30Z | null | PolarisRisingWar |
pytorch/pytorch | 74,740 | How to export onnx with dynamic batch size for models with multiple outputs? | ## Issue description
I want to export my model to onnx. Following is my code:
```python
torch.onnx._export(
    model,
    dummy_input,
    args.output_name,
    input_names=[args.input],
    output_names=args.output,
    opset_version=args.opset,
)
```
It works well. But I want to export it with dynamic batch size. So I try this:
torch.onnx... | https://github.com/pytorch/pytorch/issues/74740 | closed | [] | 2022-03-25T07:55:45Z | 2022-03-25T08:15:58Z | null | LLsmile |
pytorch/pytorch | 74,616 | __rpow__(self, other) OpInfo should not test the case where `other` is a Tensor | ### 🐛 Describe the bug
After https://github.com/pytorch/pytorch/pull/74280 (cc @mruberry), the `__rpow__` OpInfo has a sample input where `other` is a Tensor. This cannot happen during normal execution: to get to `Tensor.__rpow__` a user does the following:
```
# self = some_tensor
# other = not_a_tensor
not_a_... | https://github.com/pytorch/pytorch/issues/74616 | open | [
"module: tests",
"triaged"
] | 2022-03-23T15:28:17Z | 2022-04-18T02:34:55Z | null | zou3519 |
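The dispatch argument above can be illustrated with plain Python (a stand-in class, not PyTorch itself): `__rpow__` only fires when the left operand's `__pow__` returns `NotImplemented`, so two instances of the same class never reach it.

```python
class T:
    def __pow__(self, other):
        return "pow"
    def __rpow__(self, other):
        return "rpow"

t = T()
print(t ** 2)    # "pow":  T.__pow__ handles it
print(2 ** t)    # "rpow": int.__pow__ returns NotImplemented, so T.__rpow__ runs
print(t ** T())  # "pow":  both operands are T, so __rpow__ is never reached
```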
pytorch/TensorRT | 936 | ❓ [Question] RuntimeError: [Error thrown at core/conversion/converters/impl/select.cpp:236] Expected const_layer to be true but got false | ## ❓ Question
When I convert the JIT model, I get the error above.
This is my forward code:
The input `x` shape is `(batch, 6, height, width)`; the first step is to split `x` into two tensors, but it fails:
```
def forward(self, x):
fg = x[:,0:3,:,:] ## this line got error
bg = x[:,3:,:,:]
fg... | https://github.com/pytorch/TensorRT/issues/936 | closed | [
"question",
"component: converters",
"No Activity"
] | 2022-03-22T02:40:39Z | 2023-02-10T00:13:18Z | null | pupumao |
pytorch/text | 1,661 | What is the replacement for legacy? | ## ❓ Questions and Help
**Description**
In torchtext 0.12.0 the legacy module has been removed, so how can I implement the same functionality as the class legacy.Field?
Thanks for your help. | https://github.com/pytorch/text/issues/1661 | closed | [] | 2022-03-21T11:03:55Z | 2022-10-04T01:51:51Z | null | 1152545264 |
pytorch/serve | 1,518 | How to return a dict response, not a list | <!--
Thank you for suggesting an idea to improve torchserve model serving experience.
Please fill in as much of the template below as you're able.
-->
## Is your feature request related to a problem? Please describe.
When I return a dict value, se... | https://github.com/pytorch/serve/issues/1518 | closed | [] | 2022-03-20T10:30:09Z | 2022-03-25T20:14:17Z | null | liuhuiCNN |
pytorch/data | 310 | MapDatapipe Mux/Demux Support | ### 🚀 The feature
MapDatapipes are missing Mux and Demux pipes as noted in https://github.com/pytorch/pytorch/issues/57031
Talked to @ejguan on https://discuss.pytorch.org/t/mapdatapipe-support-mux-demux/146305, I plan to do a PR with Mux/Demux added. However, I will add rough outlines / ideas here first. I plan... | https://github.com/meta-pytorch/data/issues/310 | open | [] | 2022-03-19T19:31:49Z | 2022-03-27T03:31:32Z | 7 | josiahls |
pytorch/data | 303 | DataPipe for GCS (Google Cloud Storage) | ### 🚀 The feature
Build a DataPipe that allows users to connect to GCS (Google Cloud Storage). There is a chance that existing DataPipes may suffice, so we should examine the relevant APIs first.
### Motivation, pitch
GCS (Google Cloud Storage) is one of the commonly used cloud storage for storing data.
##... | https://github.com/meta-pytorch/data/issues/303 | closed | [] | 2022-03-16T19:01:03Z | 2023-03-07T14:49:15Z | 2 | NivekT |
pytorch/data | 302 | Notes on shuffling, sharding, and batchsize | (I'm writing this down here to have a written trace, but I'm looking forward to discuss this with you all in our upcoming meetings :) )
I spent some time porting the torchvision training recipes to use datapipes, and I noticed that the model I trained on ImageNet with DPs was much less accurate than the one with reg... | https://github.com/meta-pytorch/data/issues/302 | open | [] | 2022-03-16T18:08:41Z | 2022-05-24T12:55:18Z | 28 | NicolasHug |
pytorch/data | 301 | Add TorchArrow Nightly CI Test | ### 🚀 The feature
TorchArrow nightly build is now [available for Linux](https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html) (other versions will be next).
We should add TorchArrow nightly CI tests for these [TorchArrow dataframe related unit tests](https://github.com/pytorch/data/blob/main/test/test... | https://github.com/meta-pytorch/data/issues/301 | closed | [
"good first issue"
] | 2022-03-16T17:28:27Z | 2022-05-09T15:38:31Z | 1 | NivekT |
pytorch/pytorch | 74,288 | How to Minimize Rounding Error in torch.autograd.functional.jacobian? | ### 🐛 Describe the bug
Before I start, let me express my sincerest gratitude to issue #49171 for making it possible to take the Jacobian w.r.t. all model parameters! A great functionality indeed!
I am raising an issue about the approximation error when the jacobian function goes to high dimensions. This is necessar... | https://github.com/pytorch/pytorch/issues/74288 | closed | [
"module: numerical-stability",
"module: autograd",
"triaged"
] | 2022-03-16T09:25:18Z | 2022-03-17T14:17:29Z | null | QiyaoWei |
pytorch/pytorch | 74,256 | Create secure credential storage for metrics credentials and associated documentation on how to regenerate them if needed | cc @seemethere @malfet @pytorch/pytorch-dev-infra | https://github.com/pytorch/pytorch/issues/74256 | open | [
"module: ci",
"triaged"
] | 2022-03-15T20:21:20Z | 2022-03-16T17:30:02Z | null | seemethere |
pytorch/torchx | 422 | kubernetes: add support for persistent volume claim volumes | ## Description
Add support for PersistentVolumeClaim mounts to Kubernetes scheduler.
## Motivation/Background
https://github.com/pytorch/torchx/pull/420 adds bindmounts t... | https://github.com/meta-pytorch/torchx/issues/422 | closed | [] | 2022-03-15T18:21:10Z | 2022-03-16T22:12:26Z | 0 | d4l3k |
pytorch/TensorRT | 929 | ❓ [Question] Expected isITensor() to be true but got false Requested ITensor from Var, however Var type is c10::IValue | I tried to use python trtorch==0.4.1 to compile my own pytorch jit-traced model, and it fails with the following information:
`
Traceback (most recent call last):
File "./prerecall_server.py", line 278, in <module>
ModelServing(args),
File "./prerecall_server.py", line 133, in __init__
... | https://github.com/pytorch/TensorRT/issues/929 | closed | [
"question",
"No Activity",
"component: partitioning"
] | 2022-03-15T10:17:07Z | 2023-04-01T00:02:11Z | null | clks-wzz |
pytorch/tutorials | 1,860 | Where is the mnist_sample notebook? | In tutorial [WHAT IS TORCH.NN REALLY?](https://pytorch.org/tutorials/beginner/nn_tutorial.html#closing-thoughts), `Closing thoughts` part:
```
To see how simple training a model can now be, take a look at the mnist_sample sample notebook.
```
Does `mnist_sample notebook` refer to https://github.com/pytorch/tuto... | https://github.com/pytorch/tutorials/issues/1860 | closed | [] | 2022-03-14T12:21:14Z | 2022-08-18T17:35:34Z | null | Yang-Xijie |
pytorch/torchx | 421 | Document usage of .torchxconfig | ## 📚 Documentation
## Link
Current `.torchxconfig` docs (https://pytorch.org/torchx/main/runner.config.html) explain how it works and its APIs but do not provide any practical guidance on what configs can be put into it and why it's useful.
## What does it currently say?
Nothing wrong with what it currently s... | https://github.com/meta-pytorch/torchx/issues/421 | closed | [] | 2022-03-12T00:30:59Z | 2022-03-28T20:58:44Z | 1 | kiukchung |
pytorch/torchx | 418 | cli/colors: crash when importing if sys.stdout is closed | ## 🐛 Bug
Sometimes `sys.stdout` is closed and `isatty()` throws an error at https://github.com/pytorch/torchx/blob/main/torchx/cli/colors.py#L11
Switching to a variant that checks if it's closed should work:
```
not sys.stdout.closed and sys.stdou... | https://github.com/meta-pytorch/torchx/issues/418 | closed | [
"bug",
"cli"
] | 2022-03-11T19:24:44Z | 2022-03-11T23:32:30Z | 0 | d4l3k |
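The guard suggested in the report above can be sketched as follows (the helper name is made up for illustration); checking `closed` before `isatty()` avoids the ValueError a closed stream would raise:

```python
import io
import sys

def supports_color(stream) -> bool:
    # Check `closed` first: calling isatty() on a closed stream
    # raises ValueError, which is the crash described in the issue.
    return stream is not None and not stream.closed and stream.isatty()

buf = io.StringIO()
print(supports_color(buf))   # False: a StringIO is not a tty
buf.close()
print(supports_color(buf))   # False: short-circuits, no ValueError
```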
pytorch/extension-cpp | 76 | How to debug in cuda-pytorch env? | Hi! I am wondering how to debug in such environment? I have tried to insert a "printf("hello wolrd")" sentence in .cu file, but it compiles failure! If I delete it, everything works fine..... So how you debug in such environment? Thank you!!!! | https://github.com/pytorch/extension-cpp/issues/76 | open | [] | 2022-03-10T07:45:31Z | 2022-03-10T07:45:31Z | null | Arsmart123 |
huggingface/datasets | 3,881 | How to use Image folder | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | https://github.com/huggingface/datasets/issues/3881 | closed | [
"question"
] | 2022-03-09T21:18:52Z | 2022-03-11T08:45:52Z | null | rozeappletree |
pytorch/examples | 969 | DDP: why does every process allocate memory of GPU 0 and how to avoid it? | Run [this](https://github.com/pytorch/examples/tree/main/imagenet) example with 2 GPUs.
Process 2 will allocate some memory on GPU 0.
```
python main.py --multiprocessing-distributed --world-size 1 --rank 0
```

testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Prop... | https://github.com/huggingface/datasets/issues/3854 | closed | [
"question"
] | 2022-03-08T09:40:52Z | 2024-03-23T12:40:58Z | null | amanjaiswal777 |
pytorch/TensorRT | 912 | ✨ [Feature] New Release for pip | Would it be possible to get a new release for use with pip?
There have been quite a few features and bug-fixes added since November, and it would be great to have an up to date version available.
I know that docker containers are often recommended, but that's often not a viable option.
Thank you for all of the... | https://github.com/pytorch/TensorRT/issues/912 | closed | [
"question"
] | 2022-03-06T05:27:27Z | 2022-03-06T21:25:13Z | null | dignakov |
pytorch/torchx | 405 | SLURM quality of life improvements | ## Description
Making a couple of requests to improve QoL on SLURM
## Detailed Proposal
It would be helpful to have -
- [x] The ability to specify the output path. Currently, you need to cd to the right path for this, which generally needs a helper function to set up the directory, cd to it, and then launch via ... | https://github.com/meta-pytorch/torchx/issues/405 | open | [
"slurm"
] | 2022-03-04T17:42:08Z | 2022-04-14T21:42:21Z | 5 | mannatsingh |
pytorch/serve | 1,487 | how to get model.py file ? | `https://github.com/pytorch/serve/blob/master/docker/README.md#create-torch-model-archiver-from-container` in
the 4 step ,how to get model.py fileοΌ
I followed the doc step by step οΌbut in step 4
`torch-model-archiver --model-name densenet161 --version 1.0 --model-file /home/model-server/examples/image_classifier... | https://github.com/pytorch/serve/issues/1487 | closed | [] | 2022-03-04T01:41:59Z | 2022-03-04T20:03:41Z | null | jaffe-fly |
pytorch/pytorch | 73,699 | How to get tolerance override in OpInfo-based test? | ### 🐛 Describe the bug
The documentation appears to be wrong; it suggests using self.rtol and self.precision:
https://github.com/pytorch/pytorch/blob/4168c87ed3ba044c9941447579487a2f37eb7973/torch/testing/_internal/common_device_type.py#L1000
self.tol doesn't seem to exist in my tests.
I did find a self.rel_t... | https://github.com/pytorch/pytorch/issues/73699 | open | [
"module: docs",
"triaged",
"module: testing"
] | 2022-03-02T22:48:11Z | 2022-03-07T14:42:39Z | null | zou3519 |
pytorch/vision | 5,510 | [RFC] How do we want to deal with images that include alpha channels? | This discussion started in https://github.com/pytorch/vision/pull/5500#discussion_r816503203 and @vfdev-5 and I continued offline.
PIL as well as our image reading functions support RGBA images
https://github.com/pytorch/vision/blob/95d418970e6dbf2e4d928a204c4e620da7bccdc0/torchvision/io/image.py#L16-L31
but o... | https://github.com/pytorch/vision/issues/5510 | closed | [
"module: datasets",
"module: transforms",
"prototype"
] | 2022-03-02T09:43:42Z | 2023-03-28T13:01:09Z | null | pmeier |
pytorch/pytorch | 73,600 | Add a section in DDP tutorial to explain why DDP sometimes is slower than local training and how to improve it | ### 📚 The doc issue
Add a section in DDP tutorial to explain why DDP sometimes is slower than local training and how to improve it
### Suggest a potential alternative/fix
_No response_
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-... | https://github.com/pytorch/pytorch/issues/73600 | open | [
"oncall: distributed",
"triaged",
"module: ddp"
] | 2022-03-01T20:34:58Z | 2022-03-08T22:03:17Z | null | zhaojuanmao |
pytorch/tensorpipe | 431 | How to enable CudaGdrChannel registration in tensorpipeAgent when using pytorch's rpc | Can we enable it just by defining some environment variables, or do we need to recompile pytorch? Thanks! | https://github.com/pytorch/tensorpipe/issues/431 | closed | [] | 2022-03-01T08:14:17Z | 2022-03-01T12:09:53Z | null | eedalong |
pytorch/tutorials | 1,839 | Missing 'img/teapot.jpg', 'img/trilobite.jpg' for `MODEL UNDERSTANDING WITH CAPTUM` tutorial. | Running this tutorial: https://pytorch.org/tutorials/beginner/introyt/captumyt.html
Could not find 'img/teapot.jpg' or 'img/trilobite.jpg' under the _static folder.
Could anyone help provide them?
Thanks! | https://github.com/pytorch/tutorials/issues/1839 | closed | [
"question"
] | 2022-02-26T10:32:52Z | 2022-10-17T16:24:06Z | null | MonkandMonkey |
pytorch/data | 256 | Support `keep_key` in `Grouper`? | `IterKeyZipper` has an option to keep the key that was zipped on:
https://github.com/pytorch/data/blob/2cf1f208e76301f3e013b7569df0d75275f1aaee/torchdata/datapipes/iter/util/combining.py#L53
Is this something we want to support going forward? If yes, it would be nice to have this also on `Grouper` and possibly ot... | https://github.com/meta-pytorch/data/issues/256 | closed | [
"good first issue"
] | 2022-02-25T08:39:53Z | 2023-01-27T19:03:08Z | 15 | pmeier |
pytorch/TensorRT | 894 | ❓ [Question] Can you convert model that operates on custom classes? | ## ❓ Question
I have a torch module that creates objects of custom classes that have tensors as fields. It can be torch.jit.scripted but torch.jit.trace can be problematic. When I torch.jit.script module and then torch_tensorrt.compile it I get the following error: `Unable to get schema for Node %317 : __torch__.src... | https://github.com/pytorch/TensorRT/issues/894 | closed | [
"question"
] | 2022-02-24T09:51:13Z | 2022-05-18T21:21:05Z | null | MarekPokropinski |
pytorch/xla | 3,391 | I want to do multi-node multi-GPU training; how should I configure the environment? | ## ❓ Questions and Help
Running XLA multi-GPU multi-node: I know that I need to set XRT_SHARD_WORLD_SIZE and XRT_WORKERS, but I don't know how to configure the value of XRT_WORKERS.
Are there any examples I can refer to? | https://github.com/pytorch/xla/issues/3391 | closed | [
"stale",
"xla:gpu"
] | 2022-02-23T06:52:01Z | 2022-04-28T00:10:36Z | null | ZhongYFeng |
pytorch/TensorRT | 881 | ❓ [Question] How do you convert part of the model to TRT? | ## ❓ Question
Is it possible to convert only part of the model to TRT? I have a model that cannot be directly converted to TRT because it uses custom classes. I wanted to convert only the modules that can be converted, but when I tried, torch could not save the result.
## What you have already tried
I tried the following:
`... | https://github.com/pytorch/TensorRT/issues/881 | closed | [
"question"
] | 2022-02-18T09:00:43Z | 2022-02-19T23:57:17Z | null | MarekPokropinski |
pytorch/TensorRT | 880 | ❓ [Question] What is the difference between docker built on PyTorch NGC Container and PyTorch NGC Container? | ## ❓ Question
Since PyTorch NGC 21.11+ already includes Torch-TensorRT, is it possible to use Torch-TensorRT directly in PyTorch NGC Container?
## What you have already tried
I read the README and tried to build docker according to it, but it keeps failing.
## Environment
> Build information about Torch-... | https://github.com/pytorch/TensorRT/issues/880 | closed | [
"question"
] | 2022-02-18T08:40:22Z | 2022-02-19T23:56:30Z | null | Guangyun-Xu |
pytorch/serve | 1,440 | [Discussion]: How to extend the base handler | Recently we've realized that an easy place for new contributors to improve torchserve is to either
1. Add a reference example in `examples`
2. Make an improvement to the base handler
1 is easiest, but it means that users who want to benefit from that example need to go through source code and adapt it to their...
"enhancement"
] | 2022-02-17T16:15:09Z | 2022-05-04T03:57:34Z | null | msaroufim |
pytorch/TensorRT | 876 | ❓ [Question] How to Enable the Torch-TensorRT Partition Feature? | ## ❓ Question
Hello!
I want to use TensorRT to run VectorNet from https://github.com/xk-huang/yet-another-vectornet
However, when I try to convert the TorchScript using torchtrtc, it terminates with an unsupported op: torch_scatter::scatter_max
```
terminate called after throwing an instance of 'torch::j... | https://github.com/pytorch/TensorRT/issues/876 | closed | [
"question"
] | 2022-02-16T08:01:25Z | 2022-02-19T23:57:32Z | null | huangxiao2008 |
pytorch/text | 1,615 | How to build pytorch text with system third_party libraries? | ## ❓ Questions and Help
**Description**
Three packages are under [pytorch text third_party](https://github.com/pytorch/text/tree/main/third_party). However, I personally prefer using system-installed packages:
- libre2-dev
- libdouble-conversion-dev
- libsentencepiece-dev
In addition, isn't there a **CMake... | https://github.com/pytorch/text/issues/1615 | open | [] | 2022-02-16T03:03:31Z | 2023-04-18T06:07:10Z | null | jiapei100 |
pytorch/torchx | 388 | RFC: Improve OCI Image Python Tooling | ## Description
Quite a few of the cloud services / cluster tools for running ML jobs use OCI/Docker containers so I've been looking into how to make dealing with these easier.
Container based services:
* Kubernetes / Volcano scheduler
* AWS EKS / Batch
*... | https://github.com/meta-pytorch/torchx/issues/388 | open | [
"enhancement",
"RFC",
"kubernetes",
"slurm"
] | 2022-02-11T04:47:27Z | 2023-01-23T14:54:10Z | 1 | d4l3k |
huggingface/nn_pruning | 33 | What is the difference between "finetune" and "final-finetune" in `/example`. | Hello,
Thanks for the amazing repo!
I'm wondering what is the difference between "finetune" and "final-finetune" in `/example`.
Do we train the model and the mask score in the finetune stage, and only train the optimized model in the final-finetune stage?
Is there a way to directly save the optimized model an... | https://github.com/huggingface/nn_pruning/issues/33 | open | [] | 2022-02-11T03:25:13Z | 2023-01-08T14:27:37Z | null | eric8607242 |
pytorch/TensorRT | 862 | ❓ [Question] Running the same TorchScript with the same input produces different results. | ## ❓ Question
I'm trying to run a pretrained resnet50 model from torchvision.models. enabled_precisions is set to torch.half.
Each time I load the same resnet50 TorchScript, using the same input (which is set to zero using np.zeros). But after running several times I've found the output is not stable.
## W... | https://github.com/pytorch/TensorRT/issues/862 | closed | [
"question",
"No Activity"
] | 2022-02-10T12:18:34Z | 2022-09-10T00:02:32Z | null | SeTriones |
pytorch/TensorRT | 858 | ❓ [Question] ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory | ## ❓ Question
As I can't install `torch-tensorrt` for some reason using this method: `pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases
`
I download `torch-tensorrt` from here `https://github.com/NVIDIA/Torch-TensorRT/releases/tag/v1.0.0`
using `pip install torch_tensorrt-1.0.0-cp36-cp3... | https://github.com/pytorch/TensorRT/issues/858 | closed | [
"question",
"No Activity"
] | 2022-02-09T03:27:44Z | 2022-06-19T12:55:25Z | null | Biaocsu |
pytorch/TensorRT | 856 | ❓ [Question] Is it possible to use a model optimized through Torch-TensorRT in LibTorch under Windows? | ## ❓ Question
I would need to optimize an already trained segmentation model through TorchTensorRT, the idea would be to optimize the model by running the [newest PyTorch NGC docker image](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-01.html#rel_22-01) under WSL2, exporting the model ... | https://github.com/pytorch/TensorRT/issues/856 | closed | [
"question",
"No Activity",
"channel: windows"
] | 2022-02-08T10:22:57Z | 2022-08-27T00:03:53Z | null | andreabonvini |
pytorch/TensorRT | 852 | How to set a custom GCC path when compiling the source code | ## ❓ Question
How to set the GCC path when compiling the source code
## What you have already tried
I am trying to build Torch-TensorRT using locally installed cuDNN & TensorRT, but the following error occurred:
 -> T:
r"""Cast... | https://github.com/pytorch/pytorch/issues/72365 | closed | [
"triaged",
"module: numpy",
"module: ux"
] | 2022-02-04T21:44:27Z | 2023-05-13T06:07:10Z | null | marcozullich |
pytorch/text | 1,581 | Specified Field dtype <torchtext.legacy.data.pipeline.Pipeline object at ...> can not be used with use_vocab=False because we do not know how to numericalize it. | ## ❓ Questions and Help
**Description**
I am trying to implement a sequence (multi-output) regression task using `torchtext`, but I am getting the error in the title.
torch version: 1.10.1
torchtext version: 0.11.1
Here's how I proceed:
**Given.** se... | https://github.com/pytorch/text/issues/1581 | open | [
"legacy"
] | 2022-02-04T16:25:50Z | 2022-04-17T08:46:36Z | null | MSiba |
pytorch/data | 195 | Documentation Improvements Tracker | Here are some improvements that we should make to the documentation. Some of these likely should be completed before beta release.
Crucial:
- [x] Add docstrings for the class `IterDataPipe` and `MapDataPipe`
https://github.com/pytorch/pytorch/pull/72618
- [x] Review the categorization of `IterDataPipe` in `torc... | https://github.com/meta-pytorch/data/issues/195 | open | [
"todo"
] | 2022-02-03T19:39:09Z | 2022-06-02T15:18:39Z | 3 | NivekT |
pytorch/TensorRT | 843 | ❓ [Question] Trying to find compatible versions between two different environments | ## ❓ Question
I'm trying to save a serialized TensorRT-optimized model using torch_tensorrt from one environment and then load it in another environment (different GPUs: one has a Quadro M1000M and the other a Tesla P100).
In both environments I don't have full sudo control where I can install whatever I want (i.e... | https://github.com/pytorch/TensorRT/issues/843 | closed | [
"question",
"No Activity"
] | 2022-02-01T19:33:31Z | 2022-05-20T00:02:07Z | null | hanbrianlee |
pytorch/functorch | 433 | Determine how to mitigate the challenge of pytorch/pytorch changes breaking functorch | We get broken by pytorch/pytorch on an almost daily basis. Some of these changes are easy to resolve, some are not easy to resolve. This has cost me 10s of hours so far and going forward will cost even more. We should come up with some way to mitigate this.
There are at least two axes for the proposals. On one axis ... | https://github.com/pytorch/functorch/issues/433 | closed | [
"actionable",
"needs design"
] | 2022-02-01T15:54:27Z | 2022-10-17T19:55:44Z | null | zou3519 |
huggingface/transformers | 15,404 | What is the equivalent way to write those lines? | https://github.com/huggingface/transformers/issues/15404 | closed | [] | 2022-01-29T16:03:12Z | 2022-02-18T21:37:08Z | null | mathshangw |
huggingface/dataset-viewer | 124 | Cache /valid? | <strike>It is called multiple times per second by moon landing, and it impacts a lot the loading time of the /datasets page (https://github.com/huggingface/moon-landing/issues/1871#issuecomment-1024414854).</strike>
Currently, several queries are done to check all the valid datasets on every request | https://github.com/huggingface/dataset-viewer/issues/124 | closed | [
"question"
] | 2022-01-28T17:37:47Z | 2022-01-31T20:31:41Z | null | severo |
pytorch/pytorch | 71,991 | How to make an LSTM Bidirectional? | ### π Describe the bug
Goal: make LSTM `self.classifier()` learn from bidirectional layers.
`# !` = code lines of interest
**Question:**
What changes to `LSTMClassifier` do I need to make, in order to have this LSTM work bidirectionally?
---
I *think* the problem is in `forward()`. It learns from the **... | https://github.com/pytorch/pytorch/issues/71991 | closed | [] | 2022-01-28T16:03:23Z | 2022-01-31T09:59:27Z | null | danielbellhv |
pytorch/TensorRT | 830 | β [Question] Why BERT Base is slower w/ Torch-TensorRT than native PyTorch? | ## β Question
<!-- Your question -->
I'm trying to optimize Hugging Face's BERT Base uncased model using Torch-TensorRT. The code works after disabling full compilation (`require_full_compilation=False`), and the avg latency is ~10ms on T4. However, it is slower than the native PyTorch implementation (~6ms on T4). In c... | https://github.com/pytorch/TensorRT/issues/830 | open | [
"question",
"No Activity",
"performance"
] | 2022-01-26T10:55:56Z | 2023-11-09T09:13:15Z | null | void-main |
pytorch/torchx | 375 | [torchx/config] Generate docs on the available configuration options in .torchxconfig | ## π Documentation
Note: not a request for correction of documentation!
## Link
https://pytorch.org/torchx/latest/experimental/runner.config.html
## What does it currently say?
Nothing wrong with the current docs, but would be nice to have a list of the options that are "set-able" via .torchxconfig
## Wh... | https://github.com/meta-pytorch/torchx/issues/375 | open | [] | 2022-01-25T23:49:07Z | 2022-04-08T18:23:57Z | 2 | kiukchung |
pytorch/TensorRT | 824 | β [Question] How to use FP16 precision in C++ | ## β Question
I am trying to run inference on an FP16 engine in C++. `engine->getBindingDataType(i)` correctly returns '1' (kHALF) for all bindings. However, when I use the following lines to get the output, the compiler obviously interprets it as normal floats (=FP32)
```
std::vector<float> cpu_output(ge... | https://github.com/pytorch/TensorRT/issues/824 | closed | [
"question"
] | 2022-01-25T09:52:19Z | 2022-01-25T10:01:40Z | null | DavidBaldsiefen |
pytorch/text | 1,537 | [META] how do we want to handle stale issues/PRs? | ## β Questions and Help
There are many issues and PRs in the repo that are either related to long-gone legacy APIs or have been overtaken by events. How do we want to track/manage these potentially stale issues?
Options:
- A bot
- I don't like this option because it can permit false positives which makes it hard for us... | https://github.com/pytorch/text/issues/1537 | closed | [] | 2022-01-24T17:24:28Z | 2022-03-07T22:52:11Z | null | erip |
pytorch/TensorRT | 823 | β [Question] How do you override or remove evaluators | ## β Question
I am trying to use YOLOv5 with Torch-TensorRT. When I load the model, I get the following error message (among others):
```
ERROR: [Torch-TensorRT TorchScript Conversion Context] - 4: [layers.cpp::validate::2385] Error Code 4: Internal Error (%3264 : Tensor = aten::mul(%3263, %3257) # /home/.../yol... | https://github.com/pytorch/TensorRT/issues/823 | closed | [
"question",
"component: converters",
"No Activity"
] | 2022-01-24T08:43:17Z | 2022-11-21T16:12:05Z | null | DavidBaldsiefen |
pytorch/TensorRT | 820 | β [Question] Has anyone encountered this: RuntimeError: expected type comment but found 'eof' here | ## β Question
when I run a compile command like this:
```python
trt_ts_module = torch_tensorrt.compile(model,
inputs=[torch_tensorrt.Input((1, 3, 128, 128), dtype=torch.float32),
torch_tensorrt.Input((1, 3, 320, 320), dtype=torch.float32)],
... | https://github.com/pytorch/TensorRT/issues/820 | closed | [
"question",
"No Activity"
] | 2022-01-20T13:57:51Z | 2022-05-05T00:02:27Z | null | laisimiao |
pytorch/data | 175 | Refactor test suite to be more readable? | While working on #174, I also worked on the test suite. In there we have the ginormous tests that are hard to parse, because they do so many things at the same time:
https://github.com/pytorch/data/blob/c06066ae360fc6054fb826ae041b1cb0c09b2f3b/test/test_datapipe.py#L382-L426
I was wondering if there is a reason f... | https://github.com/meta-pytorch/data/issues/175 | open | [
"Better Engineering"
] | 2022-01-20T09:52:17Z | 2023-04-11T16:59:28Z | 6 | pmeier |
pytorch/functorch | 400 | how to get related commits of pytorch/pytorch and pytorch/functorch ? | For some reason, I need to install the newest **pytorch/functorch** from source, but I don't know the matching newest **pytorch/pytorch** source. If pytorch/pytorch and pytorch/functorch are not compatible, functorch will not work. How do I get the newest matching pair of a pytorch/pytorch commit and a pytorch/functorch commit?... | https://github.com/pytorch/functorch/issues/400 | open | [] | 2022-01-20T03:25:26Z | 2022-01-20T15:43:40Z | null | GipsonLeo |
huggingface/transformers | 15,223 | where is the 4.16.0dev?? | I'm running the run_mlm.py script.
There is such a line,
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.16.0.dev0")
but where is it?
I can't find it via pip, nor on GitHub. | https://github.com/huggingface/transformers/issues/15223 | closed | [] | 2022-01-19T11:41:04Z | 2022-02-27T15:02:00Z | null | sipie800 |
pytorch/TensorRT | 819 | Build torch-trt failed in Ubuntu18.04 | I tried to build the project from source according to the guide at https://nvidia.github.io/Torch-TensorRT/tutorials/installation.html with Bazel, but it failed.
My environment:
```
os: Ubuntu18.04
gcc: 7.5.0
g++: 7.5.0
cuda: 11.3
cudnn: 8.2
tensorRT: 8.2
torch-trt branch: ngc-21.12
bazel: 4.2.1 (installed in co... | https://github.com/pytorch/TensorRT/issues/819 | closed | [
"question"
] | 2022-01-19T11:24:27Z | 2022-01-20T01:42:26Z | null | Mookel |
pytorch/xla | 3,305 | how to get relative commits of pytorch/pytorch and pytorch/xla ? | ## β Questions and Help
For some reason, I need to install the newest torch XLA from source, but I don't know the matching newest pytorch/pytorch source. If pytorch/pytorch and pytorch/xla are not compatible, XLA will not work. How do I get the newest matching pair of a pytorch/pytorch commit and a pytorch/xla commit?
Fo... | https://github.com/pytorch/xla/issues/3305 | closed | [] | 2022-01-19T08:38:55Z | 2022-02-19T00:30:08Z | null | GipsonLeo |
pytorch/pytorch | 71,272 | UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Seems like `optimizer.step()... | ### π Describe the bug
I am following the same usage that is shown [here](https://pytorch.org/docs/1.10.1/generated/torch.optim.lr_scheduler.StepLR.html#torch.optim.lr_scheduler.StepLR) for `StepLR`:
```python
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
for epoch in range(100):
train(...... | https://github.com/pytorch/pytorch/issues/71272 | open | [
"needs reproduction",
"module: optimizer",
"triaged",
"module: LrScheduler"
] | 2022-01-13T19:03:46Z | 2022-01-20T16:33:17Z | null | seyeeet |
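For reference, the ordering the warning asks for can be sketched as follows (a toy model, not the reporter's loop; the key point is calling `optimizer.step()` before `scheduler.step()`, once per epoch):

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(60):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()   # optimizer first...
    scheduler.step()   # ...then the scheduler

# lr decayed by gamma at epochs 30 and 60: 0.1 -> 0.01 -> 0.001
print(round(optimizer.param_groups[0]["lr"], 6))  # 0.001
```

If the warning still fires with this ordering, something else (e.g. AMP's `GradScaler` or another wrapper) may be overriding `optimizer.step()`.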
pytorch/xla | 3,283 | How to benchmark the JIT / XLA? | ## β Questions and Help
Dear JAX developers,
I am trying to better understand the performance of JAX and its underlying just-in-time compilation architecture, but am puzzled how to get access to this information. For example, it would be helpful to distinguish how much time is spent tracing in Python, doing HLO o... | https://github.com/pytorch/xla/issues/3283 | closed | [] | 2022-01-08T16:31:55Z | 2022-01-10T08:26:40Z | null | wjakob |
pytorch/pytorch | 71,058 | `torch.Tensor.where` cannot work when `y` is float | ### π Describe the bug
Based on the [documentation](https://pytorch.org/docs/stable/generated/torch.Tensor.where.html?highlight=where#torch.Tensor.where) of `torch.Tensor.where`, `self.where(condition, y)` is equivalent to `torch.where(condition, self, y)`. However, `torch.where` will succeed when `y` is a float but ... | https://github.com/pytorch/pytorch/issues/71058 | open | [
"triaged",
"module: type promotion"
] | 2022-01-08T15:18:11Z | 2022-01-11T15:36:54Z | null | TestSomething22 |
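For comparison (checked against a recent PyTorch release, not necessarily the version in the report), the functional form does accept a Python float for the `other` argument:

```python
import torch

x = torch.tensor([1.0, -2.0, 3.0])
# functional form with a scalar `other`
out = torch.where(x > 0, x, 0.0)
print(out.tolist())  # [1.0, 0.0, 3.0]
```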
pytorch/pytorch | 70,923 | type promotion is broken in `torch.where` | The [array API specification stipulates](https://data-apis.org/array-api/latest/API_specification/searching_functions.html?highlight=where#id7) that the return value of `torch.where` should undergo regular type promotion. Currently we do not support different dtypes for `x` and `y`:
```python
import torch
condit... | https://github.com/pytorch/pytorch/issues/70923 | closed | [
"triaged",
"module: type promotion",
"module: python array api"
] | 2022-01-06T14:39:05Z | 2022-01-07T07:50:40Z | null | pmeier |
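A sketch of the promotion the array API asks for (checked against a recent PyTorch release, where mixed dtypes are now accepted):

```python
import torch

condition = torch.tensor([True, False, True])
x = torch.tensor([1, 2, 3])            # int64
y = torch.tensor([0.5, 0.5, 0.5])      # float32
out = torch.where(condition, x, y)     # promotes like torch.result_type(x, y)
print(out.dtype)  # torch.float32
```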
pytorch/serve | 1,389 | how to determine number of workers and batch size to obtain best performance? | I have one model and 3 gpus. I register my model with the command:
curl -X POST "localhost:8444/models?url=yoyo_ai.mar&**batch_size=8**&max_batch_delay=8000&**initial_workers=8**"
In this setup, gpu:0 is assigned 2 workers and the others are assigned 3 workers each (2 + 3 + 3).
I make requests with the following code where... | https://github.com/pytorch/serve/issues/1389 | closed | [
"help wanted"
] | 2022-01-06T08:14:27Z | 2022-02-03T22:27:03Z | null | orkunozturk |
pytorch/tutorials | 1,781 | tutorials/advanced_source/super_resolution_with_onnxruntime.py is maybe outdated? | I am currently working through the [tutorial](https://github.com/pytorch/tutorials/blob/master/advanced_source/super_resolution_with_onnxruntime.py) and realized that the introductory notes are not up to date.
- line 19 says ONNX is available/compatible from Python 3.5 to 3.7:
- I tested installation in a venv with ... | https://github.com/pytorch/tutorials/issues/1781 | closed | [
"content",
"docathon-h1-2023",
"easy"
] | 2022-01-05T15:29:57Z | 2023-06-02T22:24:09Z | 2 | MaKaNu |
pytorch/serve | 1,385 | How to decode response after post process? | Hello. I'm using a custom BERT model in my custom handler, with Korean text.
When I send input text, the handler encodes and processes it like this.
``` {'body': bytearray(b'[\n\t\t\t["\xec\x9a\x94\xec\xa6\x98 \xeb\xb6\x80\xeb\xaa\xa8\xeb\x8b\x98\xea\xb3\xbc \xeb\xa7\x8e\xec\x9d\xb4 \xeb\xb6\x80\xeb\x94\xaa\xed\x98\x80.",... | https://github.com/pytorch/serve/issues/1385 | closed | [
"help wanted"
] | 2022-01-04T01:33:02Z | 2022-01-07T17:32:22Z | null | MinsuKim3095 |
pytorch/text | 1,476 | How to get all tokens in a Vocab using text | ## π Feature
<!-- A clear and concise description of the feature proposal -->
**Motivation**
Hi,
When I load a vocab or have built a vocab using torchtext.vocab, I can not print its all token in the Vocab
| https://github.com/pytorch/text/issues/1476 | closed | [] | 2022-01-01T06:53:51Z | 2022-01-01T14:07:08Z | null | yipliu |
huggingface/datasets-tagging | 28 | Why datasets version is pinned in requirements.txt? | In file `requirements.txt`, the version of `datasets` is pinned. Why? | https://github.com/huggingface/datasets-tagging/issues/28 | open | [
"question"
] | 2021-12-29T09:39:40Z | 2021-12-29T11:51:59Z | null | albertvillanova |
pytorch/xla | 3,271 | How to specify compute capability when building from source to support GPU? | Hello, when I finish building from source to support GPU and run the test script test_train_mp_imagenet.py, a warning is shown:
TensorFlow was not built with CUDA kernel binaries compatible with compute capability 7.5. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
I am wonder... | https://github.com/pytorch/xla/issues/3271 | closed | [
"xla:gpu"
] | 2021-12-28T06:05:07Z | 2022-02-19T00:36:41Z | null | yxd886 |
pytorch/pytorch | 70,413 | PyTorch crashes without an error message, when running this code snippet with torch.tensor subclassing & forward hooks (Not sure what the exact cause is, but the code snippet reliably causes it) | ### π Describe the bug
While working on a project for PyTorch's [Captum](https://github.com/pytorch/captum) library, I came across a bug whose cause I've been struggling to narrow down. I've done my best to simplify what is happening in the Captum code, and the snippet of code below should reliably reproduce the... | https://github.com/pytorch/pytorch/issues/70413 | open | [
"triaged",
"Stale",
"tensor subclass"
] | 2021-12-26T18:33:55Z | 2022-02-26T21:02:46Z | null | ProGamerGov |
pytorch/pytorch | 70,411 | How to use custom dataset with SSD | I am trying to use SSD and RetinaNet from torchvision on my own dataset. However, I can't find any reference on how to use my own dataset or on what format is required. Could anyone please advise me?
| https://github.com/pytorch/pytorch/issues/70411 | closed | [] | 2021-12-26T12:37:21Z | 2021-12-28T16:19:14Z | null | myasser63 |
pytorch/tutorials | 1,778 | [Help Wanted] Why take the log function and then apply exp? | In [line of code](https://github.com/pytorch/tutorials/blob/master/beginner_source/transformer_tutorial.py#L113), you calculate the positional encoding for Transformers by taking the log first and then applying the exponential function.
Would you please elaborate on why you do this instead of directly doing the calculation... | https://github.com/pytorch/tutorials/issues/1778 | closed | [
"question",
"intro",
"docathon-h1-2023",
"easy"
] | 2021-12-24T17:09:56Z | 2024-05-24T18:34:43Z | null | Superhzf |
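The identity behind the question above can be checked directly (a hedged sketch; the `d_model` value is arbitrary): `exp(i * (-ln(10000) / d_model))` equals `1 / 10000**(i / d_model)`, and computing it in log space is commonly attributed to numerical stability, since it avoids raising 10000 to a large power directly.

```python
import math
import torch

d_model = 8
i = torch.arange(0, d_model, 2, dtype=torch.float32)
via_exp = torch.exp(i * (-math.log(10000.0) / d_model))  # tutorial's form
direct = 1.0 / (10000.0 ** (i / d_model))                # direct form
print(torch.allclose(via_exp, direct))  # True
```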
pytorch/TensorRT | 788 | β [Question] How do you ....? | ## β Question
Hi, could you please explain how this is better than the PyTorch-to-ONNX-to-TensorRT export path?
| https://github.com/pytorch/TensorRT/issues/788 | closed | [
"question"
] | 2021-12-22T17:36:54Z | 2022-01-04T23:56:04Z | null | andrei-pokrovsky |
pytorch/TensorRT | 786 | β [Question] How do you ....? | ## β Question
How do you use [OpenAI's CLIP](https://github.com/openai/CLIP)?
## What you have already tried
```
import clip
from torchvision import transforms
import torch_tensorrt
import torch
device = "cuda:0"
batch_size = 4
clip_model_name = "ViT-B/32"
scripted_model , preprocess = c... | https://github.com/pytorch/TensorRT/issues/786 | closed | [
"question"
] | 2021-12-22T09:10:31Z | 2022-01-25T10:01:54Z | null | hfawaz |
pytorch/pytorch | 70,280 | How to create build-in buffers which is writable during onnx inference? | ### π The feature, motivation and pitch
First, I'm sorry that this question may not be strictly related to a feature request, but it has been posted on discuss.pytorch.org without any replies for one week.
Hi, I am trying to create a first-in-first-out queue as a PyTorch model, export it to ONNX, and infer with onnxrunt... | https://github.com/pytorch/pytorch/issues/70280 | closed | [
"module: onnx",
"triaged"
] | 2021-12-22T02:26:28Z | 2022-01-05T01:46:44Z | null | lawlict |
pytorch/TensorRT | 783 | β [Question] Is there a way to visualize the TRT model? | ## β Question
<!-- Your question -->
I'm wondering if there is a way to get the TRT model after compilation and visualize it. I am trying to compare a PTQ model to a QAT model. I know I might have to do some further optimization; I am just trying to visualize the graphs and see what is going on. Currently using DenseNet169... | https://github.com/pytorch/TensorRT/issues/783 | closed | [
"question"
] | 2021-12-21T16:39:50Z | 2022-05-18T20:34:06Z | null | jessicarcassidy |
pytorch/pytorch | 70,244 | [feature request] how to merge many models into one model with a shared backbone using only some code, not by creating a new model | I train several models on different data, and some of these models' parameters are shared.
When I run inference, I need to merge the models into one model. I know which ops are shared, so I want to merge these models'
shared ops into a single op with separate heads, only for inference, not for training.
I don't want to write a new mod... | https://github.com/pytorch/pytorch/issues/70244 | closed | [] | 2021-12-21T13:11:35Z | 2021-12-23T16:55:09Z | null | designerZhou |
pytorch/android-demo-app | 222 | how 640*640 to 320*320 | I changed the model's 640*640 input to 320*320. I changed the relevant parameters and the program crashed. How do I change it to 320*320 input? | https://github.com/pytorch/android-demo-app/issues/222 | closed | [] | 2021-12-21T02:41:41Z | 2021-12-21T05:58:05Z | null | mozeqiu |
pytorch/TensorRT | 779 | β [Question] Failed to compile trtorch use pre cxx11 abi | ## β Question
I'm trying to build trtorch v0.2.0 with pre cxx11 abi
But I always get an error like the one below:
INFO: Analyzed target //:libtrtorch (40 packages loaded, 2667 targets configured).
INFO: Found 1 target...
ERROR: /root/git_source/Torch-TensorRT-0.2.0/cpp/trtorchc/BUILD:10:10: Linking cpp/trtorchc/trtorc... | https://github.com/pytorch/TensorRT/issues/779 | closed | [
"question"
] | 2021-12-21T02:13:39Z | 2021-12-21T03:03:08Z | null | Fans0014 |
pytorch/tensorpipe | 420 | [Question]How to detect pipe(obtained from ctx->connect()) is writable? | Hi,
when I get a pipe via `ctx->connect(address)`, how do I know the pipe is ready for writing or reading? A return from `ctx->connect()` does not mean the connection has been established, right? If I call `pipe->write()` immediately, such a write could fail because the underlying connection has not been established yet. | https://github.com/pytorch/tensorpipe/issues/420 | open | [] | 2021-12-19T02:14:39Z | 2022-02-16T01:51:04Z | null | Rhett-Ying |
pytorch/data | 144 | Multiprocessing with any DataPipe writing to local file | ### π Describe the bug
We need to take extra care with any DataPipe that writes to the file system when DataLoader2 triggers multiprocessing. If the file name on the local file system is the same across multiple processes, there is a race condition.
This was found when the TorchText team used `on_disk_cache` to cache... | https://github.com/meta-pytorch/data/issues/144 | closed | [
"bug",
"good first issue",
"help wanted",
"high priority"
] | 2021-12-18T03:40:43Z | 2022-05-19T03:59:34Z | 13 | ejguan |
pytorch/pytorch | 70,099 | Question: what is "Parameter indices"? | I get the error below. I know there are some variables that do not contribute to the loss. How can I find these parameters' names? I don't know whether "Parameter indices" helps me or not.
> Parameter indices which did not receive grad for rank 7: 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 137... | https://github.com/pytorch/pytorch/issues/70099 | open | [
"oncall: distributed",
"Stale"
] | 2021-12-17T09:34:29Z | 2022-02-15T15:02:44Z | null | shoutOutYangJie |
pytorch/TensorRT | 776 | could not support gelu! | I used the Docker image (nvcr.io/nvidia/pytorch:21.11-py3) you suggested to test torch-tensorrt, but I cannot convert the PyTorch model to a TorchScript model. It seems GELU is not supported. However, with the Docker image (pytorch-20.12-py3), converting the PyTorch model to a TorchScript model works well.
File "/opt/conda/lib/pytho... | https://github.com/pytorch/TensorRT/issues/776 | closed | [
"question",
"No Activity"
] | 2021-12-17T08:38:37Z | 2022-04-01T00:02:17Z | null | daeing |