| repo (string) | number (int64) | title (string) | body (string) | url (string) | state (string) | labels (list) | created_at (timestamp, UTC) | updated_at (timestamp, UTC) | comments (int64) | user (string) |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 100,800 | [cpu inductor] where is silently incorrect when SIMD code is generated. | ### 🐛 Describe the bug
```python
import torch
input_tensor = torch.ones(3, 3)
def f(x):
    return torch.where(torch.ones_like(x).to(torch.bool), torch.zeros_like(x), torch.ones_like(x) * 2)
res1 = f(input_tensor)
print(res1)
jit_func = torch.compile(f)
res2 = jit_func(input_tensor)
print(res2)
```
Output
```
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
tensor([[2., 2., 2.],
        [2., 2., 2.],
        [2., 2., 0.]])
```
Reason:
The implementation of `where` relies on `blendv`, where the MSB of the mask element must be 0 for the first operand's element of the packed vector to be copied.
https://github.com/pytorch/pytorch/blob/8d56b0a5b57cf3e82402556ceb5c7080c0f9d5b6/torch/_inductor/codegen/cpp.py#L572-L573
blendv: https://www.intel.com/content/www/us/en/docs/cpp-compiler/developer-guide-reference/2021-8/mm256-blendv-ps.html
Found in https://github.com/pytorch/pytorch/pull/100799#issuecomment-1537136218
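To make the failure mode concrete, here is a scalar sketch of the `blendv` selection rule (illustrative only, not the actual inductor codegen; the helper name is made up):

```python
def blendv_lane(a, b, mask_bits):
    """Model one 32-bit lane of _mm256_blendv_ps: only the most-significant
    bit of the mask lane decides which source element is copied."""
    return b if (mask_bits >> 31) & 1 else a

# torch.where(cond, zeros, twos) lowered through blendv needs an all-ones
# mask (MSB set) to pick the "true" branch; a bool materialized as
# 0x00000001 has MSB 0 and silently selects the "false" branch instead.
print(blendv_lane(2.0, 0.0, 0xFFFFFFFF))  # 0.0  (correct: picks zeros)
print(blendv_lane(2.0, 0.0, 0x00000001))  # 2.0  (the reported wrong output)
```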
### Versions
master
cc @soumith @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire | https://github.com/pytorch/pytorch/issues/100800 | closed | [
"triaged",
"module: inductor"
] | 2023-05-06T13:03:01Z | 2023-05-10T02:16:14Z | null | kshitij12345 |
huggingface/transformers.js | 104 | [Question] npm install error on Windows | I installed transformers.js with npm but I got an error:
```
2135 info run canvas@2.11.2 install node_modules/canvas node-pre-gyp install --fallback-to-build --update-binary
2136 info run sharp@0.32.1 install node_modules/sharp (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-compile && node-gyp rebuild && node install/dll-copy)
2137 info run sharp@0.32.1 install { code: 1, signal: null }
2138 warn cleanup Failed to remove some directories [
2138 warn cleanup [
2138 warn cleanup 'D:\\project\\BLOGKLIN\\node_modules',
2138 warn cleanup [Error: EBUSY: resource busy or locked, rmdir 'D:\project\BLOGKLIN\node_modules\canvas'] {
2138 warn cleanup errno: -4082,
2138 warn cleanup code: 'EBUSY',
2138 warn cleanup syscall: 'rmdir',
2138 warn cleanup path: 'D:\\project\\BLOGKLIN\\node_modules\\canvas'
2138 warn cleanup }
2138 warn cleanup ]
2138 warn cleanup ]
2139 timing reify:rollback:createSparse Completed in 4980ms
2140 timing reify:rollback:retireShallow Completed in 0ms
2141 timing command:i Completed in 46786ms
2142 verbose stack Error: command failed
2142 verbose stack at ChildProcess.<anonymous> (C:\Users\admin\AppData\Roaming\npm\node_modules\npm\node_modules\@npmcli\promise-spawn\lib\index.js:63:27)
2142 verbose stack at ChildProcess.emit (node:events:390:28)
2142 verbose stack at maybeClose (node:internal/child_process:1064:16)
2142 verbose stack at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)
2143 verbose pkgid sharp@0.32.1
2144 verbose cwd D:\project\BLOGKLIN
2145 verbose Windows_NT 10.0.19044
2146 verbose node v16.13.0
2147 verbose npm v8.7.0
2148 error code 1
2149 error path D:\project\BLOGKLIN\node_modules\sharp
2150 error command failed
2151 error command C:\Windows\system32\cmd.exe /d /s /c (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-compile && node-gyp rebuild && node install/dll-copy)
2152 error sharp: Downloading https://github.com/lovell/sharp-libvips/releases/download/v8.14.2/libvips-8.14.2-win32-x64.tar.br
2152 error sharp: Please see https://sharp.pixelplumbing.com/install for required dependencies
2153 error sharp: Installation error: read ECONNRESET
2154 verbose exit 1
2155 timing npm Completed in 46886ms
2156 verbose unfinished npm timer reify 1683364060656
2157 verbose unfinished npm timer reify:build 1683364075028
2158 verbose unfinished npm timer build 1683364075029
2159 verbose unfinished npm timer build:deps 1683364075029
2160 verbose unfinished npm timer build:run:install 1683364075174
2161 verbose unfinished npm timer build:run:install:node_modules/canvas 1683364075175
2162 verbose unfinished npm timer build:run:install:node_modules/sharp 1683364075190
2163 verbose code 1
2164 error A complete log of this run can be found in:
2164 error C:\Users\admin\AppData\Local\npm-cache\_logs\2023-05-06T09_07_40_559Z-debug-0.log
```
os: windows 10
node: v16.13.0 | https://github.com/huggingface/transformers.js/issues/104 | closed | [
"question"
] | 2023-05-06T09:13:41Z | 2023-05-06T12:48:23Z | null | DominguitoLamo |
pytorch/TensorRT | 1,889 | Multi-GPU: optimize for cuda:1 but model also gets pushed on cuda:0, why??? | ## ❓ Question
I have two GPUs in my system. When I optimize my model for the cuda:1 device, the model somehow ALSO gets loaded onto the cuda:0 device (probably because that's the default device?). This happens during the optimization process, which is invoked with:
`optModel = torch_tensorrt::torchscript::compile(model, compile_settings);`
With `nvidia-smi` I can clearly see that the optimization is performed on cuda:1 (as expected), as I explicitly tell it to do so. But shortly before the optimization finishes, the model is also loaded onto cuda:0.
How can I stop the loading on cuda:0?
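One common workaround (a hedged sketch; whether it fits depends on the deployment setup) is to hide every GPU except the intended one from the process via `CUDA_VISIBLE_DEVICES`, so the runtime cannot create a context on cuda:0 at all. For the C++ app this would be set in the launch environment, e.g. `CUDA_VISIBLE_DEVICES=1 ./app`; the remaining GPU is then renumbered and appears as device 0 inside the process:

```python
import os

# Must happen before any CUDA context is created (i.e. before the first
# CUDA call, or in the environment that launches the process).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 1
```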
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- Libtorch Version: 1.10.2+cu113
- CPU Architecture:
- OS (e.g., Linux): Linux
- CUDA version: 11.3
- GPU models and configuration: both GPUs are Nvidia RTX A4000
- TensorRT: 8.4.0.6
- Torch-TensorRT: torch-tensorrt for libtorch-1.10.2 | https://github.com/pytorch/TensorRT/issues/1889 | closed | [
"question"
] | 2023-05-05T11:43:50Z | 2023-07-06T15:04:44Z | null | bjaeger1 |
huggingface/datasets | 5,818 | Ability to update a dataset | ### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.save_to_disk("data/test1")
```
With the error:
```
PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself.
```
### Motivation
My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again.
Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW I can't use `dataset.map()` to do the task because that doesn't work with `num_proc` when adding rows, so it is confined to a single process, which is too slow.
The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing.
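One pragmatic pattern is to write each processed shard to its own location and concatenate once at the end. A hedged, library-independent sketch using plain JSON Lines (with the `datasets` library, the analogous calls would be `save_to_disk` per shard followed by `datasets.concatenate_datasets` over `load_from_disk` results; `process_shard` is a made-up stand-in):

```python
import json
import os
import tempfile

def process_shard(i):
    # Stand-in for the real per-shard processing that doesn't fit in memory.
    return [{"text": f"row {i}-{j}"} for j in range(3)]

workdir = tempfile.mkdtemp()
shard_paths = []
for i in range(4):
    path = os.path.join(workdir, f"shard-{i}.jsonl")
    with open(path, "w") as f:
        for row in process_shard(i):
            f.write(json.dumps(row) + "\n")
    shard_paths.append(path)

# Combine by streaming the shard files; no step holds the full dataset at once.
combined_path = os.path.join(workdir, "combined.jsonl")
with open(combined_path, "w") as out:
    for path in shard_paths:
        with open(path) as f:
            for line in f:
                out.write(line)

with open(combined_path) as f:
    print(sum(1 for _ in f))  # 12
```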
### Your contribution
na | https://github.com/huggingface/datasets/issues/5818 | open | [
"enhancement"
] | 2023-05-04T01:08:13Z | 2023-05-04T20:43:39Z | 3 | davidgilbertson |
pytorch/data | 1,149 | [RFC] Performance Profiling Tools | ### 🚀 The feature
1. Store usage statistics in `Prefetcher`
- By tracking statistics within `Prefetcher`, we can reasonably determine whether upstream processes or downstream processes are faster. For example, the emptiness of the buffer queue may imply consumers are faster than producers. Users can insert this into various points in the pipeline to examine various behaviors. A common pattern we expect is to examine whether the pipeline is IO bound or compute bound.
- [ ] #1141
2. `DataLoader2` main process
- `torch` profilers (e.g. `torch.profiler.profile`) currently work with `DataLoader2`; however, they only track functions and DataPipes that are executed within the main process. Nonetheless, we should validate that the information is helpful if most of the computation takes place within the main process (e.g. when using `InProcessReadingService` or a dispatching process).
- After 1 is completed, we can add APIs to `DataLoader2` to fetch the relevant statistics from `Prefetcher`'s buffer, such as the one that exists at the end of the main loop. It should allow users to examine whether the model is consuming faster than the preparation of samples.
- [ ] PR pending
- [ ] Tutorial pending
3. `DataLoader2` worker process profiling
- Two main options under considerations are:
1. Attaching the profiler to worker process in order to get worker level metrics/trace. This will allow us to use existing profilers without re-implementing their features.
2. `MultiprocessingReadingService` can provide methods to retrieve and aggregate metrics from certain DataPipes (mainly `Prefetcher`)
4. Integration with other tools (e.g. tracers)
- We will likely want main and worker processes to be visible within tracers (e.g. useful when integrated with TorchTNT).
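The buffer-occupancy idea from point 1 can be sketched framework-free (class and field names below are made up, not the actual `Prefetcher` API): sampling the queue size on every consumer read gives a cheap signal of which side of the pipeline is the bottleneck.

```python
import queue
import threading

class StatsPrefetcher:
    """Toy prefetcher recording buffer occupancy at each consumer get():
    a mostly-empty buffer hints the consumer outpaces the producer
    (pipeline is IO/producer bound), a mostly-full one the reverse."""
    def __init__(self, source, buffer_size=4):
        self.buffer = queue.Queue(maxsize=buffer_size)
        self.occupancy = []
        self._worker = threading.Thread(target=self._produce, args=(source,), daemon=True)
        self._worker.start()

    def _produce(self, source):
        for item in source:
            self.buffer.put(item)
        self.buffer.put(None)  # sentinel: producer is done

    def __iter__(self):
        while True:
            self.occupancy.append(self.buffer.qsize())  # snapshot before blocking
            item = self.buffer.get()
            if item is None:
                return
            yield item

pf = StatsPrefetcher(range(10))
items = list(pf)
print(len(items), len(pf.occupancy))  # 10 11
```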
### Motivation, pitch
This set of tools and features aim to answer the questions:
1. Is my model training bottlenecked by data loading?
2. If so, which part of the pipeline? IO? Compute?
### Alternatives
_No response_
### Additional context
Comments and suggestions are welcomed. | https://github.com/meta-pytorch/data/issues/1149 | open | [
"topic: new feature"
] | 2023-05-03T22:01:19Z | 2023-05-30T11:27:53Z | 3 | NivekT |
pytorch/TensorRT | 1,882 | ❓ [Question] Request for a model which is supported by Torch-TRT(FX) | ## ❓ Question
I'm trying to evaluate the Torch-TensorRT tool, using the FX backend for running models via the C++ library.
My goal is to convert models which are not fully supported by TRT, and accelerate them by running some of the sub-graphs on TRT (as explained in this notebook: https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py)
The steps I have already completed-
- I converted a small model which is fully supported by TRT, and I received a single sub-graph as expected.
The model runs successfully using the python lib, and the C++ lib also.
- I have already tried to convert the Resnet50 model, and did it successfully. But this is a fully TRT supported model.
- I converted a small model with a TRT-unsupported operator, so the model was divided into 3 sub-graphs.
The model runs successfully using the python Torch-TensorRT lib, and the C++ lib also.
The problem is that the Torch-TensorRT model's latency is twice as high as the original Torch model's latency.
I thought that maybe there is overhead because of the back-and-forth passes between the Torch and TRT sub-graphs. So I want to take a larger model, one which is not fully supported by TensorRT but is still supported by Torch-TensorRT (by dividing the graph into TRT and Torch sub-graphs, using the FX backend).
I tried some models, but I can't find such a model.
Is there a model that you tested the tool with?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0
- CPU Architecture: x86-64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): -
- Are you using local sources or building from archives: Torch-TensorRT which has been built from sources
- Python version: 3.8.10
- CUDA version: 11.8
- GPU models and configuration: NVIDIA T1000
- Any other relevant information: -
@OronG13 | https://github.com/pytorch/TensorRT/issues/1882 | closed | [
"question",
"No Activity"
] | 2023-05-03T13:53:40Z | 2023-11-17T00:02:12Z | null | DanielLevi6 |
huggingface/datasets | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file.
Is there some mechanism from Hugging Face to represent a dataset (such as that from `load_dataset('wmt14', 'de-en', split='train')`) as a single file? Or some other way to get it into a Kaggle dataset, so that I can use the Hugging Face `datasets` module to process and consume it inside of a Kaggle notebook?
Thanks in advance!
| https://github.com/huggingface/datasets/issues/5815 | open | [] | 2023-05-02T21:43:33Z | 2023-07-26T16:13:31Z | 4 | hrbigelow |
huggingface/optimum | 1,024 | How to decrease inference time of LayoutXLM and LiLT models through Optimum? | ### System Info
```shell
Last version of transformers and Optimum libraries.
```
### Who can help?
@JingyaHuang , @echarlaix, @mi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Example with LiLT model:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
model_id = "pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-paragraphlevel-ml512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, device_map="auto")
from optimum.bettertransformer import BetterTransformer
model = BetterTransformer.transform(model, keep_original_model=False)
```
Error message
```
NotImplementedError: The model type lilt is not yet supported to be used with BetterTransformer. Feel free to open
an issue at https://github.com/huggingface/optimum/issues if you would like this model type to be supported.
Currently supported models are: dict_keys(['albert', 'bart', 'bert', 'bert-generation', 'blenderbot', 'camembert',
'clip', 'codegen', 'data2vec-text', 'deit', 'distilbert', 'electra', 'ernie', 'fsmt', 'gpt2', 'gptj', 'gpt_neo',
'gpt_neox', 'hubert', 'layoutlm', 'm2m_100', 'marian', 'markuplm', 'mbart', 'opt', 'pegasus', 'rembert',
'prophetnet', 'roberta', 'roc_bert', 'roformer', 'splinter', 'tapas', 't5', 'vilt', 'vit', 'vit_mae', 'vit_msn',
'wav2vec2', 'whisper', 'xlm-roberta', 'yolos']).
```
### Expected behavior
Hi,
I'm using Hugging Face libraries in order to run LayoutXLM and LiLT models.
How can I decrease inference time through Optimum? Which code to use?
I've already tested BetterTransformer (Optimum) and ONNX but none of them accepts LayoutXLM and LiLT models.
- BetterTransformer:
- "NotImplementedError: The model type layoutlmv2 is not yet supported to be used with BetterTransformer."
- "NotImplementedError: The model type lilt is not yet supported to be used with BetterTransformer."
- ONNX:
- "KeyError: 'layoutlmv2 is not supported yet.'"
- "KeyError: 'lilt is not supported yet.'"
Can you update the Optimum library so that `BetterTransformer() `and/or `ONNX `works on LayoutXLM and LiLT models?
Thank you. | https://github.com/huggingface/optimum/issues/1024 | open | [
"bug"
] | 2023-05-02T09:42:15Z | 2023-06-12T11:40:23Z | 4 | piegu |
pytorch/kineto | 756 | urgent!!! profiler: Profiler is not initialized: skipping step() invocation | I get the following warning when using the torch profiler, and the profiling steps are merged into one:
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
[W kineto_shim.cpp:330] Profiler is not initialized: skipping step() invocation
Versions
Collecting environment information...
PyTorch version: 2.0.0a0+1767026
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.103.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 799.871
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] numpy==1.22.2
[pip3] pytorch-lightning==1.9.2
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.0a0+1767026
[pip3] torch-accl==0.3.0
[pip3] torch-tb-profiler==0.4.1
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchmetrics==0.6.0
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchvision==0.15.0a0
[pip3] triton==2.0.0
[conda] Could not collect
import argparse
import nvtx
from typing import Tuple
from tqdm import tqdm
import torch
from torch import nn, optim
from torch.distributed import Backend
from torch.nn.parallel.distributed import DistributedDataParallel
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, transforms
def create_data_loaders(rank: int,
| https://github.com/pytorch/kineto/issues/756 | closed | [
"question"
] | 2023-05-01T23:35:54Z | 2024-04-23T15:28:55Z | null | Johnsonms |
pytorch/TensorRT | 1,872 | ❓ [Question] How do you ....? | ## ❓ Question
<!-- Your question -->
How do I compile Torch-TensorRT for the NVIDIA Jetson TX2 (JetPack 4.6)?
## What you have already tried
Hi, @kneatco
I have the same issue when I downgraded the numpy version from 1.19.5 to 1.19.4.
I did following steps.
1. Downloading docker image for TX2 (jetpack=4.6)
```
# Pull the image
docker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3
# Run the image
sudo docker run -it --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3
```
2. Installing Bazel in the container
```
# Download torch-tensorrt repo
git clone -b v1.1.0 https://github.com/pytorch/TensorRT.git
# Install bazel
export BAZEL_VERSION=$(cat <PATH_TO_TORCHTRT_ROOT>/.bazelversion)
mkdir bazel
cd bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
unzip bazel-$BAZEL_VERSION-dist.zip
bash ./compile.sh
cp output/bazel /usr/local/bin/
```
3. Modifying the WORKSPACE as follows:
```
workspace(name = "Torch-TensorRT")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
    name = "rules_python",
    sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
    url = "https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz",
)
load("@rules_python//python:pip.bzl", "pip_install")
http_archive(
    name = "rules_pkg",
    sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
        "https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
    ],
)
load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
git_repository(
    name = "googletest",
    commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
    remote = "https://github.com/google/googletest",
    shallow_since = "1570114335 -0400",
)
# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
    name = "torch_tensorrt",
    path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt"
)
# CUDA should be installed on the system locally
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-10.2/",
)
new_local_repository(
    name = "cublas",
    build_file = "@//third_party/cublas:BUILD",
    path = "/usr",
)
#############################################################################################################
# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
#############################################################################################################
#http_archive(
# name = "libtorch",
# build_file = "@//third_party/libtorch:BUILD",
# sha256 = "8d9e829ce9478db4f35bdb7943308cf02e8a2f58cf9bb10f742462c1d57bf287",
# strip_prefix = "libtorch",
# urls = ["https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.11.0%2Bcu113.zip"],
#)
#http_archive(
# name = "libtorch_pre_cxx11_abi",
# build_file = "@//third_party/libtorch:BUILD",
# sha256 = "90159ecce3ff451f3ef3f657493b6c7c96759c3b74bbd70c1695f2ea2f81e1ad",
# strip_prefix = "libtorch",
# urls = ["https://download.pytorch.org/libtorch/cu113/libtorch-shared-with-deps-1.11.0%2Bcu113.zip"],
#)
# Download these tarballs manually from the NVIDIA website
# Either place them in the distdir directory in third_party and use the --distdir flag
# or modify the urls to "file:///<PATH TO TARBALL>/<TARBALL NAME>.tar.gz
#http_archive(
# name = "cudnn",
# build_file = "@//third_party/cudnn/archive:BUILD",
# sha256 = "0e5d2df890b9967efa6619da421310d97323565a79f05a1a8cb9b7165baad0d7",
# strip_prefix = "cuda",
# urls = [
# "https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.4/11.4_20210831/cudnn-11.4-linux-x64-v8.2.4.15.tgz",
# ],
#)
#http_archive(
# name = "tensorrt",
# build_file = "@//third_party/tensorrt/archive:BUILD",
# sha256 = "826180eaaecdf9a7e76116855b9f1f3400ea9b06e66b06a3f6a0747ba6f863ad",
# strip_prefix = "TensorRT-8.2.4.2",
# urls = [
# "https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.4/tars/tensorrt-8.2.4.2.linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz",
# ],
#)
####################################################################################
# Locally installed dependencies (use in cases of custom dependencies or aarch64)
####################################################################################
# NOTE: In the case you are using just the pre-cxx11-abi path or just the cxx11 abi path
# with | https://github.com/pytorch/TensorRT/issues/1872 | closed | [
"question"
] | 2023-05-01T13:53:19Z | 2023-05-19T18:30:16Z | null | godhj93 |
pytorch/TensorRT | 1,871 | ❓ [Question] torch.fx.proxy.TraceError: Proxy object cannot be iterated | ## ❓ Question
I'm trying to convert an nn.Module of ASLfeat (PyTorch) to a runtime Torch-TensorRT model (for C++).
The steps I followed are the same as written in- https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py
But for some reason, the tracing step fails every time.
The error message is-
`torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors`
The line that causes it is `n_samples, n_channel, *_ = dense_feat_map.shape`.
The same is happening when I'm trying to use the FasterRCNN model from torchvision.
The same error occurs because of a loop running on the input list in the model.
Is there a workaround for these cases?
ASLfeat and FasterRCNN are complicated models which I can't convert directly to TensorRT, so Torch-TensorRT could be a very useful option for me.
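A minimal illustration of why that line breaks under symbolic tracing, and the usual indexing workaround (the `FakeProxy` class below is a made-up stand-in, not fx's real `Proxy`): star-unpacking iterates the proxy, which fx forbids, while plain indexing into `.shape` stays traceable.

```python
class FakeProxy:
    """Minimal stand-in for fx's Proxy: __getitem__ is allowed, but
    __iter__ raises, which is exactly what star-unpacking triggers."""
    def __init__(self, shape):
        self._shape = shape
    def __getitem__(self, i):
        return self._shape[i]
    def __iter__(self):
        raise TypeError("Proxy object cannot be iterated")

shape = FakeProxy((8, 3, 224, 224))
try:
    n, c, *_ = shape          # what the original line does
except TypeError as e:
    print(e)                  # Proxy object cannot be iterated
n, c = shape[0], shape[1]     # indexing avoids iteration entirely
print(n, c)                   # 8 3
```

Rewriting such lines in the model source (or in a small wrapper module) is often enough to get past this class of trace errors; data-dependent loops like `for img in images:` typically need the inputs batched or the loop hoisted out of the traced region instead.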
## To Reproduce
Steps to reproduce the behavior(for FasterRCNN):
1. `from torchvision.models.detection.faster_rcnn import fasterrcnn_resnet50_fpn,FasterRCNN_ResNet50_FPN_Weights`
2. `model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).cuda().eval()`
3. `inputs = [torch.rand((1, 300, 400), device="cuda"), torch.rand((1, 500, 400), device="cuda")]`
4. `traced = acc_tracer.trace(model, inputs)`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
The result-
`Traceback (most recent call last):`
` File "/home/Projects/Torch-TensorRT/python/trainingPath/step_7_fasterRCNNInference-X.py", line 44, in <module>`
` traced = acc_tracer.trace(model, inputs)`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py", line 667, in trace`
` traced = rewriter_base_trace(mod, ast_rewriter_allow_list, leaf_module_list)`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py", line 585, in rewriter_base_trace`
` rewritten_graph, rewritten_mod = AccRewritingTracer().trace(`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py", line 309, in trace`
` return super().trace(rewritten, concrete_args), rewritten`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace`
` (self.create_arg(fn(*args)),),`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py", line 75, in forward`
` for img in images:`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch/fx/proxy.py", line 385, in __iter__`
` return self.tracer.iter(self)`
` File "/home/Projects/Torch-TensorRT/python/venv/lib/python3.8/site-packages/torch/fx/proxy.py", line 285, in iter`
` raise TraceError('Proxy object cannot be iterated. This can be '`
`torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors`
## What you have already tried
I also tried to use torch.jit.trace, which traces the ASLfeat model successfully, but I need to trace the model using fx (because I need to convert it to a runtime model).
## Expected behavior
Receive a traced model, prepared for Torch-TensorRT usage.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0
- CPU Architecture: x86-64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (conda, pip, libtorch, source): pip
- Build command you used (if compiling from source): -
- Are you using local sources or building from archives: Torch-TensorRT which has been built from sources
- Python version: 3.8.10
- CUDA version: 11.8
- GPU models and configuration: NVIDIA T1000
- Any other relevant information: -
@OronG13 | https://github.com/pytorch/TensorRT/issues/1871 | closed | [
"question",
"No Activity",
"component: fx"
] | 2023-05-01T11:24:21Z | 2023-08-21T00:02:11Z | null | DanielLevi6 |
huggingface/datasets | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr with my own datasets, but I don't know how to compute the embedding values the same way wiki_dpr does.
As an experiment, I embedded the text of id="7" from wiki_dpr, but the result was very different from the wiki_dpr embedding.
pytorch/hub | 328 | Need help on how to contribute | Hello everyone.
I wanted to add the SimpleNet architecture from 2016 to the PyTorch Hub; it outperforms VGGNets, ResNet18, ResNet34 and the like while being a plain CNN with 5M to 9M parameters.
I read the docs but I'm a bit confused. Could you kindly help me get this sorted out?
Here are my issues:
1. Where exactly should I put the hubconf.py in my
repository? My repository ([Link](https://github.com/Coderx7/SimpleNet_Pytorch/tree/master)) is organized like this:
- cifar10
- imagenet
  - simplenet.py
  - readme.md
Should I add the hubconf.py at the root of the repository, or next to the model inside the imagenet directory for this to work?
2. And to be clear, hubconf.py will only contain the functions for instantiating each model variant, right?
That is, one entry for simplenetv1_5m1, another for simplenetv1_5m2, and so on, right? And I should only import these from my simplenet.py, right?
3. And then I fork this repo and create a new .md file, right?
How should I name that file? Should I use my real name for owner or my GitHub handle name? i.e. coderx7_SimpleNet_Pytorch_title?
What should I write for title here? Can I leave it out? How many characters are allowed?
Is the repo name case sensitive?
When I have created all of that, I make a pull request and that's it?
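For reference on questions 1 and 2: `hubconf.py` conventionally lives at the repository root and contains a `dependencies` list plus one entrypoint function per model variant. A hedged sketch (entrypoint names and import paths below are hypothetical placeholders, not the actual project layout):

```python
# hubconf.py -- placed at the repository root
dependencies = ["torch"]

def simplenetv1_5m_m1(pretrained=False, **kwargs):
    """Entrypoint: builds and returns one model variant."""
    from imagenet.simplenet import simplenet  # assumed module path
    return simplenet(variant="5m_m1", pretrained=pretrained, **kwargs)

def simplenetv1_5m_m2(pretrained=False, **kwargs):
    from imagenet.simplenet import simplenet
    return simplenet(variant="5m_m2", pretrained=pretrained, **kwargs)
```

Users would then load a variant with something like `torch.hub.load("Coderx7/SimpleNet_Pytorch", "simplenetv1_5m_m1")`, assuming those entrypoint names.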
| https://github.com/pytorch/hub/issues/328 | closed | [] | 2023-04-29T14:52:26Z | 2023-05-03T09:56:37Z | null | Coderx7 |
pytorch/pytorch | 100,293 | How to get nn.MultiheadAttention mid layer output | ### 📚 The doc issue
Hello, I have a question about MultiheadAttention (MA for short). It is not about the [doc explanation](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html?highlight=multiheadattention#torch.nn.MultiheadAttention), but about using this module. I want to plot a heatmap (CAM) for my transformer-based neural network. For this, I need the MA mid-layer output, especially the dot-product results for the query-key pairs. How can I get it? If I can't get it, I have to estimate the result of the self-attention layers from the output dot products, but this estimation may introduce errors. So do you have any idea how to get the mid-layer results?
I want to use `register_forward_hook`, but this module's printed architecture confuses me because it doesn't show the component layer that I need.
```
>>> print(self_attn)
MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
)
```
So can you help me? Thank you very much!
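For reference, one hook-free option (a sketch, with arbitrary sizes): `nn.MultiheadAttention.forward` can return the attention maps via `need_weights=True`, and `average_attn_weights=False` keeps one map per head instead of averaging them. Note these are the post-softmax weights; the raw query-key dot products would still have to be recomputed from the in-projection weights.

```python
import torch
import torch.nn as nn

embed_dim, num_heads, batch, seq = 16, 4, 2, 5
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(batch, seq, embed_dim)
# attn has shape (batch, num_heads, seq, seq): softmax(QK^T / sqrt(d_head))
out, attn = mha(x, x, x, need_weights=True, average_attn_weights=False)

print(out.shape)   # torch.Size([2, 5, 16])
print(attn.shape)  # torch.Size([2, 4, 5, 5])
```

Each row of `attn` sums to 1, since it is the post-softmax attention distribution.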
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/pytorch/issues/100293 | closed | [] | 2023-04-28T23:30:29Z | 2023-04-30T05:51:23Z | null | Lucky-Light-Sun |
huggingface/datasets | 5,805 | Improve `Create a dataset` tutorial | Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. In the **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can load data from a directory in the required format) for `csv`, `json/jsonl`, `parquet` and `txt` files. We have info about these loaders in a separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files), but it's worth briefly mentioning them in the beginner tutorial because they are more common and for consistency. It would be helpful to add the link to the full guide.
2. **From local files** section lists methods for creating a dataset from in-memory data which are also described in [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data).
Maybe we should actually rethink and restructure this tutorial somehow. | https://github.com/huggingface/datasets/issues/5805 | open | [
"documentation"
] | 2023-04-28T13:26:22Z | 2024-07-26T21:16:13Z | 4 | polinaeterna |
huggingface/dataset-viewer | 1,104 | Delete finished jobs immediately? | Currently, finished jobs are deleted after 7 days by an index. See https://github.com/huggingface/datasets-server/blob/259fd092c12d240d9b8d733c965c4b9362e90684/libs/libcommon/src/libcommon/queue.py#L144
But we never use the finished jobs, so:
- we could delete them immediately after finishing
- we could reduce the duration from 7 days to 1 hour (can be complementary to the previous action, to clean uncaught jobs)
For point 2, see https://github.com/huggingface/datasets-server/pull/1103
Stats:
- 9.805.591 jobs
- 13.345 are not finished! (0.1% of the jobs) | https://github.com/huggingface/dataset-viewer/issues/1104 | closed | [
"question",
"improvement / optimization"
] | 2023-04-28T11:49:10Z | 2023-05-31T12:20:38Z | null | severo |
pytorch/pytorch | 100,181 | [Dynamo] How to better handle customized list/dict | ### 🐛 Describe the bug
This is a pattern I found in a Meta-internal use case:
```
import torch
import logging
import torch._dynamo
from typing import Any, List, Optional
torch._logging.set_logs(dynamo=logging.DEBUG)
class _non_none_list(list):
def append(self, obj: Any):
if obj is not None:
super().append(obj)
def extend(self, lst: Optional[List[Any]]):
if lst is not None:
super().extend(x for x in lst if x is not None)
def fn(x):
a = _non_none_list()
a.append(x)
a.append(x + 1)
return torch.cat(a, dim=1)
x = torch.rand(2, 2)
print(fn(x))
opt_fn = torch.compile(backend="eager")(fn)
print(opt_fn(x))
```
There are three major graph breaks:
* ```Unsupported: call_function UserDefinedClassVariable() [] {}``` when calling ```_non_none_list()```.
* ```Unsupported: non-function or method super: <method 'append' of 'list' objects>``` when calling ```super().append(obj)```
* ```Unsupported: call_function args: UserDefinedObjectVariable(_non_none_list) ConstantVariable(int)``` when calling ```torch.cat(a, dim=1)```.
However, if I switch the customized list to a Python builtin list, there is no graph break. I'd like to know what Dynamo's story is for handling customized lists/dicts.
### Versions
N/A
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire | https://github.com/pytorch/pytorch/issues/100181 | closed | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2023-04-27T16:26:39Z | 2023-05-03T04:25:40Z | null | yanboliang |
pytorch/TensorRT | 1,861 | ❓ [Question] Binding index warnings while using fx backend | ## ❓ Question
I want to convert a torch model (a Python nn.Module) to a runtime model (in C++) using the torch.fx capabilities. That will allow me to accelerate a model that isn't fully supported by TensorRT.
The model I'm using is-
```python
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 10)
        self.relu = nn.ReLU()

    def forward(self, input_0):
        input_0 = self.linear(input_0)
        input_0 = self.relu(input_0)
        input_0 = torch.linalg.norm(input_0, ord=2, dim=1)
        output_0 = self.relu(input_0)
        return output_0
```
The compile command I'm using right after that is-
```python
model = Model().cuda().eval()
trt_fx_module_f = torch_tensorrt.fx.compile(
    model, input=[torch.randn(1, 10, device="cuda")], lower_precision="fp32", min_acc_module_size=1, explicit_batch_dimension=True, use_experimental_fx_rt=True
)
```
(I wrote it according to the response I received in torch's forums- https://discuss.pytorch.org/t/using-torchtrt-fx-backend-on-c/170639/6)
But when I try to use it, I see warnings about the model's binding indices. For example, here is one of the warnings:
`WARNING: [Torch-TensorRT] - ICudaEngine::getProfileDimensions: bindingIndex 0 is not in profile 2. Using bindingIndex = 4 instead.`
On the other hand, when I use the flow from this notebook (on the same model), the warnings vanish:
https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py
Can you tell me what are the actual differences between these two flows?
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.0
- CPU Architecture: x86-64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): -
- Are you using local sources or building from archives: Torch-TensorRT which has been built from sources
- Python version: 3.8.10
- CUDA version: 11.8
- GPU models and configuration: NVIDIA T1000
- Any other relevant information: -
@OronG13 | https://github.com/pytorch/TensorRT/issues/1861 | closed | [
"question",
"No Activity"
] | 2023-04-27T13:31:59Z | 2023-08-10T00:02:37Z | null | DanielLevi6 |
huggingface/transformers.js | 102 | How to convert Whisper Large v2 | Hello!
How can I convert the whisper-large-v2 model to ONNX?
I'm using this command
`python3.9 -m scripts.convert --model_id whisper-large-v2 --quantize --task automatic-speech-recognition`
But when I try to load the converted model I get the following error:
`Error: File not found. Could not locate "encoder_model.onnx".`
Thank you! | https://github.com/huggingface/transformers.js/issues/102 | closed | [
"question"
] | 2023-04-27T13:30:33Z | 2023-05-31T13:18:33Z | null | hotmeatballs |
huggingface/datasets | 5,797 | load_dataset is case sensitive? | ### Describe the bug
Is the load_dataset() function case sensitive?
### Steps to reproduce the bug
The following two code, get totally different behavior.
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
### Expected behavior
Compare 1 and 2.
1 will download all 52 subsets, shell output:
```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx```
2 will only download a single subset, shell output:
```Downloading and preparing dataset bactrian-x/en to xxx```
### Environment info
Python 3.10.11
datasets Version: 2.11.0 | https://github.com/huggingface/datasets/issues/5797 | open | [] | 2023-04-26T18:19:04Z | 2023-04-27T11:56:58Z | 2 | haonan-li |
huggingface/chat-ui | 122 | Add pre-prompt | cc @OlivierDehaene
> Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.
> `-----`
> `<current prompt>`
> `-----`
Is this something we want to do ASAP @julien-c @gary149 ? | https://github.com/huggingface/chat-ui/issues/122 | closed | [] | 2023-04-26T15:58:55Z | 2023-04-26T16:46:05Z | 1 | coyotte508 |
pytorch/TensorRT | 1,860 | ❓ [Question] Runtimes for timm + TensorRT | ## ❓ Question
I created a script to compare inference runtimes with `torch`, `torch.compile` and `torch_tensorrt.compile` for any timm model, input shape and dtype, and some runtimes are worse using TensorRT. Why?
## What you have already tried
I used [latest NVIDIA pytorch container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)(`nvcr.io/nvidia/pytorch:23.04-py3`, released today) on a g5.2xlarge AWS instance (A10g GPU). You can find the script (`benchmark.py`) at the end of this issue and the command used to run it below :
```bash
docker run --gpus all --rm --volume $DIR:/app nvcr.io/nvidia/pytorch:23.04-py3 /bin/bash -c "pip install --pre timm && python /app/benchmark.py"
```
with `$DIR` the path to the directory where I saved the script. Here are a few results :
| model | dtype | shape | torch | torch.compile | torch_tensorrt.compile |
|--------|--------|--------|--------|--------|--------|
| resnet50 | float32 | (16, 3, 224, 224) | 16.0ms | 11.4ms | **7.6ms** |
| resnet50 | float16 | (16, 3, 224, 224) | 9.0ms | 6.3ms | **3.6ms** |
| convnext_large | float32 | (16, 3, 224, 224) | 70.5ms | **56.7ms** | 145.9ms |
| convnext_large | float16 | (16, 3, 224, 224) | 35.4ms | **28.3ms** | 64.8ms |
| vit_base_patch16_224 | float32 | (16, 3, 224, 224) | 28.6ms | **28.2ms** | 30.5ms |
| vit_large_patch14_clip_336 | float32 | (16, 3, 336, 336) | 288.1ms | 284.2ms | 310.2ms |
| vit_large_patch14_clip_336 | float16 | (16, 3, 336, 336) | 129.1ms | 127.5ms | error° |
(error°: `Expected input tensors to have type Half, found type float`, maybe some forcing on LayerNorm layers is applied and I should enable mixed precision somehow?)
Everything goes well for the resnet50 model, but for the convnext_large and vit models the `torch_tensorrt.compile` option gets lower throughput and even fails in one case. And of course these models are the ones I am interested in 😅
Several questions :
- Do you see any issue with the script I provided or how I ran it ?
- How can I minimize the runtimes for the convnext_large and vit_large_patch14_clip_336 models ? Would using ONNX + TensorRT provide different results ? Is it related to how these models are implemented in timm ?
I can provide more details if needed (*e.g.* a stack trace),
Thanks for your help and support,
Simon
____
```python
from time import time
import timm
import torch
import torch_tensorrt
def benchmark(model, inputs, compile_torch=False, compile_tensorrt=False, n_warmups=5, n_runs=100):
"""
1. Optionally compile the model
2. Warmup phase (n_warmups)
3. Benchmark phase (n_runs)
"""
assert not (compile_torch and compile_tensorrt), "Cannot compile both torch and tensorrt"
# 1. Compile
if compile_tensorrt:
model = torch_tensorrt.compile(model,
inputs=[torch_tensorrt.Input(inputs.shape, dtype=inputs.dtype)],
enabled_precisions={inputs.dtype})
if compile_torch:
model = torch.compile(model)
# 2. Warmup
for _ in range(n_warmups):
with torch.no_grad():
model(inputs)
torch.cuda.synchronize()
# 3. Benchmark
runtimes = []
for _ in range(n_runs):
with torch.no_grad():
start = time()
model(inputs)
torch.cuda.synchronize()
runtimes.append(time() - start)
# Print result
print('*' * 80)
print(f"Average: {1000*sum(runtimes)/n_runs:.2f}ms")
print('*' * 80)
if __name__ == '__main__':
# To run this script using the latest pytorch docker image, save it into a directory (DIR) and run:
# docker run --gpus all --rm --volume $DIR:/app nvcr.io/nvidia/pytorch:23.04-py3 /bin/bash -c "pip install --pre timm && python /app/benchmark.py"
# Parameters
model_name = 'resnet50'
shape = (16, 3, 224, 224)
dtype = torch.float32
# Prepare model and inputs
model = timm.create_model(model_name)
model.eval().cuda().type(dtype)
inputs = torch.randn(*shape).type(dtype).cuda()
benchmark(model, inputs)
benchmark(model, inputs, compile_torch=True)
benchmark(model, inputs, compile_tensorrt=True)
``` | https://github.com/pytorch/TensorRT/issues/1860 | closed | [
"question",
"No Activity"
] | 2023-04-26T15:19:14Z | 2024-10-04T15:58:16Z | null | SimJeg |
huggingface/setfit | 367 | Massive Text Embedding Benchmark (MTEB) Leaderboard | https://huggingface.co/spaces/mteb/leaderboard
Can we use all of these with setfit? | https://github.com/huggingface/setfit/issues/367 | closed | [
"question"
] | 2023-04-26T09:18:27Z | 2023-12-05T14:48:55Z | null | vahuja4 |
huggingface/huggingface.js | 165 | Add E2E where the module is downloaded (or linked) to a TS project | To prevent things like #164 | https://github.com/huggingface/huggingface.js/issues/165 | closed | [
"tooling"
] | 2023-04-25T20:23:17Z | 2023-05-07T09:18:47Z | null | coyotte508 |
pytorch/TensorRT | 1,858 | ❓ [Question] Why was this Repo renamed to TensorRT ? | Thank you all for the great work on Torch-TensorRT.
It's been a pleasure to see it evolve since the days of TRTorch.
This repo went through multiple names but I think the current one is extremely confusing, if I clone both this repo and the original TensorRT repo I now have two TensorRT folders.
This is extremely confusing and at times infuriating. Would it be possible to know more about what prompted this naming choice ?
Wouldn't it be clearer to use Torch-TensorRT ?
PS: it seems [other people are confused and are posting issues for TensorRT here](https://github.com/pytorch/TensorRT/issues/1703)
| https://github.com/pytorch/TensorRT/issues/1858 | closed | [
"question"
] | 2023-04-25T12:03:06Z | 2023-05-02T10:08:41Z | null | MatthieuToulemont |
huggingface/transformers.js | 100 | Whisper on webGPU? | Somewhat related to [this thread](https://github.com/xenova/transformers.js/issues/20).
Is it within scope to implement a WebGPU-accelerated version of Whisper?
Not sure if this helps, but there is a [C port of Whisper with a CPU implementation](https://github.com/ggerganov/whisper.cpp), and as mentioned in [this discussion](https://github.com/ggerganov/whisper.cpp/discussions/126), the main thing that needs to be offloaded to the GPU is the GGML_OP_MUL_MAT operator.
"question"
] | 2023-04-25T09:34:10Z | 2024-10-18T13:30:07Z | null | sandorkonya |
pytorch/data | 1,140 | Shuffle batches across workers | ### 🚀 The feature
I have a Dataloader with n workers. My understanding is that each worker constructs a full batch independently, which is then served by the dataloader. My samples are large, so I cannot increase the shuffle buffer size in each worker. Is there a way to perform the batching and shuffling only in the main process?
### Motivation, pitch
This would improve shuffling for a fixed memory usage.
### Alternatives
I tried having an inner dataloader with n workers that produces samples, and an outer dataloader that shuffles and batches them. I couldn't get the inner dataloader to use n workers.
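For what it's worth, one dependency-light way to keep per-worker memory flat is to let the workers yield single samples (e.g. an inner DataLoader with `batch_size=None`) and do both the buffered shuffle and the batching in the main process. A sketch of that idea, using a plain iterable as a stand-in for the worker stream:

```python
import random
from itertools import islice
from typing import Iterable, Iterator, List


def shuffle_stream(stream: Iterable, buffer_size: int, seed: int = 0) -> Iterator:
    """Buffered shuffle in the main process; holds at most buffer_size samples."""
    rng = random.Random(seed)
    it = iter(stream)
    buf = list(islice(it, buffer_size))
    for item in it:
        idx = rng.randrange(len(buf))
        yield buf[idx]
        buf[idx] = item
    rng.shuffle(buf)
    yield from buf


def batch_stream(stream: Iterable, batch_size: int) -> Iterator[List]:
    """Batching in the main process, after the shuffle."""
    batch = []
    for item in stream:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch


# Stand-in for samples coming out of n workers; with PyTorch this would be
# e.g. DataLoader(dataset, num_workers=n, batch_size=None).
samples = range(10)
batches = list(batch_stream(shuffle_stream(samples, buffer_size=4), batch_size=3))
print(batches)  # 4 batches covering all 10 samples, in shuffled order
```

Memory stays bounded by `buffer_size` samples plus one in-flight batch, regardless of how many workers feed the stream.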
### Additional context
_No response_ | https://github.com/meta-pytorch/data/issues/1140 | closed | [] | 2023-04-24T15:53:08Z | 2023-04-28T02:49:08Z | 2 | platers |
pytorch/text | 2,159 | how to use Field ,RawField with torchtext 0.15.0 , don't need lower version | ## 🐛 Bug
**Describe the bug** A clear and concise description of what the bug is.
- PyTorch Version (e.g., 1.0): 1.12
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:3.8
- CUDA/cuDNN version: 10.2
- GPU models and configuration:
- Any other relevant information:
My environment is PyTorch 1.12+, but I want to use torchtext 0.12+. However, torchtext 0.12+ has removed Field, so how can I use Field with it? When I pip install a torchtext version below 0.12, it installs PyTorch by default and overrides the version I installed before. I want to use PyTorch 1.12+ together with torchtext's Field and RawField. How can I achieve this?
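For context, the old `Field` workflow (tokenize, build vocab, numericalize) was split in torchtext 0.12+ into composable pieces such as `get_tokenizer` and `build_vocab_from_iterator`. A dependency-free sketch of the equivalent steps (the mini corpus is made up):

```python
from collections import Counter

corpus = ["the cat sat", "the dog sat down"]

# 1. Tokenize (Field(tokenize=...) used to do this step).
tokenized = [line.lower().split() for line in corpus]

# 2. Build the vocab (Field.build_vocab / torchtext's build_vocab_from_iterator).
counter = Counter(tok for line in tokenized for tok in line)
specials = ["<unk>", "<pad>"]
itos = specials + sorted(counter, key=lambda t: (-counter[t], t))
stoi = {tok: i for i, tok in enumerate(itos)}

# 3. Numericalize (Field.numericalize), mapping unknown tokens to <unk>.
def numericalize(tokens):
    return [stoi.get(tok, stoi["<unk>"]) for tok in tokens]

print(numericalize(["the", "cat", "flew"]))  # [3, 4, 0]
```

The same three steps, expressed with torchtext 0.12+ APIs, avoid the Field/PyTorch version conflict entirely.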
| https://github.com/pytorch/text/issues/2159 | open | [] | 2023-04-22T03:17:29Z | 2023-04-23T07:51:49Z | null | cqray1990 |
huggingface/optimum | 1,002 | Add a README & log at export | ### Feature request
The logs of the ONNX export are insightful.
Moreover, it would be good to generate automatically a README/json containing:
* which params were used at export
* For decoders, how to use the obtained `.onnx` models, as it can be a bit involved for somebody who does not use the Optimum ORT integration but wants to rewrite a custom implementation (in whatever language).
### Motivation
Readability for models on the Hub, reproducibility
### Your contribution
/ | https://github.com/huggingface/optimum/issues/1002 | open | [
"feature-request",
"onnx",
"tflite"
] | 2023-04-21T15:31:43Z | 2023-04-21T15:31:43Z | 0 | fxmarty |
huggingface/optimum | 999 | Remove attention mask creation for batch size = 1 when using SDPA | ### Feature request
Some pieces of transformers code are not useful when using SDPA with batch size = 1, for example:
https://github.com/huggingface/transformers/blob/874c7caf1966b1d0ee2749046703ada7a12ed797/src/transformers/models/gpt2/modeling_gpt2.py#L804-L822
https://github.com/huggingface/transformers/blob/874c7caf1966b1d0ee2749046703ada7a12ed797/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L495-L512
Removing them could speed up generation.
An example of how to do this is in https://github.com/huggingface/optimum/pull/998
### Motivation
Remove unnecessary overhead
### Your contribution
/ | https://github.com/huggingface/optimum/issues/999 | closed | [
"feature-request",
"bettertransformer",
"Stale"
] | 2023-04-21T14:41:04Z | 2025-05-29T02:14:32Z | 1 | fxmarty |
pytorch/serve | 2,253 | Troubled me too, How to solve this problem in TorchServe 0.7.1 | Just to let you know that I have the same kind of issue on Windows server 2019, with TorchServe 0.7.1.
From Anaconda Prompt (run as administrator), I run `torchserve --start ...` and everything goes fine, including the inference test on the served model. I stop the `torchserve --start ...` command with CTRL+C.
I guess the SIGINT is not caught by `torchserve.exe` on Windows to delete the `.model_server.pid` from `%APP_DATA%\Local\Temp\1\`, so I have to delete it manually before running the next `torchserve --start ...` command.
_Originally posted by @khelkun in https://github.com/pytorch/serve/issues/1866#issuecomment-1425916308_
| https://github.com/pytorch/serve/issues/2253 | closed | [
"triaged",
"windows"
] | 2023-04-21T04:09:42Z | 2023-10-28T19:39:28Z | null | Z863058 |
pytorch/TensorRT | 1,845 | ❓ [Question] Can I use TensorRT8.5.3.1 and torch1.10.1 with torch_TensorRT? | ## ❓ Question
<!-- Your question -->
I found that when I pip install the torch_tensorrt version corresponding to TensorRT 8.5.3.1, torch must be 1.13. Can I use TensorRT 8.5.3.1 and torch 1.10.1 with Torch-TensorRT?
And if I use the C++ Torch-TensorRT, can I avoid this situation?
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.10.1
- CPU Architecture: x86
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Python version: 3.9.16
- CUDA version: 11.6
- GPU models and configuration: rtx3090 | https://github.com/pytorch/TensorRT/issues/1845 | closed | [
"question"
] | 2023-04-20T12:34:27Z | 2023-04-23T08:05:33Z | null | Yoh-Z |
pytorch/TensorRT | 1,844 | ❓ [Question] Internal Error-given invalid tensor name | ## ❓ Question
I want to convert a torch model (from Python) to a runtime model (in C++) using the torch.fx capabilities. That will allow me to accelerate a model that isn't fully supported by TensorRT.
I understand that this flow is experimental, so I used the examples which are given in this repository.
By using this example-
https://github.com/pytorch/TensorRT/blob/main/examples/fx/fx2trt_example_next.py
I got some internal errors while running this part of the code (and also while running inference after that; the error messages are identical, so I guess it's related):
```python
trt_mod = TRTModule(
    name="my_module",
    serialized_engine=engine_str,
    input_binding_names=r.input_names,
    output_binding_names=r.output_names,
    target_device=Device(f"cuda:{torch.cuda.current_device()}"),
)
```
The error messages are-
```
ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorShape given invalid tensor name: input_0)
ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorDataType given invalid tensor name: input_0)
ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorShape given invalid tensor name: output_0)
ERROR: [Torch-TensorRT] - 3: [engine.cpp::getProfileObliviousBindingIndex::1386] Error Code 3: Internal Error (getTensorDataType given invalid tensor name: output_0)
```
What can cause these errors?
I tried to find another way to define the model inputs and outputs (which might affect the input and output names in some way, as hinted by the error messages), but I don't see another way in the examples.
## What you have already tried
I have already tried the notebook I linked above, and another flow I got from the PyTorch forums:
https://discuss.pytorch.org/t/using-torchtrt-fx-backend-on-c/170639/6
The code for this flow is-
```python
model_fx = model_fx.cuda()
inputs_fx = [i.cuda() for i in inputs_fx]
trt_fx_module_f16 = torch_tensorrt.compile(
    model_fx,
    ir="fx",
    inputs=inputs_fx,
    enabled_precisions={torch.float16},
    use_experimental_fx_rt=True,
    explicit_batch_dimension=True
)

torch.save(trt_fx_module_f16, "trt.pt")
reload_trt_mod = torch.load("trt.pt")

scripted_fx_module = torch.jit.trace(trt_fx_module_f16, example_inputs=inputs_fx)
scripted_fx_module.save("/tmp/scripted_fx_module.ts")
scripted_fx_module = torch.jit.load("/tmp/scripted_fx_module.ts")  # This can also be loaded in C++
```
The error is the same when running the `torch_tensorrt.compile` method with the `use_experimental_fx_rt=True` flag.
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.13.1
- CPU Architecture: x86-64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): -
- Are you using local sources or building from archives: I used the pre-built version of Torch-TensorRT 1.3.0 release
- Python version: 3.8.10
- CUDA version: 11.8
- GPU models and configuration: NVIDIA T1000
- Any other relevant information: -
| https://github.com/pytorch/TensorRT/issues/1844 | closed | [
"question"
] | 2023-04-20T10:35:35Z | 2023-04-27T16:10:46Z | null | DanielLevi6 |
huggingface/optimum | 987 | Have optimum supported BLIP-2 model converted to onnx? | Hi, have optimum supported BLIP-2 model converted to onnx? | https://github.com/huggingface/optimum/issues/987 | closed | [] | 2023-04-20T07:07:53Z | 2023-04-21T11:45:41Z | 1 | joewale |
pytorch/serve | 2,242 | How to send a json body to Torchserve | I'd like to do a POST request to TorchServe with application/json as its content-type, instead of a file. The data could be `{"text": "hi"}`. Is that possible?
In the docs it is shown how you can send binary file data
```
import requests
res = requests.post("http://localhost:8080/predictions/squeezenet1_1", files={'data': open('docs/images/dogs-before.jpg', 'rb'), 'data': open('docs/images/kitten_small.jpg', 'rb')})
```
Can't get anything like this to work:
```
import requests
import io
str = "oi321op4"
raw_data = io.BytesIO(str.encode())
files = {"data": raw_data}
res = requests.post("myurl", files=files)
res.json()
``` | https://github.com/pytorch/serve/issues/2242 | closed | [] | 2023-04-19T16:17:51Z | 2023-04-19T22:47:55Z | null | nihiluis |
huggingface/setfit | 364 | Understanding the trainer parameters | I am looking at the SetFit example with SetFitHead:
```
# Create trainer
trainer = SetFitTrainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
loss_class=CosineSimilarityLoss,
metric="accuracy",
batch_size=16,
num_iterations=20, # The number of text pairs to generate for contrastive learning
num_epochs=1, # The number of epochs to use for contrastive learning
column_mapping={"sentence": "text", "label": "label"} # Map dataset columns to text/label expected by trainer
)
```
Here, what exactly is the meaning of `num_iterations`? And why is `num_epochs=1`? Is that sufficient?
```
trainer.unfreeze(keep_body_frozen=False)
trainer.train(
num_epochs=25, # The number of epochs to train the head or the whole model (body and head)
batch_size=16,
body_learning_rate=1e-5, # The body's learning rate
learning_rate=1e-2, # The head's learning rate
l2_weight=0.0, # Weight decay on **both** the body and head. If `None`, will use 0.01.
)
metrics = trainer.evaluate()
```
Here the number of epochs is 25 and we are training the head and the body with different learning rates. Any reason why the per-epoch metrics are not displayed?
| https://github.com/huggingface/setfit/issues/364 | closed | [
"question"
] | 2023-04-19T15:19:42Z | 2023-11-24T13:22:31Z | null | vahuja4 |
huggingface/diffusers | 3,151 | What is the format of the training data | Hello, I'm training LoRA, but I don't know what the data format looks like.
The error is as follows:
--caption_column' value 'text' needs to be one of: image
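For reference, this error usually means the loaded dataset only exposes an `image` column. The `imagefolder` convention used by the text-to-image LoRA scripts is a folder of images plus a `metadata.jsonl` file whose rows carry a `file_name` field and a caption column. A hedged sketch that writes such a file (file names and captions are made up):

```python
import json
from pathlib import Path
from tempfile import mkdtemp

train_dir = Path(mkdtemp())  # stands in for your train/ folder of images

rows = [
    {"file_name": "0001.png", "text": "a photo of a red bicycle"},
    {"file_name": "0002.png", "text": "a watercolor painting of a fox"},
]
with open(train_dir / "metadata.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# load_dataset("imagefolder", data_dir=train_dir) would then expose both an
# `image` column and a `text` column, so --caption_column text can resolve.
loaded = [json.loads(line) for line in open(train_dir / "metadata.jsonl")]
print(loaded[0]["text"])  # a photo of a red bicycle
```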
What is the data format? | https://github.com/huggingface/diffusers/issues/3151 | closed | [
"stale"
] | 2023-04-19T07:51:16Z | 2023-08-04T10:20:18Z | null | WGS-note |
pytorch/TensorRT | 1,835 | ❓ [Question] Is torch-tensorrt compiled code device agnostic? | Thanks for this wonderful repo!
Is the torch-tensorrt compiled code runnable on any (Nvidia) device or should it be compiled on the target device? I know that the usual tensorrt programs (compiled from onnx) need to be compiled on the target device. I would expect the same from torch-tensorrt. However, the docs on [deployment](https://pytorch.org/TensorRT/tutorials/runtime.html#runtime) do not specify this and rather make me believe that the compiled code is device agnostic.
| https://github.com/pytorch/TensorRT/issues/1835 | closed | [
"question"
] | 2023-04-18T16:44:56Z | 2023-04-18T17:05:52Z | null | FabianSchuetze |
huggingface/setfit | 360 | Token padding makes ONNX inference 6x slower, is attention_mask being used properly? | Here's some code that loads in my ONNX model and tokenizes 293 short examples. The longest length in the set is 153 tokens:
```python
input_text = test_ds['text']
import onnxruntime
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(
input_text,
max_length=512,
padding='longest',
truncation=True,
return_attention_mask=True,
return_token_type_ids=True,
return_tensors="np",
)
session = onnxruntime.InferenceSession(onnx_path)
```
```python
onnx_preds = session.run(None, dict(inputs))[0]
```
This runs in about 15-20 seconds for me. However, when I set `padding='max_length'` it takes about 1min20secs. Isn't the point of `attention_mask` to avoid this? The base model is `intfloat/e5-small`, Microsoft's e5 model which AFAICT is similar to mpnet. | https://github.com/huggingface/setfit/issues/360 | open | [
"question"
] | 2023-04-18T15:33:01Z | 2023-04-19T05:40:02Z | null | bogedy |
huggingface/datasets | 5,767 | How to use Distill-BERT with different datasets? | ### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Steps to reproduce the bug
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
### Expected behavior
Distill-BERT should work with different datasets.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | https://github.com/huggingface/datasets/issues/5767 | closed | [] | 2023-04-18T06:25:12Z | 2023-04-20T16:52:05Z | 1 | sauravtii |
huggingface/transformers.js | 93 | [Feature Request] "slow tokenizer" format (`vocab.json` and `merges.txt`) | Wondering whether this code is supposed to work (or some variation on the repo URL - I tried a few different things):
```js
await import("https://cdn.jsdelivr.net/npm/@xenova/transformers@1.4.2/dist/transformers.min.js");
let tokenizer = await AutoTokenizer.from_pretrained("https://huggingface.co/cerebras/Cerebras-GPT-1.3B/resolve/main");
```
The `cerebras/Cerebras-GPT-1.3B` repo only has a `config.json` (no `tokenizer.json`), but the `config.json` has `"model_type": "gpt2",` and has `vocab.json` and `merges.txt`. It does load successfully with the Python Transformers lib. | https://github.com/huggingface/transformers.js/issues/93 | closed | [
"question"
] | 2023-04-18T05:11:31Z | 2023-04-23T07:41:27Z | null | josephrocca |
huggingface/datasets | 5,766 | Support custom feature types | ### Feature request
I think it would be nice to allow registering custom feature types with the 🤗 Datasets library. For example, by allowing something along the following lines:
```
from datasets.features import register_feature_type # this would be a new function
@register_feature_type
class CustomFeatureType:
def encode_example(self, value):
"""User-provided logic to encode an example of this feature."""
pass
def decode_example(self, value, token_per_repo_id=None):
"""User-provided logic to decode an example of this feature."""
pass
```
### Motivation
Users of 🤗 Datasets, such as myself, may want to use the library to load datasets with unsupported feature types (i.e., beyond `ClassLabel`, `Image`, or `Audio`). This would be useful for prototyping new feature types and for feature types that aren't used widely enough to warrant inclusion in 🤗 Datasets.
At the moment, this is only possible by monkey-patching 🤗 Datasets, which obfuscates the code and is prone to breaking with library updates. It also requires the user to write some custom code which could be easily avoided.
### Your contribution
I would be happy to contribute this feature. My proposed solution would involve changing the following call to `globals()` to an explicit feature type registry, which a user-facing `register_feature_type` decorator could update.
https://github.com/huggingface/datasets/blob/fd893098627230cc734f6009ad04cf885c979ac4/src/datasets/features/features.py#L1329
I would also provide an abstract base class for custom feature types which users could inherit. This would have at least an `encode_example` method and a `decode_example` method, similar to `Image` or `Audio`.
The existing `encode_nested_example` and `decode_nested_example` functions would also need to be updated to correctly call the corresponding functions for the new type. | https://github.com/huggingface/datasets/issues/5766 | open | [
"enhancement"
] | 2023-04-17T15:46:41Z | 2024-03-10T11:11:22Z | 4 | jmontalt |
huggingface/transformers.js | 92 | [Question] ESM module import in the browser (via jsdelivr) | Wondering how to import transformers.js as a module (as opposed to `<script>`) in the browser? I've tried this:
```js
let { AutoTokenizer } = await import("https://cdn.jsdelivr.net/npm/@xenova/transformers@1.4.2/dist/transformers.min.js");
```
But it doesn't seem to export anything. I might be making a mistake here, but if not: would it be possible to get a module-based JS file for the browser?
---
Also, as an aside, can I suggest using a versioned URL in the readme? Or something like:
```
https://cdn.jsdelivr.net/npm/@xenova/transformers@X.Y.Z/dist/transformers.min.js
```
With a note telling them to replace `X.Y.Z` with the latest version. This allows you to make breaking changes in the future without breaking a bunch of sites. Often newbie devs don't realise that they have to swap for a versioned URL, and this can lead to "web rot" where old webpages eventually become broken or buggy. | https://github.com/huggingface/transformers.js/issues/92 | closed | [
"question"
] | 2023-04-17T10:06:55Z | 2023-04-22T19:17:56Z | null | josephrocca |
pytorch/serve | 2,236 | How to get image name | I use curl http://localhost:8080/predictions/resnet-18 -T kitten_small.jpg
I want to get the image name like kitten_small.jpg but the data in the handler is only image | https://github.com/pytorch/serve/issues/2236 | closed | [] | 2023-04-17T08:31:55Z | 2023-10-28T19:39:20Z | null | zzh1230 |
pytorch/data | 1,132 | torchdata.datapipes.map.Shuffler should return a MapDataPipe | ### 🐛 Describe the bug
Hello. I am working on mixing two speech datasets, both of which are indexable. Using MapDataPipe, I shuffle one of the speech datasets and zip them together with a zipper:
```python
import torchdata.datapipes as dp
dp1 = dp.map.SequenceWrapper([0, 1, 2, 3, 4, 5]) # speech 1
dp2 = dp.map.SequenceWrapper(['a', 'b', 'c']) # speech 2
dp1 = dp.map.Shuffler(dp1) # shuffle one
dpz = dp.map.Zipper(dp1, dp2)
print(list(dpz))
print(list(dpz))
print(list(dpz))
print()
```
However, the shuffler returns an IterDataPipe, so the code above raises an error:
```
Traceback (most recent call last):
File "/mnt/home/quancs/projects/NBSS_pmt/testxxx.py", line 6, in <module>
dpz = dp.map.Zipper(dp1, dp2)
File "/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.10/site-packages/torch/utils/data/datapipes/map/combining.py", line 80, in __init__
raise TypeError("Expected all inputs to be `MapDataPipe`")
TypeError: Expected all inputs to be `MapDataPipe`
```
I tried to use IterDataPipe, but I don't know how to sample it in a DDP setting (where, within one epoch, each sample is drawn only once).
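A framework-free sketch of one possible workaround (plain Python stand-ins for the datapipes, not torchdata code): shuffle an index list with a seeded RNG that is identical on every DDP rank, then index into the map-style data.

```python
import random

speech1 = [0, 1, 2, 3, 4, 5]   # stand-in for the first speech dataset
speech2 = ["a", "b", "c"]      # stand-in for the second speech dataset

def zipped_epoch(seed):
    # Shuffle indices of the first dataset only; using the same seed on every
    # DDP rank keeps the permutation identical, so each pair is well defined
    # and each sample is drawn at most once per epoch.
    idx = list(range(len(speech1)))
    random.Random(seed).shuffle(idx)
    return [(speech1[i], speech2[j]) for j, i in enumerate(idx[: len(speech2)])]

print(zipped_epoch(0))
print(zipped_epoch(1))  # a different shuffle each epoch via the seed
```

The same idea works with `dp.map.SequenceWrapper(shuffled_indices)` feeding `__getitem__`, keeping everything a `MapDataPipe`.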
### Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 1500.000
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.63
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
NUMA node4 CPU(s): 64-79
NUMA node5 CPU(s): 80-95
NUMA node6 CPU(s): 96-111
NUMA node7 CPU(s): 112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.1.post0
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchdata==0.6.0
[pip3] torcheval==0.0.6
[pip3] torchmetrics==0.11.4
[pip3] torchtnt==0.0.7
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.5 py310hd5efca6_0
[conda] numpy-base 1.23.5 py310h8e6c178_0
[conda] pytorch 2.0.0 py3 | https://github.com/meta-pytorch/data/issues/1132 | closed | [] | 2023-04-17T02:29:55Z | 2023-04-18T14:56:05Z | 7 | quancs |
huggingface/optimum | 973 | How to run the encoder part only of the model transformed by BetterTransformer? | ### Feature request
If I want to run only the encoder part of a model such as "bert-large-uncased", skipping the word-embedding stage, I can build it with `nn.TransformerEncoder` in PyTorch eager mode. How can I implement the BetterTransformer version of the encoder?
```
encoder_layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=head_num)
hf_encoder = nn.TransformerEncoder(encoder_layer, num_layers=layer_num).to(device)
```
Based on the code above, `BetterTransformer.transform` cannot accept `hf_encoder` as input; it raises `AttributeError: 'TransformerEncoder' object has no attribute 'config'`.
### Motivation
To compare the performance of BetterTransformer and FasterTransformer ([link](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/bert_guide.md#run-fastertransformer-bert-on-pytorch)), I need to run only the encoder part of the model.
### Your contribution
I could add more guidelines to the docs/README. | https://github.com/huggingface/optimum/issues/973 | closed | [
"Stale"
] | 2023-04-17T02:29:44Z | 2025-06-04T02:15:33Z | 2 | WarningRan |
huggingface/datasets | 5,759 | Can I load in list of list of dict format? | ### Feature request
my jsonl dataset has following format:
```
[{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...]
[{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...]
```
When I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises:
```
File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json
).read()
File "site-packages/datasets/io/json.py", line 59, in read
self.builder.download_and_prepare(
File "site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "site-packages/datasets/builder.py", line 1749, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
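A possible workaround, sketched under the assumption that each JSONL line is a JSON array of records: flatten the lines yourself, then build the dataset from the flat list (e.g. with `Dataset.from_list`).

```python
import io
import json

# Stand-in for reading the JSONL file line by line.
raw = io.StringIO(
    '[{"input": "a", "output": "b"}, {"input": "c", "output": "d"}]\n'
    '[{"input": "e", "output": "f"}]\n'
)
# Flatten: each line parses to a list of dicts, concatenated into one list.
records = [rec for line in raw for rec in json.loads(line)]
print(records[0])  # {'input': 'a', 'output': 'b'}
# datasets.Dataset.from_list(records) would then accept this flat structure
# and support .map / .shuffle as usual (not executed here).
```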
### Motivation
I want to use features like `Dataset.map` or `Dataset.shuffle`, so I need the in-memory dataset to be in `arrow_dataset.Dataset` format.
### Your contribution
PR | https://github.com/huggingface/datasets/issues/5759 | open | [
"enhancement"
] | 2023-04-16T13:50:14Z | 2023-04-19T12:04:36Z | 1 | LZY-the-boys |
huggingface/setfit | 358 | Domain adaptation | Does setfit cover Adapter Transformers? https://arxiv.org/pdf/2007.07779.pdf | https://github.com/huggingface/setfit/issues/358 | closed | [
"question"
] | 2023-04-16T12:44:50Z | 2023-12-05T14:49:36Z | null | Elahehsrz |
huggingface/diffusers | 3,120 | The controlnet trained by diffusers scripts produce always same result no matter what the input images is | ### Describe the bug
I trained a ControlNet with the base model Chilloutmix-Ni and the dataset Abhilashvj/vto_hd_train, using the train_controlnet.py script provided in the diffusers repo.
After training I got a ControlNet model.
When I run inference with the same prompt and seed, the pipeline always outputs the same image no matter how I change the control image, which means the ControlNet does not take the control image as a condition at all.
### Reproduction
Train the ControlNet with the script in examples/controlnet:
`accelerate launch train_controlnet.py --pretrained_model_name_or_path="/root/autodl-tmp/chilloutmixckpt" --output_dir="/root/autodl-tmp/mycontrolnet" --dataset_name=Abhilashvj/vto_hd_train --resolution=512 --learning_rate=2e-6 --train_batch_size=1 --gradient_accumulation_steps=4 --num_train_epochs=10 --tracker_project_name="train_controlnet" --checkpointing_steps=10000`
The code I use for inference is below:
```
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler,DPMSolverMultistepScheduler
from diffusers.utils import load_image
import torch
base_model_path = "/root/autodl-tmp/chilloutmixckpt"
controlnet_path = "/root/autodl-tmp/mycontrolnet"
controlnet = ControlNetModel.from_pretrained(controlnet_path)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet
)
# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
#pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
control_image = load_image("https://datasets-server.huggingface.co/assets/Abhilashvj/vto_hd_train/--/Abhilashvj--vto_hd_train/train/5/conditioning_image/image.jpg")
control_image.save("./control8.png")
prompt = "1girl, best quality, ultra high res, high quality, ultra-detailed, professional lighting"
negative_prompt = 'paintings, sketches, extremely worst quality, worst quality, extremely low quality, low quality, normal quality, lowres, normal quality, monochrome, grayscale, missing fingers, extra fingers, bad teeth, bad anatomy, bad hands, bad feet, blurry face, bad eyes, slanted eyes, fused eye, skin spots, acnes, skin blemishes, age spot'
# generate image
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0]
image.save("./output8.png")
```
### Logs
_No response_
### System Info
diffusers 0.15.0
ubuntu
python3.8 | https://github.com/huggingface/diffusers/issues/3120 | closed | [
"bug",
"stale"
] | 2023-04-16T11:16:58Z | 2023-07-08T15:03:12Z | null | garyhxfang |
pytorch/data | 1,131 | What does it mean for a DataPipe to be 'replicable'? | ### 📚 The doc issue
The [ReadingService docs](https://pytorch.org/data/main/reading_service.html?highlight=replicable) describe the different sharding options and note that one applies to replicable and one to non-replicable datapipes, but it's not really explained what that means.
Indirectly related, I'm also confused by the names `ShardingRoundRobinDispatcher` and `ShardingFilter`. The docs for `ShardingFilter` say
> each instance of the DataPipe (on different workers) will have every n-th element of the original DataPipe, where n equals to the number of instances.
Is that not essentially the definition of round-robin distribution? How is that different from what the DataPipes downstream of a `ShardingRoundRobinDispatcher` on different workers receive?
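For concreteness, the every-n-th behavior quoted from the `ShardingFilter` docs can be illustrated with a toy sketch (this is not the torchdata implementation, just the static split it describes):

```python
# Every worker w out of n keeps elements w, w+n, w+2n, ... of the stream.
def every_nth_shards(data, n):
    return [data[w::n] for w in range(n)]

print(every_nth_shards(list(range(7)), 3))
# [[0, 3, 6], [1, 4], [2, 5]]
```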
### Suggest a potential alternative/fix
Clarify more the difference between `ShardingRoundRobinDispatcher` and `ShardingFilter` and explain what 'replicable' means in that context.
Possibly consider renaming `ShardingRoundRobinDispatcher` and `ShardingFilter` to something more meaningful, if the answers to my questions above are 'yes'. | https://github.com/meta-pytorch/data/issues/1131 | open | [] | 2023-04-15T03:27:12Z | 2023-05-27T21:47:09Z | 4 | lendle |
pytorch/TensorRT | 1,824 | ❓ [Question] Pytorch 2.0 Compatability? | ## ❓ Question
Thanks for this repo. Is TensorRT compatible with pytorch 2.0? I see that the latest release targets pytorch 1.13. Is there some way I can use TensorRT with pytorch 2.0?
| https://github.com/pytorch/TensorRT/issues/1824 | closed | [
"question"
] | 2023-04-14T17:38:33Z | 2023-04-22T21:21:34Z | null | FabianSchuetze |
huggingface/transformers.js | 87 | Can whisper-tiny speech-to-text translate to English as well as transcribe foreign language? | I know there is a separate translation engine (t5-small), but I'm wondering if speech-to-text with whisper-tiny (not whisper-tiny.en) can return English translation alongside the foreign-language transcription? -- I read Whisper.ai can do this. It seems like it would just be a parameter, but I don't know where to look. | https://github.com/huggingface/transformers.js/issues/87 | closed | [
"enhancement",
"question"
] | 2023-04-14T16:23:14Z | 2023-06-23T19:07:31Z | null | patrickinminneapolis |
pytorch/pytorch | 99,143 | No documentation to show how to implement aten::view for custom backend | ### 📚 The doc issue
The original code is:
```py
x = torch.empty([1024], device='privateuseone:0')
y = x.view([2, -1]) # raises an error because aten::view is missing
```
Then I get following errors:
```txt
NotImplementedError: Could not run 'aten::view' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::view' is only available for ..
```
According to some interface declaration in Pytorch source code, the extension looks like this:
```cpp
static at::Tensor __view(c10::DispatchKeySet ks, const at::Tensor & self, c10::SymIntArrayRef size) {
return at::_ops::view::redispatch(ks, self, size);
}
TORCH_LIBRARY_IMPL(aten, Antares, m) {
m.impl("view", __view);
}
```
However, it results in an infinite recursive call of this function and ends with a stack overflow.
I don't think `x.view([2, -1])` should really require the user to define its implementation. If this definition is a must, what documentation can I refer to in order to get it working correctly?
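To illustrate the recursion in framework-free terms (this is an analogy, NOT the real c10 dispatcher API): a handler that redispatches with its own key still in the key set calls itself forever, so the key must be excluded before redispatching.

```python
# Toy dispatcher: walks the key list and calls the first registered handler.
def make_dispatcher(table):
    def dispatch(keys, *args):
        for key in keys:
            if key in table:
                return table[key](dispatch, keys, *args)
        raise NotImplementedError(keys)
    return dispatch

def my_view(dispatch, keys, x):
    # Redispatch with our own key removed, mimicking key exclusion; without
    # this removal, dispatch would select my_view again, recursing forever.
    remaining = [k for k in keys if k != "PrivateUse1"]
    return dispatch(remaining, x)

table = {"PrivateUse1": my_view, "CPU": lambda d, k, x: ("cpu_view", x)}
dispatch = make_dispatcher(table)
print(dispatch(["PrivateUse1", "CPU"], 42))  # ('cpu_view', 42)
```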
### Suggest a potential alternative/fix
A documented example of how to implement a custom `aten::view`, or any simpler solution to the reshape problem above.
cc @malfet @zou3519 @svekars @carljparker | https://github.com/pytorch/pytorch/issues/99143 | open | [
"module: cpp-extensions",
"module: docs",
"triaged"
] | 2023-04-14T11:36:09Z | 2024-04-16T16:18:30Z | null | ghostplant |
huggingface/text-generation-inference | 182 | Is bert-base-uncased supported? | Hi,
I'm trying to deploy bert-base-uncased model by [v0.5.0](https://github.com/huggingface/text-generation-inference/tree/v0.5.0), but got an error: ValueError: BertLMHeadModel does not support `device_map='auto'` yet.
<details>
```
root@nick-test1-8zjwg-135105-worker-0:/usr/local/bin# ./text-generation-launcher --model-id bert-base-uncased
2023-04-14T07:24:23.167920Z INFO text_generation_launcher: Args { model_id: "bert-base-uncased", revision: None, sharded: None, num_shard: Some(1), quantize: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1000, max_total_tokens: 1512, max_batch_size: 32, max_waiting_tokens: 20, port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None }
2023-04-14T07:24:23.168401Z INFO text_generation_launcher: Starting shard 0
2023-04-14T07:24:29.874262Z ERROR shard-manager: text_generation_launcher: "Error when initializing model
Traceback (most recent call last):
File \"/opt/miniconda/envs/text-generation/bin/text-generation-server\", line 8, in <module>
sys.exit(app())
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/typer/main.py\", line 311, in __call__
return get_command(self)(*args, **kwargs)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/click/core.py\", line 1130, in __call__
return self.main(*args, **kwargs)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/typer/core.py\", line 778, in main
return _main(
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/typer/core.py\", line 216, in _main
rv = self.invoke(ctx)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/click/core.py\", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/click/core.py\", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/click/core.py\", line 760, in invoke
return __callback(*args, **kwargs)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/typer/main.py\", line 683, in wrapper
return callback(**use_params) # type: ignore
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/cli.py\", line 55, in serve
server.serve(model_id, revision, sharded, quantize, uds_path)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/server.py\", line 135, in serve
asyncio.run(serve_inner(model_id, revision, sharded, quantize))
File \"/opt/miniconda/envs/text-generation/lib/python3.9/asyncio/runners.py\", line 44, in run
return loop.run_until_complete(main)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/asyncio/base_events.py\", line 634, in run_until_complete
self.run_forever()
File \"/opt/miniconda/envs/text-generation/lib/python3.9/asyncio/base_events.py\", line 601, in run_forever
self._run_once()
File \"/opt/miniconda/envs/text-generation/lib/python3.9/asyncio/base_events.py\", line 1905, in _run_once
handle._run()
File \"/opt/miniconda/envs/text-generation/lib/python3.9/asyncio/events.py\", line 80, in _run
self._context.run(self._callback, *self._args)
> File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/server.py\", line 104, in serve_inner
model = get_model(model_id, revision, sharded, quantize)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/models/__init__.py\", line 130, in get_model
return CausalLM(model_id, revision, quantize=quantize)
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/text_generation_server/models/causal_lm.py\", line 308, in __init__
self.model = AutoModelForCausalLM.from_pretrained(
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.28.0.dev0-py3.9-linux-x86_64.egg/transformers/models/auto/auto_factory.py\", line 471, in from_pretrained
return model_class.from_pretrained(
File \"/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.28.0.dev0-py3.9-linux-x86_64.egg/transformers/modeling_utils.py\", line 2644, in from_pretrained
raise ValueError(f\"{model.__class__.__name__} does not support `device_map='{device_map}'` yet.\")
ValueError: BertLMHeadModel does not support `device_map='auto'` yet.
" rank=0
2023-04-14T07:24:30.475420Z ERROR text_generation_launcher: Shard 0 failed to start.
2023-04-14T07:24:30.475495Z INFO text_generation_launcher: Shut | https://github.com/huggingface/text-generation-inference/issues/182 | open | [
"question"
] | 2023-04-14T07:26:05Z | 2023-11-17T09:20:30Z | null | nick1115 |
huggingface/setfit | 355 | ONNX conversion of multi-ouput classifier | Hi,
I am trying to do onnx conversion for multilabel model using the multioutputclassifier
`model = SetFitModel.from_pretrained(model_id, multi_target_strategy="multi-output")`.
When I tried `export_onnx(model.model_body,
model.model_head,
opset=12,
output_path=output_path)`, it gave me an error indicating there's no `coef_`, I understand there's no coef_ in the multioutput classifier, but is there a way to do onnx conversion for this model?
Thanks!
| https://github.com/huggingface/setfit/issues/355 | open | [
"question"
] | 2023-04-13T22:08:13Z | 2023-04-20T17:00:48Z | null | jackiexue1993 |
pytorch/examples | 1,136 | examples/imagenet/main.py Multiple Gpus use for training | By setting up multiple Gpus for use, the model and data are automatically loaded to these Gpus for training. What is the difference between this way and single-node multi-GPU distributed training?
| https://github.com/pytorch/examples/issues/1136 | open | [] | 2023-04-13T12:05:39Z | 2023-04-30T01:18:17Z | 1 | Ansor-ZJJ |
pytorch/tutorials | 2,284 | [BUG] - module 'torch' has no attribute '_six' | ### Add Link
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
### Describe the bug
When I try to run the data-loader section, it keeps returning this error about torch not having the attribute `_six`. I made sure that my dataroot is right and the files are there, but that doesn't fix the problem.
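One workaround people use for this class of error is a small compatibility shim, since `torch._six` was removed in PyTorch 2.0. A hedged sketch (demonstrated with a stand-in module; with real torch installed you would register the shim before running the tutorial code, and `(str, bytes)` mirrors the old definition to the best of my knowledge):

```python
import sys
import types

# Register a tiny stand-in for the removed torch._six module.
shim = types.ModuleType("torch._six")
shim.string_classes = (str, bytes)
sys.modules["torch._six"] = shim

# Old tutorial code that looked up torch._six.string_classes now resolves.
mod = sys.modules["torch._six"]
print(isinstance("kitten.jpg", mod.string_classes))  # True
```

Upgrading torchvision to a release matching PyTorch 2.0 is the cleaner fix when possible.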
### Describe your environment
Mac and PyTorch version 2.0.0 | https://github.com/pytorch/tutorials/issues/2284 | closed | [
"question"
] | 2023-04-13T04:46:34Z | 2024-11-20T14:19:23Z | null | vanilladucky |
huggingface/transformers.js | 84 | [Question] New demo type/use case: semantic search (SemanticFinder) | Hi @xenova,
first of all thanks for the amazing library - it's awesome to be able to play around with the models without a backend!
I just created [SemanticFinder](https://do-me.github.io/SemanticFinder/), a semantic search engine in the browser with the help of transformers.js and [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).
You can find some technical details in the [blog post](https://geo.rocks/post/semanticfinder-semantic-search-frontend-only/).
I was wondering whether you'd be interested in showcasing semantic search as new demo type. Technically, it's not a new model but it's a new **use case** with an existing model so I don't know whether it's out of scope.
Anyway, I just wanted to let you know that your work is very much appreciated! | https://github.com/huggingface/transformers.js/issues/84 | closed | [
"question"
] | 2023-04-12T18:57:38Z | 2025-10-13T05:03:30Z | null | do-me |
huggingface/diffusers | 3,075 | Create a Video ControlNet Pipeline | **Is your feature request related to a problem? Please describe.**
Stable Diffusion video generation lacks precise movement control and composition control. This is not surprising, since the model was not trained or fine-tuned with videos.
**Describe the solution you'd like**
By following an analogous extension process that gave Stable Diffusion more composition control with ControlNet, we can address this issue, extending the `TextToVideoZeroPipeline` with an additional ControlNet guidance image _sequence_.
Specifically, I believe this will involve creating a `ControlNet3DModel` that extends `ControlNetModel` to provide the proper down- and mid-block residuals to a new `TextToVideoZeroControlNetPipeline`.
The `TextToVideoZeroControlNetPipeline` will extend the `TextToVideoZeroPipeline` so it can be initialized with a `ControlNet3DModel`. During the forward pass we will add an additional parameter: a list of images (or a 3D tensor). This will be passed to the `ControlNet3DModel` to create the residuals for the 3D U-Net.
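The wiring described above can be sketched in framework-free toy code (all names, numbers, and shapes are hypothetical stand-ins, not the diffusers API): a ControlNet3D-like module returns per-frame residuals that get added into the video U-Net's own block outputs.

```python
# Toy stand-in: each guidance frame yields a "down" and a "mid" residual.
def controlnet3d(control_frames):
    return [{"down": f, "mid": 2 * f} for f in control_frames]

# Toy stand-in: the video U-Net adds the residuals into its own activations.
def unet3d(latents, residuals):
    return [l + r["down"] + r["mid"] for l, r in zip(latents, residuals)]

frames = [1, 2, 3]        # stand-in for the guidance image sequence
latents = [0, 0, 0]       # stand-in for per-frame latents
out = unet3d(latents, controlnet3d(frames))
print(out)  # [3, 6, 9]
```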
**Describe alternatives you've considered**
Alternatively, one can create special-purpose pipelines that use additional guidance image sequences. An example of this process is the "Follow Your Pose" approach: https://follow-your-pose.github.io/
However, extending a Stable Diffusion video pipeline with a `ControlNet3DModel` opens the door to numerous other possible extensions without the need to make a new pipeline. For example:
- A sketch temporal ControlNet would let users turn sketches to colored animations
- An optical flow ControlNet could transfer movement in a way similar to EBSynth
- A pose ControlNet could precisely control the movement of characters in a video.
**Additional context**
This idea is part of the JAX/ControlNet sprint for the "Stable Diffusion for Animation" project. I was hoping that our work could lead to a PR that is acceptable for the repo, so I wanted to get a conversation going on the approach.
Tagging the maestro @takuma104 to get your thoughts as well.
| https://github.com/huggingface/diffusers/issues/3075 | closed | [
"question"
] | 2023-04-12T17:51:35Z | 2023-04-13T16:21:28Z | null | jfischoff |
huggingface/setfit | 352 | False Positives | I built a model using a multi-label dataset, but I see that I am getting many false-positive outputs during inference.
For eg:
FIRST NOTICE OF LOSS SENT TO AGENT'S CUSTOMER ACTIVITY ---> This is predicted as 'Total Loss' (Total Loss is one of my labels given fed through the dataset).
I see that there is a word 'Loss' present in the dataset but it is not supposed to be predicted as 'Total Loss'.
There are so many absurd outputs as well.
Here is the pre-trained model which I am using for fine-tuning :
Hyper-parameters : num_iterations = 30, batch_size = 16, num_epochs = 1
What went wrong?
| https://github.com/huggingface/setfit/issues/352 | closed | [
"question"
] | 2023-04-12T17:42:44Z | 2023-05-18T16:19:27Z | null | cassinthangam4996 |
huggingface/setfit | 349 | Hard Negative Mining vs random sampling | Has anyone tried doing hard negative mining when generating the sentence pairs as opposed to random sampling? @tomaarsen - is random sampling the default? | https://github.com/huggingface/setfit/issues/349 | open | [
"question"
] | 2023-04-12T09:24:53Z | 2023-04-15T16:04:27Z | null | vahuja4 |
pytorch/android-demo-app | 311 | What is MemoryFormat.CHANNELS_LAST? | And What is BitmaptoFloat32Tensor?
Thx. | https://github.com/pytorch/android-demo-app/issues/311 | open | [] | 2023-04-12T02:03:49Z | 2023-04-12T02:03:49Z | null | NeighborhoodCoding |
huggingface/tokenizers | 1,216 | What is the correct way to remove a token from the vocabulary? | I see that it works when I do something like this
```
del tokenizer.get_vocab()[unwanted_token]
```
~~And then it will work when running encode~~, but when I save the model the unwanted tokens remain in the json. Is there a blessed way to remove unwanted tokens?
EDIT:
Now that I have tried again, I see that it does not actually work.
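One possible approach, sketched on a synthetic stand-in for `tokenizer.json` (the real file has more fields, and merges referencing the token would also need cleanup): edit the saved JSON directly, since `get_vocab()` appears to return a copy, which would explain why `del` on it has no lasting effect.

```python
# Synthetic stand-in for the relevant part of a saved tokenizer.json.
tok = {"model": {"vocab": {"hello": 0, "unwanted": 1, "world": 2}}}

unwanted = "unwanted"
vocab = tok["model"]["vocab"]
removed_id = vocab.pop(unwanted)
# Reindex ids above the removed one so ids stay contiguous.
for token, idx in vocab.items():
    if idx > removed_id:
        vocab[token] = idx - 1

print(vocab)  # {'hello': 0, 'world': 1}
# json.dump(tok, open(path, "w")) would then persist the change (not run here).
```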
| https://github.com/huggingface/tokenizers/issues/1216 | closed | [
"Stale"
] | 2023-04-11T15:40:48Z | 2024-02-10T01:47:15Z | null | tvallotton |
huggingface/optimum | 964 | onnx conversion for custom trained trocr base stage1 | ### Feature request
I have trained the TrOCR base stage 1 model on my custom dataset of multiline images. The trained model gives good results when loaded in the default torch format. But after converting the model to ONNX, it detects only the first line, or part of the first line. I used https://github.com/huggingface/transformers/issues/19811#issuecomment-1303072202
for converting the model to ONNX. Can you kindly provide insight into what I should do differently in order to get the desired multiline output from the ONNX-converted model?
### Motivation
How to update the ONNX conversion of TrOCR so that it supports a TrOCR model trained on multiline data.
### Your contribution
Trained a TrOCR base stage 1 model on a multiline dataset. | https://github.com/huggingface/optimum/issues/964 | open | [
"onnx"
] | 2023-04-11T10:10:23Z | 2023-10-16T14:20:42Z | 1 | Mir-Umar |
huggingface/datasets | 5,727 | load_dataset fails with FileNotFound error on Windows | ### Describe the bug
Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps:
(1) create conda environment
(2) activate environment
(3) install with: ``conda` install -c huggingface -c conda-forge datasets`
Then
```
from datasets import load_dataset
# this or any other example from the website fails with the FileNotFoundError
glue = load_dataset("glue", "ax")
```
**Below I have pasted the error omitting the full path**:
```
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified:
'C:\\Users\\...\\.cache\\huggingface'
```
### Steps to reproduce the bug
On Windows 10
1) create a minimal conda environment (with just Python)
(2) activate environment
(3) install datasets with: `conda install -c huggingface -c conda-forge datasets`
(4) import load_dataset and follow example usage from any dataset card.
### Expected behavior
The expected behavior is to load the file into the Python session running on my machine without error.
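A hedged workaround sketch, based only on the missing-path error above: create the cache directory (or point `HF_HOME` somewhere writable) before importing `datasets`. Whether this resolves the underlying issue is untested:

```python
import os
import pathlib

# The traceback points at a missing ~\.cache\huggingface directory;
# create it and make HF_HOME point there before importing datasets.
cache_dir = pathlib.Path.home() / ".cache" / "huggingface"
cache_dir.mkdir(parents=True, exist_ok=True)
os.environ.setdefault("HF_HOME", str(cache_dir))
print(cache_dir.is_dir())  # True
```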
### Environment info
```
# Name Version Build Channel
aiohttp 3.8.4 py311ha68e1ae_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attrs 22.2.0 pyh71513ae_0 conda-forge
aws-c-auth 0.6.26 h1262f0c_1 conda-forge
aws-c-cal 0.5.21 h7cda486_2 conda-forge
aws-c-common 0.8.14 hcfcfb64_0 conda-forge
aws-c-compression 0.2.16 h8a79959_5 conda-forge
aws-c-event-stream 0.2.20 h5f78564_4 conda-forge
aws-c-http 0.7.6 h2545be9_0 conda-forge
aws-c-io 0.13.19 h0d2781e_3 conda-forge
aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge
aws-c-s3 0.2.7 h8113e7b_1 conda-forge
aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge
aws-checksums 0.1.14 h8a79959_5 conda-forge
aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge
aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge
brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
c-ares 1.19.0 h2bbff1b_0
ca-certificates 2023.01.10 haa95532_0
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311h7d9ee11_3 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cryptography 40.0.1 py311h28e9c30_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
datasets 2.11.0 py_0 huggingface
dill 0.3.6 pyhd8ed1ab_1 conda-forge
filelock 3.11.0 pyhd8ed1ab_0 conda-forge
frozenlist 1.3.3 py311ha68e1ae_0 conda-forge
fsspec 2023.4.0 pyh1a96a4e_0 conda-forge
gflags 2.2.2 ha925a31_1004 conda-forge
glog 0.6.0 h4797de2_0 conda-forge
huggingface_hub 0.13.4 py_0 huggingface
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.3.0 pyha770c72_0 conda-forge
importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge
intel-openmp 2023.0.0 h57928b3_25922 conda-forge
krb5 1.20.1 heb0366b_0 conda-forge
libabseil 20230125.0 cxx17_h63175ca_1 conda-forge
libarrow 11.0.0 h04c43f8_13_cpu conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge
libbrotlidec 1.0.9 hcfcfb64_8 conda-forge
libbrotlienc 1.0.9 hcfcfb64_8 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libcrc32c 1.1.2 h0e60522_0 | https://github.com/huggingface/datasets/issues/5727 | closed | [] | 2023-04-10T23:21:12Z | 2023-07-21T14:08:20Z | 4 | joelkowalewski |
pytorch/examples | 1,131 | New examples requested | Hi everyone, @svekars and I are looking to increase the number of new contributions to pytorch/examples, this might be especially interesting to you if you've never contributed to an open source project before.
At a high level, we're looking for new interesting models.
So here's what you need to do
1. Check out our contributing guide: https://github.com/pytorch/examples/blob/main/CONTRIBUTING.md
2. Pick a model idea - I've listed a few below, comment on this task so others know you're working on it
3. Implement your model from scratch using PyTorch; no external dependencies are allowed, to keep the examples as educational as possible
Your implementation needs to include
1. A folder with your code which needs to define
   1. Your model architecture
   2. Training code
   3. Evaluation code
   4. An argparser
2. Make sure your script runs in CI so it doesn't break in the future by adding it to `run_python_examples.sh`
3. A README describing any usage instructions
As an example this recent contribution by @sudomaze is a good one to follow https://github.com/pytorch/examples/pull/1003/files
Here are some model ideas
## Model ideas
* [ ] Controlnet - Guided diffusion
* [ ] NERF
* [x] Graph Neural Network @JoseLuisC99
* [ ] Diffusion Model, stable diffusion or any variant of the architecture you like
* [x] Vision Transformer
* [ ] Video model
* [ ] Toolformer
* [ ] Differentiable physics
* [ ] Flownet
* [ ] Dreamfusion or any 3d model
* [ ] Language Translation
* [ ] Swin transformer
But I'm quite open to anything we don't have that's cool
| https://github.com/pytorch/examples/issues/1131 | closed | [
"good first issue"
] | 2023-04-10T19:49:49Z | 2025-07-05T19:17:22Z | 58 | msaroufim |
pytorch/serve | 2,224 | How to prevent torchserve unloading my models in case of inactivity? | ### 📚 The doc issue
In my experience, even though I wasn't able to find it in the documentation, TorchServe unloads a model after some period of inactivity. When the inference API for that model is next invoked, it loads the model into memory again, which increases total inference time.
Can I control that behavior and set appropriate inactivity time?
Or can I just disable that option at all, and have all my models always loaded in memory?
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/serve/issues/2224 | open | [
"triaged",
"sagemaker"
] | 2023-04-10T12:32:26Z | 2023-05-08T21:51:39Z | null | petrovicu |
huggingface/datasets | 5,725 | How to limit the number of examples in dataset, for testing? | ### Describe the bug
I am using this command:
`data = load_dataset("json", data_files=data_path)`
However, I want to add a parameter to limit the number of loaded examples to 10 for development purposes, but I can't find such a parameter.
### Steps to reproduce the bug
In the description.
### Expected behavior
To be able to limit the number of examples
### Environment info
Nothing special | https://github.com/huggingface/datasets/issues/5725 | closed | [] | 2023-04-10T08:41:43Z | 2023-04-21T06:16:24Z | 3 | ndvbd |
huggingface/transformers.js | 75 | [Question] WavLM support | This is a really good project. I was wondering if WavLM is supported in the project, since I want to run a voice conversion model in the browser, and also whether HiFi-GAN is supported for speech synthesis.
| https://github.com/huggingface/transformers.js/issues/75 | closed | [
"question"
] | 2023-04-08T09:36:03Z | 2023-09-08T13:17:07Z | null | Ashraf-Ali-aa |
huggingface/datasets | 5,719 | Array2D feature creates a list of list instead of a numpy array | ### Describe the bug
I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of numpy array type. I think this should not be the expected behavior, especially since I feed a numpy array as input to the data creation function. Why is it converting my array into a list?
Also, if I change the first dimension of the `Array2D` shape to None, it returns an array correctly.
### Steps to reproduce the bug
Run this code:
```py
from datasets import Dataset, Features, Array2D
import numpy as np
# you have to change the first dimension of the shape to None to make it return an array
features = Features(dict(seq=Array2D((2,2), 'float32')))
ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)
a = ds[0]['seq']
print(a)
print(type(a))
```
The following will be printed in stdout:
```
[[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]]
<class 'list'>
```
### Expected behavior
Each indexed item should consistently be a list or a numpy array. Currently, `Array2D((2,2), ...)` yields a list but `Array2D((None,2), ...)` yields an array.
### Environment info
- `datasets` version: 2.11.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
| https://github.com/huggingface/datasets/issues/5719 | closed | [] | 2023-04-07T21:04:08Z | 2023-04-20T15:34:41Z | 4 | offchan42 |
huggingface/datasets | 5,716 | Handle empty audio | Some audio paths exist, but the files are empty, and an error is raised when reading them. How can I use the filter function to skip empty audio paths?
When an audio file is empty, resampling breaks:
`array, sampling_rate = sf.read(f)` followed by `array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)` | https://github.com/huggingface/datasets/issues/5716 | closed | [] | 2023-04-07T09:51:40Z | 2023-09-27T17:47:08Z | 2 | zyb8543d
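For the empty-audio question above, a hedged sketch: drop rows whose file is missing or zero bytes before any decoding/resampling. The column name `audio_path` is an assumption about the dataset's schema.

```python
import os

def has_audio(example):
    # True only if the file exists and is non-empty on disk.
    path = example["audio_path"]
    return os.path.isfile(path) and os.path.getsize(path) > 0

# dataset = dataset.filter(has_audio)  # datasets.Dataset.filter takes such a predicate
```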
huggingface/setfit | 344 | How do I handle multiple text columns? | The text is not in one column; there are many columns. For example, the text columns are "sex", "title", "weather". What should I do? | https://github.com/huggingface/setfit/issues/344 | closed | [
"question"
] | 2023-04-07T01:51:21Z | 2023-04-10T00:45:38Z | null | freecui |
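For the multiple-text-columns question above, a hedged sketch: SetFit expects a single text column, so one option is to merge the fields into one string with a `map` step (the column names come from the question; adapt to the real schema).

```python
def combine(example):
    # Concatenate the per-row fields into one "text" column.
    return {"text": f"{example['sex']} {example['title']} {example['weather']}"}

# dataset = dataset.map(combine)   # then train SetFit on the "text" column
row = combine({"sex": "F", "title": "Dr", "weather": "sunny"})
print(row["text"])  # -> F Dr sunny
```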
huggingface/transformers.js | 71 | [Question] How to run the test suite | Hi @xenova,
I want to work on adding new features, but when I try to run the tests of the project I get this error:
```
Error: File not found. Could not locate "/Users/yonatanchelouche/Desktop/passive-project/transformers.js/models/onnx/quantized/distilbert-base-uncased-finetuned-sst-2-english/sequence-classification/tokenizer.json".
at getModelFile (/Users/yonatanchelouche/Desktop/passive-project/transformers.js/src/utils.js:235:23)
at async fetchJSON (/Users/yonatanchelouche/Desktop/passive-project/transformers.js/src/utils.js:288:18)
at async Promise.all (index 0)
at async Function.from_pretrained (/Users/yonatanchelouche/Desktop/passive-project/transformers.js/src/tokenizers.js:2571:48)
at async Promise.all (index 0)
at async pipeline (/Users/yonatanchelouche/Desktop/passive-project/transformers.js/src/pipelines.js:1308:17)
at async text_classification (/Users/yonatanchelouche/Desktop/passive-project/transformers.js/tests/index.js:90:22)
at async /Users/yonatanchelouche/Desktop/passive-project/transformers.js/tests/index.js:897:25
```
I guess it is because the models are missing from the models dir. Is there a programmatic way to download them from the lib?
By the way, I was thinking about adding a CI job on PRs to run the tests, and perhaps adding Jest as the test runner. What do you think about that?
| https://github.com/huggingface/transformers.js/issues/71 | closed | [
"question"
] | 2023-04-06T17:03:09Z | 2023-05-15T17:38:46Z | null | chelouche9 |
pytorch/text | 2,145 | Loading vectors into a GPU | ## 🚀 Feature
Is there any way to load vectors onto a specific device with the torchtext.vocab.Vectors class?
| https://github.com/pytorch/text/issues/2145 | closed | [] | 2023-04-06T15:38:38Z | 2023-04-14T18:04:46Z | 4 | saeeddhqan |
pytorch/functorch | 1,123 | Can I call torch.utils.data.WeightedRandomSampler inside vmap? | Dear Experts,
I am trying to accelerate a series of weighted sampling (i.e., transition using a stochastic matrix) using vmap.
Basically, I am trying to accelerate the code from here: https://discuss.pytorch.org/t/best-way-to-implement-series-of-weighted-random-sampling-for-transition-w-stochastic-matrix/176713 using vmap instead of a for loop, by calling torch.utils.data.WeightedRandomSampler() inside vmap (the link is my question asking for any alternative way to accelerate this in the general forum).
However, I get an error and I am not sure if this is possible.
Below is my code:
```
import torch
from torch import nn
from functorch import vmap
N = 10
M = 20
L = 5
P = torch.rand([N, M])
x = torch.randint(0, N, [L])
P_new = torch.stack([P[x[i]] for i in range(L)])
f = lambda p: torch.tensor(list(torch.utils.data.WeightedRandomSampler(p, 1))[0])
y = vmap(f, randomness='different')(P_new)
print(y)
```
Ideally, I want to sample L elements, each using the distribution P[x[i]] for i in range(L).
Below is the error I get:
```
Traceback (most recent call last):
File "xxx/test.py", line 17, in <module>
y = vmap(f, randomness='different')(P_new)
File "xxx/functorch/_src/vmap.py", line 361, in wrapped
return _flat_vmap(
File "xxx/functorch/_src/vmap.py", line 487, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File xxx/test.py", line 16, in <lambda>
f = lambda p: torch.tensor(list(torch.utils.data.WeightedRandomSampler(p, 1))[0])
File "xxxx/site-packages/torch/utils/data/sampler.py", line 203, in __iter__
yield from iter(rand_tensor.tolist())
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
I wonder if something like this is fundamentally impossible, or if there is a way around my error.
Any help would be highly appreciated!
Thank you | https://github.com/pytorch/functorch/issues/1123 | closed | [] | 2023-04-04T23:47:08Z | 2023-04-04T23:55:33Z | 1 | kwmaeng91 |
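A hedged alternative for the sampling question above, sidestepping vmap entirely: `torch.multinomial` already samples row-wise from a 2-D weight matrix, so "one weighted draw per row" needs no vmap at all.

```python
import torch

N, M, L = 10, 20, 5
P = torch.rand([N, M])
x = torch.randint(0, N, [L])
P_new = P[x]                                            # (L, M) rows of weights
y = torch.multinomial(P_new, num_samples=1).squeeze(1)  # one sampled index per row
print(y.shape)  # -> torch.Size([5])
```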
huggingface/transformers.js | 69 | How to convert bloomz model | While converting the [bloomz](https://huggingface.co/bigscience/bloomz-7b1l) model, I am getting the 'invalid syntax' error. Is conversion limited to only predefined model types?
If not, please provide the syntax for converting the above model with quantization.
(I will run the inference in Node.js and not in the browser, so memory will not be an issue during inference.)
| https://github.com/huggingface/transformers.js/issues/69 | closed | [
"question"
] | 2023-04-04T14:51:16Z | 2023-04-09T02:01:49Z | null | bil-ash |
huggingface/transformers.js | 68 | [Feature request] whisper word level timestamps | I am new to both transformers.js and whisper, so I am sorry for a lame question in advance.
I am trying to make the [return_timestamps](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline.__call__) parameter work...
I managed to customize [script.js](https://github.com/xenova/transformers.js/blob/main/assets/js/scripts.js#L447) from the [transformers.js demo](https://xenova.github.io/transformers.js/) locally and added `data.generation.return_timestamps = "char";` around line ~447 inside the GENERATE_BUTTON click handler in order to pass the parameter. With that change in place, I see timestamps appear as chunks (the `result` var in [worker.js](https://github.com/xenova/transformers.js/blob/main/assets/js/worker.js#L40)):
```
{
"text": " And so my fellow Americans ask not what your country can do for you ask what you can do for your country.",
"chunks": [
{
"timestamp": [0,8],
"text": " And so my fellow Americans ask not what your country can do for you"
},
{
"timestamp": [8,11],
"text": " ask what you can do for your country."
}
]
}
```
however, the chunks are not granular at the "char level" as expected from the [return_timestamps](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline.__call__) doc.
I am looking for ideas on how to achieve char/word-level timestamp granularity with transformers.js and whisper. Do some models/tools need to be updated and/or rebuilt?
"enhancement",
"question"
] | 2023-04-04T10:57:05Z | 2023-07-09T22:48:31Z | null | jozefchutka |
huggingface/datasets | 5,705 | Getting next item from IterableDataset took forever. | ### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda example: example['text'].startswith('Ar'))
print(next(iter(dataset)))
```
However, this function never finishes. I waited ~10 minutes; the function was still running, so I killed the process. I'm now using `line_profiler` to profile how long it takes to return one item. I'll be patient and wait for as long as it needs.
I suspect the filter operation is the reason why it took so long. Can I get some possible reasons behind this?
### Steps to reproduce the bug
Unfortunately without my data files, there is no way to reproduce this bug.
### Expected behavior
With `IterableDataset`, I expect the first item to be returned instantly.
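A hedged illustration of the likely cause (plain generators, no datasets dependency): a streamed filter scans examples one by one until the predicate first matches, so on a 500 GB remote dataset, `next(iter(...))` may have to download and decode a lot of data before the first match.

```python
def stream():
    # Stand-in for a streamed dataset: examples arrive lazily, one at a time.
    for i in range(1_000_000):
        yield {"text": f"row {i}"}

# Like IterableDataset.filter, this is lazy: nothing is scanned until iteration.
filtered = (ex for ex in stream() if ex["text"].startswith("row 999"))
first = next(iter(filtered))
print(first)  # the generator silently scanned 999 non-matching rows first
```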
### Environment info
- datasets version: 2.11.0
- python: 3.7.12 | https://github.com/huggingface/datasets/issues/5705 | closed | [] | 2023-04-04T09:16:17Z | 2023-04-05T23:35:41Z | 2 | HongtaoYang |
huggingface/optimum | 952 | Enable AMP for BetterTransformer | ### Feature request
Allow for the `BetterTransformer` models to be inferenced with AMP.
### Motivation
Models transformed with `BetterTransformer` raise an error when used with AMP:
`bettertransformers.models.base`
```python
...
def forward_checker(self, *args, **kwargs):
if torch.is_autocast_enabled() or torch.is_autocast_cpu_enabled():
raise ValueError("Autocast is not supported for `BetterTransformer` integration.")
if self.training and not self.is_decoder:
raise ValueError(
"Training is not supported for `BetterTransformer` integration.",
" Please use `model.eval()` before running the model.",
)
...
```
Why is that? I tried setting `torch.is_autocast_enabled` to `lambda: False` and everything works just fine at least for `XLMRobertaModel`:
```python
>>> import torch
>>> from transformers import AutoModel
>>> from optimum.bettertransformer import BetterTransformer
>>> m = AutoModel.from_pretrained('xlm-roberta-base')
>>> BetterTransformer.transform(m, keep_original_model=False)
XLMRobertaModel(
(embeddings): XLMRobertaEmbeddings(
(word_embeddings): Embedding(250002, 768, padding_idx=1)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): XLMRobertaEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayerBetterTransformer()
)
)
(pooler): XLMRobertaPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
>>> with torch.amp.autocast('cuda'):
... m(**{name: t.to('cuda') for name, t in m.dummy_inputs.items()})
...
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ <stdin>:2 in <module> │
│ │
│ /home/viktor-sch/Clones/talisman-ie/venv/lib/python3.10/site-packages/torch/nn/modules/module.py │
│ :1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /home/viktor-sch/Clones/talisman-ie/venv/lib/python3.10/site-packages/transformers/models/xlm_ro │
│ berta/modeling_xlm_roberta.py:854 in forward │
│ │
│ 851 │ │ │ inputs_embeds=inputs_embeds, │
│ 852 │ │ │ past_key_values_length=past_key_values_length, │
│ 853 │ │ ) │
│ ❱ 854 │ │ encoder_outputs = self.encoder( │
│ 855 │ │ │ embedding_output, │
│ 856 │ │ │ attention_mask=extended_attention_mask, │
│ 857 │ │ │ head_mask=head_mask, │
│ │
│ /home/viktor-sch/Clones/talisman-ie/venv/lib/python3.10/site-packages/torch/nn/modules/module.py │
│ :1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): | https://github.com/huggingface/optimum/issues/952 | closed | [] | 2023-04-04T09:14:00Z | 2023-07-26T17:08:42Z | 6 | viktor-shcherb |
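A hedged sketch formalizing the monkey-patch from the issue above: temporarily report autocast as disabled so `forward_checker` passes. This bypasses a library safety check (and `torch.is_autocast_cpu_enabled` may need the same treatment), so treat it as an experiment, not a supported fix.

```python
import contextlib
import torch

@contextlib.contextmanager
def pretend_no_autocast():
    # Swap the module-level function out, restore it on exit.
    orig = torch.is_autocast_enabled
    torch.is_autocast_enabled = lambda *args, **kwargs: False
    try:
        yield
    finally:
        torch.is_autocast_enabled = orig

# Hypothetical usage (model/inputs are placeholders):
# with torch.amp.autocast("cuda"), pretend_no_autocast():
#     out = model(**inputs)
```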
huggingface/controlnet_aux | 18 | When using openpose, what is the format of the input image? RGB format, or BGR format? | 

I saw that the image in BGR format is used as input in the open_pose/body.py file, but the huggingface demo uses a BGR format image. What is the impact of this? | https://github.com/huggingface/controlnet_aux/issues/18 | open | [] | 2023-04-04T03:58:38Z | 2023-04-04T11:23:33Z | null | ZihaoW123 |
huggingface/datasets | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | ### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it's possible, or how one would go about defining a `datasets.Sequence` element in `datasets.Features` that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below:
```json
[
  [
    {"text":"老妇人","idxes":[0,1,2]}, null, {"text":"跪","idxes":[3]}, null, null, null, null,
    {"text":"在那坑里","idxes":[4,5,6,7]}, null, null, null, null, null, null, null, null, null, null
  ],
  [
    {"text":"那些水","idxes":[13,14,15]}, null, {"text":"舀","idxes":[11]}, null, null, null, null, null,
    {"text":"在那坑里","idxes":[4,5,6,7]}, null, {"text":"出","idxes":[12]}, null, null, null, null, null, null, null
  ],
  [
    {"text":"水","idxes":[38]},
    null,
    {"text":"舀","idxes":[40]},
    "假", // note: this is just a standalone string
    null, null, null, {"text":"坑里","idxes":[35,36]}, null, null, null, null, null, null, null, null, null, null
  ]
]
```
### Motivation
I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features).
```json
{"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]}
```
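A hedged workaround sketch (my suggestion, not an official API): Arrow-backed features cannot mix dict, str, and None inside one Sequence, so one option is to JSON-encode every slot as a string (declared as `Sequence(Value("string"))` in the Features) and decode after loading. None round-trips as the JSON literal `null`.

```python
import json

# One 18-slot row, abbreviated to three slots for the sketch.
row = [{"text": "老妇人", "idxes": [0, 1, 2]}, None, "假"]
encoded = [json.dumps(e, ensure_ascii=False) for e in row]   # all plain strings
decoded = [json.loads(s) for s in encoded]                   # types restored
print(decoded[1], decoded[2])  # -> None 假
```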
### Your contribution
I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 . | https://github.com/huggingface/datasets/issues/5702 | closed | [
"enhancement"
] | 2023-04-04T03:20:43Z | 2023-04-05T14:15:18Z | 4 | gitforziio |
pytorch/text | 2,139 | torchtext.vocab.Vectors(..).__getitem__ does not work | ## ❓ Questions and Help
I loaded a model:
```python
vects = torchtext.vocab.Vectors('text5-emb.txt')
```
And when I want to know whether a vocab is in the dataset or not, I run this:
```python
if "the" in vects:
```
and the code hangs here. I waited for a long time but it does not do anything.
Then, I loaded the model and set unk_init to `lambda x: False`.
Now, I can use `vects['the']` to check whether the token exists or not.
But why doesn't `__getitem__` work?
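A hedged explanation sketch: Python's `in` operator, when a class defines neither `__contains__` nor `__iter__`, falls back to calling `__getitem__(0)`, `__getitem__(1)`, ... and only stops on an exception. `Vectors.__getitem__` routes every miss through `unk_init` and never raises, so `"the" in vects` loops forever. Membership should be checked on the `stoi` dict instead. `TinyVectors` below mimics that behavior:

```python
class TinyVectors:
    """Mimics torchtext.vocab.Vectors lookup: misses never raise."""
    def __init__(self, stoi):
        self.stoi = stoi
    def __getitem__(self, token):
        # Unknown tokens get an unk vector instead of a KeyError,
        # which is exactly why the `in` fallback never terminates.
        return [1.0] if token in self.stoi else [0.0]

v = TinyVectors({"the": 0})
# Don't write `"the" in v`; check the vocabulary dict directly:
print("the" in v.stoi, "xyz" in v.stoi)  # -> True False
```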
| https://github.com/pytorch/text/issues/2139 | closed | [] | 2023-04-03T17:54:45Z | 2023-04-04T13:52:53Z | 0 | saeeddhqan |
huggingface/dataset-viewer | 1,011 | Remove authentication by cookie? | Currently, to be able to return the contents for gated datasets, all the endpoints check the request credentials if needed. The accepted credentials are: HF token, HF cookie, or a JWT in `X-Api-Key`. See https://github.com/huggingface/datasets-server/blob/ecb861b5e8d728b80391f580e63c8d2cad63a1fc/services/api/src/api/authentication.py#L26
Should we remove the cookie authentication?
cc @coyotte508 @SBrandeis @XciD @rtrompier | https://github.com/huggingface/dataset-viewer/issues/1011 | closed | [
"question",
"P2"
] | 2023-04-03T12:12:56Z | 2024-03-13T09:48:38Z | null | severo |
huggingface/transformers.js | 63 | [Model request] Helsinki-NLP/opus-mt-ru-en (marian) | Sorry for this noob question, but can somebody give me a guideline for converting and using
https://huggingface.co/Helsinki-NLP/opus-mt-ru-en/tree/main
thank you | https://github.com/huggingface/transformers.js/issues/63 | closed | [
"enhancement",
"question"
] | 2023-03-31T09:18:28Z | 2023-08-20T08:00:38Z | null | eviltik |
huggingface/safetensors | 222 | Possibly unrelated, but asking: could there be a C++ version? | Hello, I want to ask 2 questions:
1. Will safetensors provide a C++ version? It looks more convenient than .pth or ONNX.
2. Is it possible to load safetensors into other inference libraries besides PyTorch, such as ONNX Runtime?
"Stale"
] | 2023-03-31T05:14:29Z | 2023-12-21T01:47:58Z | 5 | lucasjinreal |
huggingface/transformers.js | 62 | [Feature request] nodejs caching | Hi, thank you for your work.
I'm a Node.js user and I read that there is no model cache implementation right now, and that you are working on it.
Do you have an idea of when you will be able to push a release with a cache implementation?
Just asking because I was about to implement it on my side.
"enhancement",
"question"
] | 2023-03-31T04:27:57Z | 2023-05-15T17:26:55Z | null | eviltik |
huggingface/dataset-viewer | 1,001 | Add total_rows in /rows response? | Should we add the number of rows in a split (eg. in field `total_rows`) in response to /rows?
It would help avoid sending a request to /size to get it.
It would also help fix a bad query.
eg: https://datasets-server.huggingface.co/rows?dataset=glue&config=ax&split=test&offset=50000&length=100 returns:
```json
{
"features": [
...
],
"rows": []
}
```
We would have to know the number of rows to fix it. | https://github.com/huggingface/dataset-viewer/issues/1001 | closed | [
"question",
"improvement / optimization"
] | 2023-03-30T13:54:19Z | 2023-05-07T15:04:12Z | null | severo |
pytorch/xla | 4,837 | How to run XLA compilation thru MLIR | ## ❓ Questions and Help
Hi,
Is there a way to switch PyTorch->XLA to compilation through the MLIR chain (StableHLO/MHLO/LMHLO, etc.)? Or will it appear only after the switch to the openxla/xla repository? (I see such pull requests in the list, but according to the OpenXLA community meeting slides, these repositories should have the same contents.)
So far, using all the environment options I could find, I managed to get dumps only of the HLO (non-MLIR) IR, so I guess this non-MLIR path is used by default.
Thank you. | https://github.com/pytorch/xla/issues/4837 | closed | [] | 2023-03-30T13:16:28Z | 2023-05-22T19:32:41Z | null | MUR-83 |
huggingface/dataset-viewer | 999 | Use the huggingface_hub webhook server? | See https://github.com/huggingface/huggingface_hub/pull/1410
The/webhook endpoint could live in its pod with the huggingface_hub webhook server. Is it useful for our project? Feel free to comment. | https://github.com/huggingface/dataset-viewer/issues/999 | closed | [
"question",
"refactoring / architecture"
] | 2023-03-30T08:44:49Z | 2023-06-10T15:04:09Z | null | severo |
huggingface/datasets | 5,687 | Document to compress data files before uploading | In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload directly their data files, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.giattributes` file. Therefore, if they are too large, Git will fail to commit/upload them.
I think for those file extensions (.csv, .json, .jsonl, .txt), we should better recommend to **compress** their data files (using ZIP for example) before uploading them to the Hub.
- Compressed files are tracked by Git LFS in our default `.gitattributes` file
What do you think?
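A quick illustration of the suggestion above (file names are placeholders): since compressed files are covered by the default `.gitattributes` LFS rules, large CSV/JSON data files can be zipped before committing.

```python
import os
import zipfile

def compress_for_upload(src, dst):
    # Write a single data file into a deflate-compressed ZIP archive.
    with zipfile.ZipFile(dst, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(src, arcname=os.path.basename(src))

# compress_for_upload("train.csv", "train.zip")  # then commit train.zip
```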
CC: @stevhliu
See related issue:
- https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1 | https://github.com/huggingface/datasets/issues/5687 | closed | [
"documentation"
] | 2023-03-30T06:41:07Z | 2023-04-19T07:25:59Z | 3 | albertvillanova |
pytorch/xla | 4,831 | Increasing rendezvous timeout patience? | ## ❓ Questions and Help
Hi, this might be a basic question but how do I increase the timeout of `xm.rendezvous()`? I'm training a large model and due to the system we're training on saving can take >5 minutes which results in timeout errors such as
`2023-03-29 13:52:59 172.16.96.171 [1] RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Connection reset by peer (14)`
Sorry if I missed this in the documentation. I might have misinterpreted this error but it seems like a basic rendezvous timeout? Thanks! | https://github.com/pytorch/xla/issues/4831 | closed | [
"question",
"distributed"
] | 2023-03-29T18:38:42Z | 2025-05-05T13:20:41Z | null | bram-w |
huggingface/datasets | 5,685 | Broken Image render on the hub website | ### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to load a huge amount of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type

See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers), basically for some reason the first image has numerical bytes inside, not sure if that is okay, but the image render feature **doesn't work**
So the dataset is stored in the following way
```python
builder.download_and_prepare(output_dir=str(output_dir))
ds = builder.as_dataset(split="train")
# [NOTE] no idea how to push it from the builder folder
ds.push_to_hub(repo_id=repo_id)
builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id)
ds = builder.as_dataset(split="test")
ds.push_to_hub(repo_id=repo_id)
```
The build is this class
```python
class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
def _info(self):
features = datasets.Features(
{
"image_id": datasets.Value("int64"),
"image": datasets.Image(),
"width": datasets.Value("int32"),
"height": datasets.Value("int32"),
"objects": datasets.Sequence(
{
"id": datasets.Value("int64"),
"area": datasets.Value("int64"),
"bbox": datasets.Sequence(
datasets.Value("float32"), length=4
),
"category": datasets.ClassLabel(names=categories),
}
),
}
)
return datasets.DatasetInfo(
description=description,
features=features,
homepage=homepage,
license=license,
citation=citation,
)
def _split_generators(self, dl_manager):
archive = dl_manager.download(url)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"annotation_file_path": "train/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"annotation_file_path": "test/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"annotation_file_path": "valid/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
]
def _generate_examples(self, annotation_file_path, files):
def process_annot(annot, category_id_to_category):
return {
"id": annot["id"],
"area": annot["area"],
"bbox": annot["bbox"],
"category": category_id_to_category[annot["category_id"]],
}
image_id_to_image = {}
idx = 0
# This loop relies on the ordering of the files in the archive:
# Annotation files come first, then the images.
for path, f in files:
file_name = os.path.basename(path)
if annotation_file_path in path:
annotations = json.load(f)
category_id_to_category = {
category["id"]: category["name"]
for category in annotations["categories"]
}
print(category_id_to_category)
image_id_to_annotations = collections.defaultdict(list)
for annot in annotations["annotations"]:
image_id_to_annotations[annot["image_id"]].append(annot)
image_id_to_image = {
annot["file_name"]: annot for annot in annotations["images"]
}
elif file_name in image_id_to_image:
image = image_id_to_image[file_name]
objects = [
process_annot(annot, category_id_to_category)
for annot in image_id_to_annotations[image["id"]]
| https://github.com/huggingface/datasets/issues/5685 | closed | [] | 2023-03-29T15:25:30Z | 2023-03-30T07:54:25Z | 3 | FrancescoSaverioZuppichini |
huggingface/datasets | 5,681 | Add information about patterns search order to the doc about structuring repo | Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged loaders.
I have a déjà vu that it had already been discussed as some point but I don't remember.... | https://github.com/huggingface/datasets/issues/5681 | closed | [
"documentation"
] | 2023-03-29T11:44:49Z | 2023-04-03T18:31:11Z | 2 | polinaeterna |
pytorch/tutorials | 2,273 | [BUG] - Chatbot Tutorial - Unterminated string starting at: line 1 column 91 (char 90) | ### Add Link
https://pytorch.org/tutorials/beginner/chatbot_tutorial.html#chatbot-tutorial
### Describe the bug
I downloaded the zip and extracted it.
Now I got this error:
```
Processing corpus into lines and conversations...
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
[<ipython-input-14-0fd208236945>](https://localhost:8080/#) in <module>
11 # Load lines and conversations
12 print("\nProcessing corpus into lines and conversations...")
---> 13 lines, conversations = loadLinesAndConversations(os.path.join(corpus, "utterances.jsonl"))
14
15 # Write new csv file
3 frames
[/usr/lib/python3.9/json/decoder.py](https://localhost:8080/#) in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 91 (char 90)
```
On this line:
`lines, conversations = loadLinesAndConversations(os.path.join(corpus, "utterances.jsonl"))`
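For what it's worth, a defensive line-by-line JSON Lines reader is one way to narrow this down (a sketch, assuming `utterances.jsonl` really is one JSON object per line; an `Unterminated string` on line 1 can also simply mean the download was truncated or corrupted):

```python
import json

def load_jsonl(path):
    # Parse one JSON object per line, skipping malformed lines
    # instead of aborting on the first bad record.
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # e.g. a truncated line from an interrupted download
    return records
```

If this loads far fewer records than expected, re-downloading the corpus archive is probably the real fix.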
### Describe your environment
I just clicked on the Colab notebook button and ran it. | https://github.com/pytorch/tutorials/issues/2273 | open | [
"bug",
"question"
] | 2023-03-28T21:29:07Z | 2024-11-09T02:31:22Z | null | levalencia |
pytorch/audio | 3,206 | How to pretrain a wav2vec 2.0 model from scratch? | ### 🚀 The feature
There is an example for HuBERT training [here](https://github.com/pytorch/audio/tree/main/examples/self_supervised_learning), but there is no example about wav2vec 2.0.
### Motivation, pitch
I'm working on self-supervised learning, with or without a pretrained model, e.g. continuing to train a pretrained model like wav2vec 2.0 on another dataset.
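For reference, the wav2vec 2.0 pretraining objective from the original paper can be sketched in high-level pseudocode (this describes the algorithm itself, not any existing torchaudio API):

```
for each batch of raw waveforms:
    z        = feature_encoder(waveform)     # convolutional feature extractor
    q        = quantize(z)                   # discrete targets via Gumbel-softmax codebooks
    z_masked = mask_spans(z)                 # mask contiguous spans of time steps
    c        = transformer(z_masked)         # contextualized representations
    L_contrastive = contrastive(c, q)        # pick the true quantized target among distractors
    L_diversity   = diversity(q)             # encourage uniform codebook usage
    loss = L_contrastive + alpha * L_diversity
    update model parameters
```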
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/audio/issues/3206 | closed | [
"triaged"
] | 2023-03-27T13:26:38Z | 2023-04-23T09:57:51Z | null | kobenaxie |
pytorch/pytorch | 97,654 | Where is engine_layer_visualize.py? Has it been removed? | ### 🐛 Describe the bug
Where is engine_layer_visualize.py? Has it been removed from the repository?
### Versions
Where is engine_layer_visualize.py? Has it been removed? | https://github.com/pytorch/pytorch/issues/97654 | closed | [] | 2023-03-27T08:45:04Z | 2023-03-27T18:20:59Z | null | cqray1990 |
huggingface/datasets | 5,671 | How to use `load_dataset('glue', 'cola')` | ### Describe the bug
I'm new to HuggingFace Datasets, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
---------------------------------------------------------------------------
InvalidVersion Traceback (most recent call last)
File <timed exec>:1
(Omit because of long error message)
File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version)
195 match = self._regex.search(version)
196 if not match:
--> 197 raise InvalidVersion(f"Invalid version: '{version}'")
199 # Store the parsed out pieces of the version
200 self._version = _Version(
201 epoch=int(match.group("epoch")) if match.group("epoch") else 0,
202 release=tuple(int(i) for i in match.group("release").split(".")),
(...)
208 local=_parse_local_version(match.group("local")),
209 )
InvalidVersion: Invalid version: '0.10.1,<0.11'
```
- You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb)
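The root cause can be reproduced in isolation (a sketch, assuming the `packaging` library is installed): `'0.10.1,<0.11'` is a *specifier set*, not a single version, and `packaging` refuses to parse it as a `Version`; this is likely what one of the old pinned libraries is doing internally.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import InvalidVersion, Version

bad = "0.10.1,<0.11"  # the string from the traceback above

# Parsing a multi-constraint string as a single version fails.
try:
    Version(bad)
    parsed_as_version = True
except InvalidVersion:
    parsed_as_version = False
print(parsed_as_version)  # → False

# The same constraint is perfectly valid as a specifier set.
spec = SpecifierSet(">=0.10.1,<0.11")
print(Version("0.10.5") in spec)  # → True
```

So the practical fix is probably to upgrade `datasets` (recent releases avoid this code path) or, if the old pins must stay, to additionally pin an older `packaging` in `requirements.txt`.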
### Steps to reproduce the bug
- This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup)
1. cd `/DockerImage` and command `docker build . -t week0`
2. cd `/` and command `docker-compose up`
3. Run `experimental_notebooks/data_exploration.ipynb`
----
Just to be sure, I wrote down Dockerfile and requirements.txt
- Dockerfile
```Dockerfile
FROM python:3.8
WORKDIR /root/working
RUN apt-get update && \
apt-get install -y python3-dev python3-pip python3-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt
CMD ["bash"]
```
- requirements.txt
```txt
pytorch-lightning==1.2.10
datasets==1.6.2
transformers==4.5.1
scikit-learn==0.24.2
```
### Expected behavior
`load_dataset('glue', 'cola')` should run without raising an error.
### Environment info
Described above; see the Dockerfile and requirements.txt. | https://github.com/huggingface/datasets/issues/5671 | closed | [] | 2023-03-26T09:40:34Z | 2023-03-28T07:43:44Z | 2 | makinzm |
pytorch/data | 1,110 | `scan` support | ### 🚀 The feature
How does one create an `IterDataPipe` with [`scan`/`fold`](http://learnyouahaskell.com/higher-order-functions) semantics?
### Motivation, pitch
This is necessary for pipelines that require some kind of state, e.g. label encoding for an unknown number of labels.
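Stripped of the `IterDataPipe` machinery, the requested semantics fit in a few lines of plain Python (hypothetical helper names; an actual datapipe would need to wrap something like this):

```python
def scan(iterable, fn, init):
    # Yield the running state after folding each element into the accumulator.
    state = init
    for x in iterable:
        state = fn(state, x)
        yield state

def encode_labels(labels):
    # Stateful label encoding: each unseen label gets the next integer id.
    table = {}
    for lab in labels:
        if lab not in table:
            table[lab] = len(table)
        yield table[lab]

print(list(scan([1, 2, 3], lambda acc, x: acc + x, 0)))    # → [1, 3, 6]
print(list(encode_labels(["cat", "dog", "cat", "bird"])))  # → [0, 1, 0, 2]
```

The awkward part for a datapipe is exactly the mutable state (`state`, `table`), which a built-in `scan` datapipe would have to own and, ideally, checkpoint.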
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/meta-pytorch/data/issues/1110 | open | [
"good first issue",
"help wanted"
] | 2023-03-24T18:24:33Z | 2023-03-24T22:19:53Z | 3 | samuela |
pytorch/android-demo-app | 305 | I am doing object detection; the app works fine in the Android Studio emulator, but when run on a real device it shows the interface as expected and all the other buttons work, yet nothing happens when the detection button is pressed. What might be the issue? | https://github.com/pytorch/android-demo-app/issues/305 | open | [] | 2023-03-24T12:07:25Z | 2023-05-05T15:17:16Z | null | som1233 |
huggingface/optimum | 918 | Support for LLaMA | ### Feature request
Support for exporting LLaMA to ONNX.
### Motivation
It would be great to have this in order to apply optimizations and so on.
### Your contribution
I could try implementing support myself, but I would need help with the model config, even though it should be pretty similar to what is already done for GPT-J | https://github.com/huggingface/optimum/issues/918 | closed | [
"onnx"
] | 2023-03-23T21:07:30Z | 2023-04-17T14:32:37Z | 2 | nenkoru |