| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/swift-transformers | 72 | How to use BertTokenizer? | What is the best way to use the BertTokenizer? It's not a public file, so I'm not sure what's the best way to use it | https://github.com/huggingface/swift-transformers/issues/72 | closed | [] | 2024-03-16T18:13:36Z | 2024-03-22T10:29:54Z | null | jonathan-goodrx |
huggingface/chat-ui | 934 | What are the rules to create a chatPromptTemplate in .env.local? | We know that chatPromptTemplate for google/gemma-7b-it in .env.local is:
"chatPromptTemplate" : "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn... | https://github.com/huggingface/chat-ui/issues/934 | open | [
"question"
] | 2024-03-16T17:51:38Z | 2024-04-04T14:02:20Z | null | houghtonweihu |
huggingface/chat-ui | 933 | Why is the chat template of google/gemma-7b-it an invalid JSON format in .env.local? | I used the chat template from google/gemma-7b-it in .env.local, shown below:
"chat_template": "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_except... | https://github.com/huggingface/chat-ui/issues/933 | closed | [
"question"
] | 2024-03-15T20:34:11Z | 2024-03-18T13:24:55Z | null | houghtonweihu |
pytorch/xla | 6,760 | xla_model.RateTracker doesn't have a docstring and its behavior is subtle and potentially confusing. | ## 📚 Documentation
The `RateTracker` class in https://github.com/pytorch/xla/blob/fe3f23c62c747da30595cb9906d929b926aae6e4/torch_xla/core/xla_model.py doesn't have a docstring. This class is [used in lots of tests](https://github.com/search?q=repo%3Apytorch%2Fxla%20RateTracker&type=code), including [this one](http... | https://github.com/pytorch/xla/issues/6760 | closed | [
"usability"
] | 2024-03-15T17:23:46Z | 2025-04-18T13:52:01Z | 10 | ebreck |
pytorch/xla | 6,759 | Do I have to implement PjRtLoadedExecutable::GetHloModules when `XLA_STABLEHLO_COMPILE=1`? | ## ❓ Questions and Help
Hi, I'm from a hardware vendor and we want to implement a PJRT plugin for our DSA accelerator. We have our own MLIR-based compiler stack and it takes StableHLO as the input IR.
I'm new to PJRT, according to the [description](https://opensource.googleblog.com/2024/03/pjrt-plugin-to-acceler... | https://github.com/pytorch/xla/issues/6759 | open | [
"question",
"stablehlo"
] | 2024-03-15T10:59:36Z | 2025-04-18T13:58:24Z | null | Nullkooland |
huggingface/diffusers | 7,337 | How to convert multiple piped files into a single SafeTensor file? | How to convert multiple piped files into a single SafeTensor file?
For example, from this address: https://huggingface.co/Vargol/sdxl-lightning-4-steps/tree/main
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
base = "Vargol/sdxl-lightning-4-steps"... | https://github.com/huggingface/diffusers/issues/7337 | closed | [] | 2024-03-15T05:49:01Z | 2024-03-15T06:51:24Z | null | xxddccaa |
huggingface/transformers.js | 648 | `aggregation_strategy` in TokenClassificationPipeline | ### Question
Hello, the original Transformers library has an aggregation_strategy parameter to control whether tokens corresponding to the same entity are grouped together in the predictions. But in the transformers.js version I haven't found this parameter. Is it possible to provide this parameter? I want the prediction result a...
"question"
] | 2024-03-15T04:07:22Z | 2024-04-10T21:35:42Z | null | boat-p |
pytorch/vision | 8,317 | position, colour, and background colour of text labels in draw_bounding_boxes | ### 🚀 The feature
Text labels from `torchvision.utils.draw_bounding_boxes` are currently always inside the box with origin at the top left corner of the box, without a background colour, and the same colour as the bounding box itself. These are three things that would be nice to control.
### Motivation, pitch
... | https://github.com/pytorch/vision/issues/8317 | open | [] | 2024-03-14T13:50:17Z | 2025-04-17T13:28:39Z | 9 | carandraug |
huggingface/transformers.js | 646 | Library no longer maintained? | ### Question
1 year has passed since this PR has been ready for merge: [Support React Native #118](https://github.com/xenova/transformers.js/pull/118)
Should we do our own fork of xenova/transformers.js?
| https://github.com/huggingface/transformers.js/issues/646 | closed | [
"question"
] | 2024-03-14T10:37:33Z | 2024-06-10T15:32:41Z | null | pax-k |
pytorch/serve | 3,026 | Exception when using torchserve to deploy hugging face model: java.lang.InterruptedException: null | ### 🐛 Describe the bug
I followed the tutorial as https://github.com/pytorch/serve/tree/master/examples/Huggingface_Transformers
First,
```
python Download_Transformer_models.py
```
Then,
```
torch-model-archiver --model-name BERTSeqClassification --version 1.0 --serialized-file Transformer_model/pytorch_m... | https://github.com/pytorch/serve/issues/3026 | open | [
"help wanted",
"triaged",
"needs-reproduction"
] | 2024-03-14T07:56:57Z | 2024-03-19T16:44:51Z | 4 | yolk-pie-L |
pytorch/serve | 3,025 | torchserve output customization | Hi team
To process an inference request in torchserve, there are stages like initialize, preprocess, inference, and postprocess.
If I want to convert the output format from tensor to my custom textual format, where and how can I carry this out?
I am able to receive output in json format. But I need to make some cust... | https://github.com/pytorch/serve/issues/3025 | closed | [
"triaged"
] | 2024-03-13T20:37:39Z | 2024-03-14T21:05:42Z | 3 | advaitraut |
pytorch/executorch | 2,397 | How to perform inference and gathering accuracy metrics on executorch model | Hi, I am having trouble finding solid documentation that explains how to do the following with executorch (stable):
- Load in the exported .pte model
- Run inference with images
- Gather accuracy
I have applied quantization and other optimizations to the original model and exported it to .pte. I'd like to see the... | https://github.com/pytorch/executorch/issues/2397 | open | [
"module: doc",
"need-user-input",
"triaged"
] | 2024-03-13T14:40:01Z | 2025-02-04T20:21:12Z | null | mmingo848 |
huggingface/tokenizers | 1,469 | How to load a tokenizer trained by sentencepiece or tiktoken | Hi, does this lib support loading pre-trained tokenizers trained by other libs, like `sentencepiece` and `tiktoken`? Many models on the hf hub store tokenizers in these formats | https://github.com/huggingface/tokenizers/issues/1469 | closed | [
"Stale",
"planned"
] | 2024-03-13T10:22:00Z | 2024-04-30T10:15:32Z | null | jordane95 |
pytorch/pytorch | 121,798 | What is the matching numpy version? Cannot build from source | ### 🐛 Describe the bug
What is the matching numpy version? Cannot build from source.
After running `python3 setup.py develop`
I got this error:
```
error: no member named 'elsize' in '_PyArray_Descr'
```
### Versions
OS: macOS 14.4 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4... | https://github.com/pytorch/pytorch/issues/121798 | closed | [
"module: build",
"triaged",
"module: numpy"
] | 2024-03-13T09:52:46Z | 2024-03-14T07:10:15Z | null | yourmoonlight |
pytorch/functorch | 1,142 | Swapping 2 columns in a 2d tensor | I have a function ```tridiagonalization``` to tridiagonalize matrix (2d tensor), and I want to map it to batch. It involves a for loop and on each iteration a permutation of 2 columns and 2 rows inside it. I do not understand how to permute 2 columns without errors. So my code for rows works and looks as follows:
```
... | https://github.com/pytorch/functorch/issues/1142 | open | [] | 2024-03-13T09:33:29Z | 2024-03-13T09:33:29Z | 0 | Kreativshikkk |
huggingface/transformers.js | 644 | Contribution question: What's next after running scripts.convert? | ### Question
Hi @xenova I am trying to figure out how to contribute. I am new to huggingface. Just 2 months down the rabbit hole.
I ran
`python -m scripts.convert --quantize --model_id SeaLLMs/SeaLLM-7B-v2`
command
Here is a list of the files I got in the `models/SeaLLMs/SeaLLM-7B-v2` folder:
```
_model_layers.0_s... | https://github.com/huggingface/transformers.js/issues/644 | closed | [
"question"
] | 2024-03-13T08:51:37Z | 2024-04-11T02:33:04Z | null | pacozaa |
huggingface/making-games-with-ai-course | 11 | [UPDATE] Typo in Unit 1, "What is HF?" section. The word "Danse" should be "Dance" | # What do you want to improve?
There is a typo in Unit 1, "What is HF?" section.
The word "Danse" should be "Dance"
- Explain the typo/error or the part of the course you want to improve
There is a typo in Unit 1, "What is HF?" section.
The word "Danse" should be "Dance"
The English spelling doesn't seem t... | https://github.com/huggingface/making-games-with-ai-course/issues/11 | closed | [
"documentation"
] | 2024-03-12T17:12:20Z | 2024-04-18T07:18:12Z | null | PaulForest |
huggingface/transformers.js | 642 | RangeError: offset is out of bounds #601 | ### Question
```
class NsfwDetector {
constructor() {
this._threshold = 0.5;
this._nsfwLabels = [
'FEMALE_BREAST_EXPOSED',
'FEMALE_GENITALIA_EXPOSED',
'BUTTOCKS_EXPOSED',
'ANUS_EXPOSED',
'MALE_GENITALIA_EXPOSED',
'B... | https://github.com/huggingface/transformers.js/issues/642 | closed | [
"question"
] | 2024-03-12T16:47:58Z | 2024-03-13T05:57:23Z | null | vijishmadhavan |
huggingface/chat-ui | 926 | AWS credentials resolution for Sagemaker models | chat-ui is excellent, thanks for all your amazing work here!
I have been experimenting with a model in Sagemaker and am having some issues with the model endpoint configuration. It currently requires credentials to be provided explicitly. This does work, but the ergonomics are not great for our use cases:
- in deve... | https://github.com/huggingface/chat-ui/issues/926 | open | [] | 2024-03-12T16:24:57Z | 2024-03-13T10:30:52Z | 1 | nason |
huggingface/optimum | 1,754 | How to tell whether the backend of the ONNXRuntime accelerator is Intel OpenVINO. | According to the [wiki](https://onnxruntime.ai/docs/execution-providers/#summary-of-supported-execution-providers), OpenVINO is one of ONNXRuntime's execution providers.
I am deploying model on Intel Xeon Gold server, which supports AVX512 and which is compatible with Intel OpenVINO. How could I tell if the acce... | https://github.com/huggingface/optimum/issues/1754 | closed | [] | 2024-03-12T08:54:01Z | 2024-07-08T11:31:13Z | null | ghost |
huggingface/alignment-handbook | 134 | Is there a way to freeze some layers of a model? | Can we follow the normal way of:
```
for param in model.base_model.parameters():
param.requires_grad = False
``` | https://github.com/huggingface/alignment-handbook/issues/134 | open | [] | 2024-03-12T02:06:03Z | 2024-03-12T02:06:03Z | 0 | shamanez |
huggingface/diffusers | 7,283 | How to load a lora trained with Stable Cascade? | I finished a lora training based on Stable Cascade with onetrainer, but I cannot find a solution to load the lora in the diffusers pipeline. Any help will be appreciated. | https://github.com/huggingface/diffusers/issues/7283 | closed | [
"stale"
] | 2024-03-12T01:33:01Z | 2024-06-29T13:35:45Z | null | zengjie617789 |
huggingface/datasets | 6,729 | Support zipfiles that span multiple disks? | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
F... | https://github.com/huggingface/datasets/issues/6729 | closed | [
"enhancement",
"question"
] | 2024-03-11T21:07:41Z | 2024-06-26T05:08:59Z | null | severo |
huggingface/candle | 1,834 | How to increase model performance? | Hello all,
I have recently benchmarked completion token time, which is 30ms on an H100. However, with llama.cpp it is 10ms. Because [mistral.rs](https://github.com/EricLBuehler/mistral.rs) is built on Candle, it inherits this performance deficit. In #1680, @guoqingbao said that the Candle implementation is not suita... | https://github.com/huggingface/candle/issues/1834 | closed | [] | 2024-03-11T12:36:45Z | 2024-03-29T20:44:46Z | null | EricLBuehler |
huggingface/transformers.js | 638 | Using an EfficientNet Model - Looking for advice | ### Question
Discovered this project from the recent Syntax podcast episode (which was excellent) - it got my mind racing with different possibilities.
I got some of the example projects up and running without too much issue and naturally wanted to try something a little more outside the box, which of course has l... | https://github.com/huggingface/transformers.js/issues/638 | closed | [
"question"
] | 2024-03-11T01:31:49Z | 2024-03-11T17:42:31Z | null | ozzyonfire |
pytorch/xla | 6,710 | Does XLA use the Nvidia GPU's tensor cores? | ## ❓ Questions and Help
1. Does XLA use the Nvidia GPU's tensor cores?
2. Is Pytorch XLA only designed to accelerate neural network training or does it accelerate their inferencing as well? | https://github.com/pytorch/xla/issues/6710 | closed | [] | 2024-03-11T00:55:36Z | 2024-03-15T23:42:26Z | 2 | Demis6 |
huggingface/text-generation-inference | 1,636 | Need instructions for how to optimize for production serving (fast startup) | ### Feature request
I suggest better educating developers how to download and optimize the model at build time (in container or in a volume) so that the command `text-generation-launcher` serves as fast as possible.
### Motivation
By default, when running TGI using Docker, the container downloads the model on the fl... | https://github.com/huggingface/text-generation-inference/issues/1636 | closed | [
"Stale"
] | 2024-03-10T22:17:53Z | 2024-04-15T02:49:03Z | null | steren |
pytorch/tutorials | 2,797 | Contradiction in `save_for_backward`, what is permitted to be saved | https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
"ctx is a context object that can be used to stash information for backward computation. You can **cache arbitrary objects** for use in the backward pass using the ctx.save_for_backward method."
https://pytorch.org/docs/stab... | https://github.com/pytorch/tutorials/issues/2797 | closed | [
"core",
"medium",
"docathon-h1-2025"
] | 2024-03-10T19:40:16Z | 2025-06-04T21:11:21Z | null | ad8e |
huggingface/optimum | 1,752 | Documentation for exporting openai/whisper-large-v3 to ONNX | ### Feature request
Hello, I am exporting the [OpenAI Whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to ONNX and see it exports several files, most importantly in this case encoder (encoder_model.onnx & encoder_model.onnx.data) and decoder (decoder_model.onnx, decoder_model.onnx.data, decoder_with...
"feature-request",
"onnx"
] | 2024-03-10T05:24:36Z | 2024-10-09T09:18:27Z | 10 | mmingo848 |
huggingface/transformers | 29,564 | How to add new special tokens | ### System Info
- `transformers` version: 4.38.0
- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0 (False)
- Tensorf... | https://github.com/huggingface/transformers/issues/29564 | closed | [] | 2024-03-09T22:56:44Z | 2024-04-17T08:03:43Z | null | lordsoffallen |
pytorch/vision | 8,305 | aarch64 build for AWS Linux - Failed to load image Python extension | ### 🐛 Describe the bug
Built Torch 2.1.2 and TorchVision 0.16.2 from source and running into the following problem:
/home/ec2-user/conda/envs/textgen/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/ec2-user/conda/envs/textgen/lib/python3.10/site... | https://github.com/pytorch/vision/issues/8305 | open | [] | 2024-03-09T20:13:46Z | 2024-03-12T18:53:04Z | 6 | elkay |
huggingface/datasets | 6,726 | Profiling for HF Filesystem shows there are easy performance gains to be made | ### Describe the bug
# Let's make it faster
First, some evidence...

Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106... | https://github.com/huggingface/datasets/issues/6726 | open | [] | 2024-03-09T07:08:45Z | 2024-03-09T07:11:08Z | 2 | awgr |
huggingface/alignment-handbook | 133 | Early Stopping Issue when used with ConstantLengthDataset | Hello
I modified the code to include the Constant Length Dataset and it's early stopping at around 15% of the training. This issue doesn't occur when not used with the normal code given. Is there an issue with constant length dataset? I used it with SFTTrainer. | https://github.com/huggingface/alignment-handbook/issues/133 | open | [] | 2024-03-08T23:08:08Z | 2024-03-08T23:08:08Z | 0 | sankydesai |
pytorch/serve | 3,008 | very high QueueTime | Hi, I am seeing a very high queue time in my torchserve setup.
If I understand correctly, `QueueTime.ms:19428` means this particular request had to wait 19 sec for processing,
while the QueueTime just before that request was `QueueTime.ms:0`, so why the sudden 18 sec delay?
If I am wrong then what does this Qu... | https://github.com/pytorch/serve/issues/3008 | closed | [] | 2024-03-08T14:52:09Z | 2024-03-09T17:12:37Z | 0 | PushpakBhoge512 |
huggingface/transformers.js | 635 | Failed to process file. and Failed to upload. | ### Question
I am hosting Supabase on Docker in Ubuntu, and I am facing file upload failures on the chatbot-ui. The error messages displayed are "Failed to process file" and "Failed to upload." The console output error messages are as follows:
- POST https://chat.example.com/api/retrieval/process 500 (Internal Serv... | https://github.com/huggingface/transformers.js/issues/635 | closed | [
"question"
] | 2024-03-08T13:07:18Z | 2024-03-08T13:22:57Z | null | chawaa |
huggingface/peft | 1,545 | How to use lora to finetune a moe model | https://github.com/huggingface/peft/issues/1545 | closed | [] | 2024-03-08T11:45:09Z | 2024-04-16T15:03:39Z | null | Minami-su |
huggingface/datatrove | 119 | How about making a Ray executor for deduplication? | - https://github.com/ChenghaoMou/text-dedup/blob/main/text_dedup/minhash_spark.py
- reference: https://github.com/alibaba/data-juicer/blob/main/data_juicer/core/ray_executor.py
- Ray is simpler and faster than Spark
| https://github.com/huggingface/datatrove/issues/119 | closed | [] | 2024-03-08T11:37:13Z | 2024-04-11T12:48:53Z | null | simplew2011 |
huggingface/transformers.js | 634 | For nomic-ai/nomic-embed-text-v1 8192 context length | ### Question
As per document: https://huggingface.co/nomic-ai/nomic-embed-text-v1
The model supports an 8192 context length; however, in transformers.js, model_max_length is 512.
Any guidance on how to use the full context (8192) instead of 512? | https://github.com/huggingface/transformers.js/issues/634 | closed | [
"question"
] | 2024-03-08T05:33:39Z | 2025-10-13T04:57:49Z | null | faizulhaque |
huggingface/diffusers | 7,254 | Request proper examples on how to train diffusion models with diffusers on a large-scale dataset like LAION | Hi, I do not see any examples in diffusers/examples on how to train diffusion models with diffusers on a large-scale dataset like LAION. However, it is important since many works are willing to integrate their models into diffusers, so if they can train their models in diffusers, it would be easier when t...
"stale"
] | 2024-03-08T01:31:33Z | 2024-06-30T05:27:57Z | null | Luciennnnnnn |
huggingface/swift-transformers | 56 | How to get models? | Missing in docu? | https://github.com/huggingface/swift-transformers/issues/56 | closed | [] | 2024-03-07T15:47:54Z | 2025-02-11T11:41:32Z | null | pannous |
huggingface/datasets | 6,721 | Hi, do you know how to load a dataset from a local file now? | Hi, if I want to load a dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| https://github.com/huggingface/datasets/issues/6721 | open | [] | 2024-03-07T13:58:40Z | 2024-03-31T08:09:25Z | null | Gera001 |
pytorch/executorch | 2,293 | How to analyze executorch .pte file performance? | I am looking for a way to either benchmark the .pte files performance, the final state of the ExecutorchProgramManager object, or similar after following [this](https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html) tutorial. I used the PyTorch profiler on the model before putting it throug... | https://github.com/pytorch/executorch/issues/2293 | closed | [
"module: devtools"
] | 2024-03-07T12:12:41Z | 2025-02-03T22:04:48Z | null | mmingo848 |
huggingface/transformers.js | 633 | Is 'aggregation_strategy' parameter available for token classification pipeline? | ### Question
Hi, I have a question.
From HuggingFace Transformers documentation, they have **'aggregation_strategy'** parameter in token classification pipeline. [Link](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TokenClassificationPipeline.aggregation_strategy)
Need to know in th... | https://github.com/huggingface/transformers.js/issues/633 | open | [
"help wanted",
"good first issue",
"question"
] | 2024-03-07T07:02:55Z | 2024-06-09T15:16:56Z | null | boat-p |
pytorch/xla | 6,674 | How to minimize memory expansion due to padding during sharding | Hello
For a model that can be sharded with model parallelization on a TPUv4 (4x32) device, I am getting the error below at the beginning of training on a TPUv3 (8x16) device. There is `4x expansion` according to the console message. Even though both TPUv4 and TPUv3 devices have the same total memory, I cannot run the trai...
"performance",
"distributed"
] | 2024-03-06T15:23:31Z | 2025-04-18T18:42:38Z | null | mfatih7 |
huggingface/swift-coreml-diffusers | 93 | Blocked at "loading" screen - how to reset the app / cache ? | After playing a bit with the app, it now stays in "Loading" state at startup (see screenshot)
I tried to remove the cache in `~/Library/Application Support/hf-diffusion-models`, but it just causes a re-download.
How can I reset the app, delete all files created, and start as on a fresh machine again?
Alternati... | https://github.com/huggingface/swift-coreml-diffusers/issues/93 | open | [] | 2024-03-06T12:50:29Z | 2024-03-10T11:24:49Z | null | sebsto |
huggingface/chat-ui | 905 | Fail to create assistant. | I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llamaLlama-2-70b-chat-hf as the model. Using the image and model mentioned above, I set up a large language model dialog service on server A. Assume that the IP address of the server A is x.x.x.x.
I use dock... | https://github.com/huggingface/chat-ui/issues/905 | open | [] | 2024-03-06T08:33:03Z | 2024-03-06T08:33:03Z | 0 | majestichou |
pytorch/serve | 3,004 | How to 'Create model archive pod and run model archive file generation script' in the "User Guide" | ### 🐛 Describe the bug
I'm reading the KServe User Guide. One part of 'Deploy a PyTorch Model with TorchServe InferenceService' is hard to understand.
3 'Create model archive pod and run model archive file generation script'
3.1 Create model archive pod and run model archive file generation script... | https://github.com/pytorch/serve/issues/3004 | open | [
"triaged",
"kfserving"
] | 2024-03-06T07:42:50Z | 2024-03-07T07:06:52Z | null | Enochlove |
huggingface/chat-ui | 904 | Running the project with `npm run dev`, but it does not hot reload. | Am I alone in this issue or are you just developing without hot reload? Does anyone have any ideas on how to resolve it?
**UPDATES:**
It happens whenever you're running it on WSL.
I guess this is an unrelated issue so feel free to close, but would still be nice to know how to resolve this. | https://github.com/huggingface/chat-ui/issues/904 | closed | [] | 2024-03-06T03:34:21Z | 2024-03-06T16:07:11Z | 2 | CakeCrusher |
huggingface/dataset-viewer | 2,550 | More precise dataset size computation | Currently, the Hub uses the `/size` endpoint's `num_bytes_original_files` value to display the `Size of downloaded dataset files` on a dataset's card page. However, this value does not consider a possible overlap between the configs' data files (and simply [sums](https://github.com/huggingface/datasets-server/blob/e4aa... | https://github.com/huggingface/dataset-viewer/issues/2550 | open | [
"question",
"P2"
] | 2024-03-05T22:22:24Z | 2024-05-24T20:59:36Z | null | mariosasko |
pytorch/serve | 3,001 | Clean up metrics documentation | ### 📚 The doc issue
The metrics documentation has a lot of information spread across different subsections, making it difficult to know what's the right way to use metrics.
### Suggest a potential alternative/fix
For older versions of TorchServe, one can always go to the tag and check the Readme.
Cl... | https://github.com/pytorch/serve/issues/3001 | closed | [
"documentation",
"internal"
] | 2024-03-05T20:49:32Z | 2024-04-26T21:32:45Z | 0 | agunapal |
huggingface/datasets | 6,719 | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset... | https://github.com/huggingface/datasets/issues/6719 | open | [] | 2024-03-05T15:55:13Z | 2024-03-05T15:55:13Z | 0 | ssharpe42 |
huggingface/chat-ui | 899 | Bug--Llama-2-70b-chat-hf error: `truncate` must be strictly positive and less than 1024. Given: 3072 | I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llamaLlama-2-70b-chat-hf as the model.
In the model field of the .env.local file, I have the following settings
```
MODELS=`[
{
"name": "meta-llama/Llama-2-70b-chat-hf",
"endpoints": [{... | https://github.com/huggingface/chat-ui/issues/899 | open | [
"support",
"models"
] | 2024-03-05T12:27:45Z | 2024-03-06T00:59:10Z | 4 | majestichou |
huggingface/tokenizers | 1,468 | How to convert tokenizers.tokenizer to XXTokenizerFast in transformers? | ### Motivation
I followed the guide [build-a-tokenizer-from-scratch](https://huggingface.co/docs/tokenizers/quicktour#build-a-tokenizer-from-scratch) and got a single tokenizer.json from my corpus. Since I'm not sure if it is compatible with the trainer, I want to convert it back to XXTokenizerFast in transformers.
... | https://github.com/huggingface/tokenizers/issues/1468 | closed | [
"Stale",
"planned"
] | 2024-03-05T06:32:27Z | 2024-07-21T01:57:17Z | null | rangehow |
pytorch/pytorch | 121,203 | How to clear GPU memory without restarting kernel when using a PyTorch model | ## Issue description
I am currently using pytorch's model on my windows computer, using python scripts running on vscode.
I want to be able to load and release the model repeatedly in a resident process, where releasing the model requires fully freeing the memory of the currently used GPU, including freeing the cach... | https://github.com/pytorch/pytorch/issues/121203 | open | [
"module: cuda",
"triaged"
] | 2024-03-05T05:58:49Z | 2024-03-06T15:21:20Z | null | Doctor-Damu |
huggingface/gsplat.js | 71 | How to support VR? | It's great to be able to use vr on a vr device. | https://github.com/huggingface/gsplat.js/issues/71 | closed | [] | 2024-03-05T05:03:17Z | 2024-03-05T07:55:53Z | null | did66 |
huggingface/tgi-gaudi | 95 | How to use FP8 feature in TGI-gaudi | ### System Info
The FP8 quantization feature has been incorporated into the TGI-Gaudi branch. However, guidance is needed on how to utilize this feature. The process involves running the FP8 quantization through Measurement Mode and Quantization Mode. How to enable FP8 using the TGI 'docker run' command? Could you kin... | https://github.com/huggingface/tgi-gaudi/issues/95 | closed | [] | 2024-03-05T02:50:08Z | 2024-05-06T09:03:15Z | null | lvliang-intel |
huggingface/accelerate | 2,521 | how to set `num_processes` in multi-node training | Is it the total num of gpus or the number of gpus on a single node?
I have seen contradictory signals in the code.
https://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/docs/source/usage_guides/ipex.md?plain=1#L139 https://github.com/huggingface/accelerate/blob/ee004674b9560976688e... | https://github.com/huggingface/accelerate/issues/2521 | closed | [] | 2024-03-04T13:03:57Z | 2025-12-22T01:53:32Z | null | lxww302 |
huggingface/distil-whisper | 95 | How to use distil-whisper-large-v3-de-kd model from HF? | Officially, multi-language support is still not implemented in distil-whisper.
But I noticed that the esteemed @sanchit-gandhi uploaded a German model for distil-whisper to HuggingFace, called 'distil-whisper-large-v3-de-kd'.
How can I use this specific model for transcribing something? | https://github.com/huggingface/distil-whisper/issues/95 | open | [] | 2024-03-04T12:01:13Z | 2024-04-02T09:40:46Z | null | Arche151 |
huggingface/transformers.js | 623 | Converted QA model answers in lower case, original model does not. What am I doing wrong? | ### Question
I have converted [deutsche-telekom/electra-base-de-squad2](https://huggingface.co/deutsche-telekom/electra-base-de-squad2) to ONNX using ```python -m scripts.convert --quantize --model_id deutsche-telekom/electra-base-de-squad2```. The ONNX model, used with the same code, yields returns in lower case, whe... | https://github.com/huggingface/transformers.js/issues/623 | open | [
"question"
] | 2024-03-04T11:56:44Z | 2024-03-04T11:56:44Z | null | MarceloEmmerich |
pytorch/kineto | 885 | How to add customized metadata with on-demand profiling? | When profiling with `torch.profiler.profile`, the generated json file has a section called `distributedInfo`, shown as below
```json
{
"distributedInfo": {"backend": "nccl", "rank": 0, "world_size": 2}
}
```
But there's no such section in the generated file when on-demand profiling is triggered. As a result, Holistic... | https://github.com/pytorch/kineto/issues/885 | closed | [
"bug"
] | 2024-03-04T09:41:04Z | 2024-07-08T21:53:03Z | null | staugust |
pytorch/executorch | 2,226 | How do you get executorch to run within Mbed OS? | Hi guys,
We serialized a PyTorch module to a .pte file for the Cortex-M architecture by following this example:
https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html. Additionally, we have a P-Nucleo-WB55 development platform. We want to run the module on the development platform using Mbed OS. How do ...
pytorch/test-infra | 4,980 | Provide the range of commits where a disabled test is effectively disabled | In the current implementation, disabling a test or enabling it (via a GitHub issues) take effect globally across all trunk and PR jobs. The good thing about this approach is that disabling a test is trivial. However, enabling them is still a tricky business. A common scenario is that a forward fix will address the i... | https://github.com/pytorch/test-infra/issues/4980 | open | [
"enhancement"
] | 2024-03-02T06:58:40Z | 2024-03-02T06:58:40Z | null | huydhn |
huggingface/transformers.js | 618 | How do I convert a DistilBERT Model to Quantized ONNX - | ### Question
Note, https://huggingface.co/docs/transformers.js/en/index#convert-your-models-to-onnx is a broken link.
I have a simple DistilBERT model I'm trying to load with the examples/next-server (wdavies/public-question-in-text)
I tried the simplest version of converting to ONNX (wdavies/public-onnx-test f... | https://github.com/huggingface/transformers.js/issues/618 | closed | [
"question"
] | 2024-03-01T16:55:16Z | 2024-03-02T00:47:40Z | null | davies-w |
huggingface/sentence-transformers | 2,521 | Is the implementation of `MultipleNegativesRankingLoss` right? | It is confusing why the labels are `range(len(scores))`.
```python
class MultipleNegativesRankingLoss(nn.Module):
def __init__(self, model: SentenceTransformer, scale: float = 20.0, similarity_fct=util.cos_sim):
super(MultipleNegativesRankingLoss, self).__init__()
self.model = model
se... | https://github.com/huggingface/sentence-transformers/issues/2521 | closed | [
"question"
] | 2024-03-01T10:13:35Z | 2024-03-04T07:01:12Z | null | ghost |
huggingface/text-embeddings-inference | 178 | How to specify a local model | ### Feature request
model=BAAI/bge-reranker-large
volume=$PWD/data
docker run -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.0 --model-id $model
### Motivation
model=BAAI/bge-reranker-large
volume=$PWD/data
docker run -p 8080:80 -v $volume:/data --pull always ghcr.i... | https://github.com/huggingface/text-embeddings-inference/issues/178 | closed | [] | 2024-03-01T09:40:07Z | 2024-03-01T16:54:27Z | null | yuanjie-ai |
huggingface/chat-ui | 889 | How does huggingchat prompt the model to generate HTML output? | How does Huggingchat prompt the LLM to generate HTML output? Where can I find that prompt? I'd like to tweak it. thanks! | https://github.com/huggingface/chat-ui/issues/889 | open | [] | 2024-02-29T17:20:01Z | 2024-03-05T18:45:56Z | null | vgoklani |
huggingface/chat-ui | 888 | Code LLAMA doesn't work | I am simply entering this prompt:
```
You're given the following regex in python: \| *([^|]+?) *\|
This captures text values in markdown tables but fails to capture numbers. Update this regex to capture numbers as well
```
Then what happens is that my 1 core of CPU is used 100% for at least for 5 mins until ... | https://github.com/huggingface/chat-ui/issues/888 | closed | [] | 2024-02-29T12:44:20Z | 2025-01-01T11:54:48Z | 1 | lordsoffallen |
huggingface/text-generation-inference | 1,615 | How to use the grammar support feature? | ### Feature request
![image](https://github.com/huggingface/text-generation-inference/assets/20776657/4b27034d-3283-42fb-b186-93f9ced08a4e)
Can you please clarify how we can use this? what is it for?
### Motivation
![image](https://github.com/huggingface/text-generation-inference/assets/20776657/a9b5SlyJ)... | https://github.com/huggingface/text-generation-inference/issues/1615 | | | | | | |
huggingface/datasets | 6,700 | | ...In the text classification example of transformers v4.38.1, the columns are not removed.
h... | https://github.com/huggingface/datasets/issues/6700 | closed | [] | 2024-02-28T12:36:22Z | 2024-04-02T17:15:28Z | 3 | shelfofclub |
pytorch/serve | 2,978 | Broken example for a custom Counter metric | ### 📚 The doc issue
The example in the section [Add Counter based metrics](https://github.com/pytorch/serve/blob/18d56ff56e05de48af0dfabe0019f437f332a868/docs/metrics.md#add-counter-based-metrics) shows how to add custom Counter metric:
```
# Create a counter with name 'LoopCount' and dimensions, initial value
met... | https://github.com/pytorch/serve/issues/2978 | closed | [
"triaged"
] | 2024-02-28T12:26:30Z | 2024-03-20T21:56:12Z | 3 | feeeper |
pytorch/TensorRT | 2,665 | ❓ [Question] operator being decomposed rather than being converted when a corresponding converter exists? | ## ❓ Question
From the debug log below, it seems that the `aten.grid_sampler_2d` operator gets decomposed into several lower-level operators. But isn't there a corresponding [converter](https://github.com/pytorch/TensorRT/blob/9a100b6414bee175040bcaa275ecb71df54836e4/py/torch_tensorrt/dynamo/conversion/aten_ops_conv... | https://github.com/pytorch/TensorRT/issues/2665 | closed | [
"question"
] | 2024-02-28T06:35:20Z | 2024-07-27T08:20:37Z | null | HolyWu |
huggingface/optimum | 1,729 | tflite support for gemma | ### Feature request
As per the title, are there plans to support gemma in tflite?
### Motivation
necessary format for current work
### Your contribution
no | https://github.com/huggingface/optimum/issues/1729 | closed | [
"feature-request",
"tflite",
"Stale"
] | 2024-02-27T17:15:54Z | 2025-01-19T02:04:34Z | 2 | Kaya-P |
huggingface/huggingface_hub | 2,051 | How to edit the cache dir, and how to resume a download from the last download point on a bad network | OSError: Consistency check failed: file should be of size 1215993967 but has size 118991296 (pytorch_model.bin).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.... | https://github.com/huggingface/huggingface_hub/issues/2051 | closed | [] | 2024-02-27T14:45:10Z | 2024-02-27T15:59:35Z | null | caihua |
huggingface/candle | 1,769 | [Question] How to modify Mistral to enable multiple batches? | Hello everybody,
I am attempting to implement multiple batches for the Mistral forward pass. However, the `forward` method takes an argument `seqlen_offset` which seems to be specific to the batch. I have attempted to implement it with a `position_ids` tensor in [this](https://github.com/EricLBuehler/mistral.rs/blob... | https://github.com/huggingface/candle/issues/1769 | closed | [] | 2024-02-27T13:18:18Z | 2024-03-01T14:01:21Z | null | EricLBuehler |
huggingface/datatrove | 108 | How to load a dataset with the output of a tokenizer? | I planned to use datatrove to apply my tokenizer so that the data is ready to use with nanotron.
I am using DocumentTokenizer[Merger] which produces *.ds and *ds.index binary files, although, from what I understood, nanotron is expecting datasets (with "input_ids" keys).
I see that things like ParquetWriter cannot be pip... | https://github.com/huggingface/datatrove/issues/108 | closed | [] | 2024-02-27T08:58:09Z | 2024-05-07T12:33:47Z | null | Jeronymous |
pytorch/audio | 3,750 | I have some questions about RNNT loss. |
hello
I would like to ask you a question that may be somewhat trivial.
The shape of the logits for RNNT loss is (batch, max_seq_len, max_target_len+1, class).
Why is it max_target_len+1 here?
Shouldn't the number of classes be the total vocab size + 1, because blank is included?
I don't understand at all.
Is th... | https://github.com/pytorch/audio/issues/3750 | open | [] | 2024-02-26T11:39:39Z | 2024-02-26T13:09:30Z | 6 | girlsending0 |
huggingface/chat-ui | 875 | Difficulty configuring multiple instances of the same model with distinct parameters | I am currently self-deploying an application that requires setting up multiple instances of the same model, each configured with different parameters. For example:
```
MODELS=`[{
"name": "gpt-4-0125-preview",
"displayName": "GPT 4",
"endpoints" : [{
"type": "openai"
}]
},
{
... | https://github.com/huggingface/chat-ui/issues/875 | open | [] | 2024-02-26T10:48:43Z | 2024-02-27T17:28:21Z | 1 | mmtpo |
huggingface/optimum-nvidia | 76 | How to install optimum-nvidia properly without building a docker image | It's quite hard for me to build a docker image, so I started from a docker environment with TensorRT LLM 0.6.1 inside.
I checked your dockerfile, followed the process, and built TensorRT LLM using (I am using a 4090, so the cuda arch is 89):
```
python3 scripts/build_wheel.py -j --trt_root /usr/local/tensorrt --py... | https://github.com/huggingface/optimum-nvidia/issues/76 | closed | [] | 2024-02-26T05:05:24Z | 2024-03-11T13:36:18Z | null | Yuchen-Cao |
pytorch/examples | 1,235 | Testing a C++ case with MPI failed. | ### 🐛 Describe the bug
I am testing the following example:
https://github.com/pytorch/examples/blob/main/cpp/distributed/dist-mnist.cpp
I get the following error:
[ 50%] Building CXX object CMakeFiles/awcm.dir/xdist.cxx.o
/home/alamj/TestCases/tests/xtorch/xdist/xdist.cxx:1:10: fatal error: c10d/ProcessGrou... | https://github.com/pytorch/examples/issues/1235 | open | [] | 2024-02-25T19:34:24Z | 2024-12-04T15:08:51Z | 1 | alamj |
huggingface/diffusers | 7,088 | Vague error: `ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.` How to fix? | Trying to convert a .safetensors stable diffusion model to whatever format Hugging Face requires. It throws a vague non sequitur of an error:
`pipe = diffusers.StableDiffusionPipeline.from_single_file(str(aPathlibPath/"vodkaByFollowfoxAI_v40.safetensors") )`
```...
[1241](file:///C:/Users/openSourc... | https://github.com/huggingface/diffusers/issues/7088 | closed | [
"stale",
"single_file"
] | 2024-02-25T15:03:07Z | 2024-09-17T21:56:26Z | null | openSourcerer9000 |
huggingface/diffusers | 7,085 | How to train controlnet with lora? | Training the full controlnet needs a lot of resources and time, so how can I train controlnet with lora?
| https://github.com/huggingface/diffusers/issues/7085 | closed | [
"should-move-to-discussion"
] | 2024-02-25T06:31:47Z | 2024-03-03T06:38:35Z | null | akk-123 |
huggingface/optimum-benchmark | 138 | How to set trt llm backend parameters | I am trying to run the trt_llama example: https://github.com/huggingface/optimum-benchmark/blob/main/examples/trt_llama.yaml
It seems optimum-benchmark will automatically transform the huggingface model into an inference engine file and then benchmark its performance. When we use tensorrt llm, there is a model "build" pro...
huggingface/optimum-nvidia | 75 | How to build this environment without docker? | My computer does not support the use of docker. How do I deploy this environment on my computer? | https://github.com/huggingface/optimum-nvidia/issues/75 | open | [] | 2024-02-24T16:59:37Z | 2024-03-06T13:45:18Z | null | lemon-little |
huggingface/accelerate | 2,485 | How to log information into a local logging file? | ### System Info
```Shell
Hi, I want to save a copy of the logs to a local file; how can I achieve this? Specifically, I want accelerator.log to also write information to my local file.
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ fo... | https://github.com/huggingface/accelerate/issues/2485 | closed | [] | 2024-02-24T07:52:55Z | 2024-04-03T15:06:24Z | null | Luciennnnnnn |
huggingface/optimum-benchmark | 136 | (question) When I use the memory tracking feature on the GPU, I find that my VRAM is reported as 0. Is this normal, and what might be causing it? | 
| https://github.com/huggingface/optimum-benchmark/issues/136 | closed | [] | 2024-02-24T02:57:49Z | 2024-03-08T16:59:41Z | null | WCSY-YG |
huggingface/optimum | 1,716 | Optimum for Jetson Orin Nano | ### System Info
```shell
optimum version: 1.17.1
platform: Jetson Orin Nano, Jetpack 6.0
Python: 3.10.13
CUDA: 12.2
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such ... | https://github.com/huggingface/optimum/issues/1716 | open | [
"bug"
] | 2024-02-23T23:22:08Z | 2024-02-26T10:03:59Z | 1 | JunyiYe |
huggingface/transformers | 29,244 | Google Gemma doesn't know what 1+1 is equal to | ### System Info
[v4.38.1](https://github.com/huggingface/transformers/releases/tag/v4.38.1)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ..... | https://github.com/huggingface/transformers/issues/29244 | closed | [] | 2024-02-23T12:16:17Z | 2024-03-07T10:54:09Z | null | zhaoyun0071 |
huggingface/optimum | 1,713 | Issue converting owlv2 model to ONNX format | Hi Team,
I hope this message finds you well.
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
`! optimum-cli export onnx -m google/owlv2-base-patch16 --task 'zero-shot-object-detection' --framework 'pt' owlv2_onnx`
... | https://github.com/huggingface/optimum/issues/1713 | closed | [
"feature-request",
"onnx",
"exporters"
] | 2024-02-23T05:55:23Z | 2025-09-10T23:26:13Z | 6 | n9s8a |
huggingface/optimum-benchmark | 135 | How to import and use the quantized model with AutoGPTQ? | https://github.com/huggingface/optimum-benchmark/issues/135 | closed | [] | 2024-02-23T03:13:28Z | 2024-02-23T05:03:06Z | null | jhrsya |
pytorch/serve | 2,962 | Update documentation on deprecating mac x86 support | ### 🐛 Describe the bug
PyTorch is deprecating support for x86 macs. TorchServe will also do the same.
### Error logs
N/A
### Installation instructions
N/A
### Model Packaing
N/A
### config.properties
_No response_
### Versions
N/A
### Repro instructions
N/A
### Possible Solution
_No response_ | https://github.com/pytorch/serve/issues/2962 | open | [
"documentation"
] | 2024-02-22T22:53:33Z | 2024-03-26T20:58:19Z | 0 | agunapal |
huggingface/optimum | 1,710 | Native Support for Gemma | ### System Info
```shell
python version : 3.10.12
optimum version : built from github
openvino : 2024.1.0-14548-688c71ce0ed
transformers : 4.38.1
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially suppo... | https://github.com/huggingface/optimum/issues/1710 | closed | [
"feature-request",
"onnx",
"exporters"
] | 2024-02-22T17:15:08Z | 2024-02-28T08:37:36Z | 5 | Kaya-P |
huggingface/sentence-transformers | 2,499 | How can I save a fine-tuned cross-encoder to HF and then download it from HF | I'm looking for ways to share a fine-tuned cross-encoder with my teacher.
The cross-encoder model does not have a native push_to_hub() method, so I decided to use the general approach:
```
from transformers import AutoModelForSequenceClassification
import torch
# read from disk, model was saved as ft_model.save("model/cr... | https://github.com/huggingface/sentence-transformers/issues/2499 | closed | [
"good first issue"
] | 2024-02-22T15:29:37Z | 2025-03-25T16:07:25Z | null | satyrmipt |
huggingface/transformers | 29,214 | How to get input embeddings from PatchTST with (batch_size, sequence_length, hidden_size) dimensions | ### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following sni... | https://github.com/huggingface/transformers/issues/29214 | open | [
"Feature request"
] | 2024-02-22T14:17:10Z | 2024-03-25T03:56:58Z | null | nikhilajoshy |
pytorch/TensorRT | 2,653 | ❓ [Question] Can torch_tensorRT be used in C++ with multiprocessing using fork? | ## ❓ Question
Can torch_tensorRT be used in C++ with multiprocessing using fork?
## What you have already tried
I have doubts if this library can be used in C++ multiprocessing (using fork()) where each process loads a TorchScript model compiled for Torch-TensorRT. I have the pipeline that works with no Torch-... | https://github.com/pytorch/TensorRT/issues/2653 | open | [
"question"
] | 2024-02-22T14:10:57Z | 2024-02-23T22:04:21Z | null | peduajo |
huggingface/huggingface_hub | 2,039 | How to find out the type of files in the repository | Hello
Is there an option to determine the type of file in the repository, such as "Checkpoint", "LORA", "Textual_Inversion", etc?
I didn't know where to ask the question so sorry if I'm wrong. | https://github.com/huggingface/huggingface_hub/issues/2039 | closed | [] | 2024-02-22T01:41:29Z | 2024-03-25T11:39:31Z | null | suzukimain |
pytorch/serve | 2,955 | CPP backend debugging and troubleshooting | ### 🚀 The feature
For ease of debugging and troubleshooting for the CPP backend add following:
- [ ] In the TS startup logs, add explicit log line for successful startup of CPP backend
- [x] In the TS print environment add details for the CPP backend
- [x] Cleanup steps for the build script
- [x] FAQ page for... | https://github.com/pytorch/serve/issues/2955 | open | [
"documentation"
] | 2024-02-22T01:34:36Z | 2024-03-26T20:59:22Z | 0 | chauhang |
huggingface/datasets | 6,686 | Question: Is there any way to upload a large image dataset? | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si... | https://github.com/huggingface/datasets/issues/6686 | open | [] | 2024-02-21T22:07:21Z | 2024-05-02T03:44:59Z | 1 | zhjohnchan |
pytorch/tutorials | 2,773 | pipeline_tutorial failing due to dead torchtext link | Line 55 of https://github.com/pytorch/tutorials/blob/082c8b1bddb48b75f59860db3679d8c439238f10/intermediate_source/pipeline_tutorial.py is using torchtext to download a dataset that canβt be accessed right now (maybe got taken down, Iβm looking for an alternative link but torchtext is no longer maintained)
Can this t... | https://github.com/pytorch/tutorials/issues/2773 | closed | [] | 2024-02-21T21:02:25Z | 2024-05-15T16:36:22Z | 3 | clee2000 |
pytorch/TensorRT | 2,649 | ❓ [Question] torch_tensorrt.dynamo.compile hangs indefinitely mid compilation? | ## ❓ Question
torch_tensorrt.dynamo.compile hangs indefinitely mid compilation; cpu usage is through the roof, and having debug = True shows that there's a step where it fails
## What you have already tried
I tried compiling with torchscript and it works well enough, but I wanted to test the dynamo backend.
## ... | https://github.com/pytorch/TensorRT/issues/2649 | open | [
"question"
] | 2024-02-21T16:27:28Z | 2024-02-26T18:07:44Z | null | Antonyesk601 |
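Each row above follows the same 11-column schema (repo, number, title, body, url, state, labels, created_at, updated_at, comments, user). Below is a minimal sketch of consuming rows like these with the `datasets` library; the dataset id is a placeholder, not the actual Hub repository hosting this dump.

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the Hub repository that actually hosts these rows.
ds = load_dataset("example-org/github-issues", split="train", streaming=True)

# Keep only open issues carrying the "question" label; `labels` is a list of strings.
questions = ds.filter(lambda row: row["state"] == "open" and "question" in row["labels"])

# Peek at a few matching rows without materializing the whole split.
for row in questions.take(3):
    print(row["repo"], row["number"], row["title"])
```

Streaming mode avoids downloading the full dump before filtering, which matters for a table of this size.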