Column types: repo (string, 147 classes), number (int64, 1 to 172k), title (string, 2 to 476 chars), body (string, 0 to 5k chars), url (string, 39 to 70 chars), state (string, 2 classes), labels (list, 0 to 9 items), created_at (timestamp[ns, tz=UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18), updated_at (timestamp[ns, tz=UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39), comments (int64, 0 to 58, nullable), user (string, 2 to 28 chars). A loading sketch follows the table.

| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/audio | 3,796 | How to use my finetuned version of wav2vec2 for forced alignment as shown in example/ | ### 🐛 Describe the bug
The example script I am following uses the default pretrained model, whereas I want to use my own finetuned model.
https://pytorch.org/audio/main/generated/torchaudio.pipelines.Wav2Vec2FABundle.html#torchaudio.pipelines.Wav2Vec2FABundle
### Versions
[pip3] mypy-extensions==1.0.0
[pip3] nump... | https://github.com/pytorch/audio/issues/3796 | open | [] | 2024-05-19T19:13:25Z | 2024-05-19T19:13:25Z | null | omerarshad |
huggingface/tokenizers | 1,534 | How to allow the merging of consecutive newline tokens \n when training a byte-level bpe tokenizer? | Hello, I'm currently working on training a byte-level BPE tokenizer using the Huggingface tokenizers library. I've created a simple training script, a sample corpus, and provided the output produced by this script. My aim is to understand why consecutive newline tokens `\n` are not being merged into a single token `\n\... | https://github.com/huggingface/tokenizers/issues/1534 | open | [
"bug"
] | 2024-05-18T03:11:35Z | 2025-07-07T09:34:16Z | null | liuslnlp |
huggingface/transformers | 30,886 | How to get the data seen by the model during training? | Hi! I haven't been able to find an answer to my question so opening an issue here. I'm fine-tuning the GPT-2 XL model using the trainer for 10 epochs and I'd like to save the data seen by the model during each epoch. More specifically, I want to save the data seen by the model every 242 steps. For instance, data seen f... | https://github.com/huggingface/transformers/issues/30886 | closed | [] | 2024-05-17T21:32:50Z | 2024-05-20T17:26:29Z | null | jaydeepborkar |
huggingface/optimum | 1,859 | Improve inference time TrOCR | I have a fine-tuned TrOCR model, and I'm using
`from optimum.onnxruntime import ORTModelForVision2Seq`
How can I then make inference faster when someone makes a request to an API endpoint? I am already using async for multiple requests. | https://github.com/huggingface/optimum/issues/1859 | closed | [
"question",
"inference",
"Stale"
] | 2024-05-16T13:31:53Z | 2024-12-18T02:06:21Z | null | CrasCris |
huggingface/chat-ui | 1,148 | Chat-ui Audit Logs | Hello,
Is there a way to log the username, sessionID, conversation ID, and what question was sent, in some type of log in chat-ui? Or just the username and the question?
How can we accomplish this?
Thanks | https://github.com/huggingface/chat-ui/issues/1148 | open | [] | 2024-05-16T11:13:30Z | 2024-05-21T18:48:17Z | 5 | Neb2653 |
huggingface/diffusers | 7,957 | How to implement `IPAdapterAttnProcessor2_0` with xformers | I want to fine-tune IP-adapter model with xformers, but I did not find the implementation of the xformers version corresponding to IPAdapterAttnProcessor2_0. I want to implement attention processor in xformers, are the following two lines of code the only difference between the two versions?
In `XFormersAttnProcesso... | https://github.com/huggingface/diffusers/issues/7957 | closed | [] | 2024-05-16T08:54:07Z | 2024-05-23T13:03:42Z | null | JWargrave |
pytorch/xla | 7,070 | Cannot Import _XLAC | ## ❓ Questions and Help
When I try to import torch_xla, the following error occurs:
```shell
>>> import torch_xla
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/code/pytorch/torch-xla/torch_xla/__init__.py", line 114, in <module>
import _XLAC
ImportError: /code/pytorch/torch-xla/_XLA... | https://github.com/pytorch/xla/issues/7070 | open | [
"question"
] | 2024-05-16T07:24:08Z | 2025-04-17T13:38:56Z | null | DarkenStar |
huggingface/OBELICS | 12 | How to use LDA for topic modeling | Thanks for your work again!
In the paper the topic modeling of OBELICS is implemented using LDA, and I am wondering what specific LDA model was used, what settings were used to train the model, and most importantly, how the topics were derived from the keywords and weights (like using LLMs)? Thank you for answering... | https://github.com/huggingface/OBELICS/issues/12 | open | [] | 2024-05-16T03:56:29Z | 2024-06-11T16:27:12Z | null | jrryzh |
huggingface/transformers.js | 765 | Can you use all transformers models with transformers.js? | ### Question
Hi,
can you use [all transformers models ](https://huggingface.co/models?library=transformers&sort=trending)(which seem to be listed under the python library) also in transformers.js? If yes, how so? Just download and provide the local path? I'm working in nodejs right now.
For example I'd like to u... | https://github.com/huggingface/transformers.js/issues/765 | open | [
"question"
] | 2024-05-15T19:35:28Z | 2024-05-15T21:21:57Z | null | Sir-hennihau |
huggingface/datasets | 6,899 | List of dictionary features get standardized | ### Describe the bug
Hi, I'm trying to create an HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | https://github.com/huggingface/datasets/issues/6899 | open | [] | 2024-05-15T14:11:35Z | 2025-04-01T20:48:03Z | 2 | sohamparikh |
huggingface/transformers | 30,827 | Using this command (optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/) to perform ONNX conversion, it is found that the tensor type of the model becomes int64. How can this problem be solved? | ### System Info
transformers version : 4.38.1
platform: ubuntu 22.04
python version : 3.10.14
optimum version : 1.19.2
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `exampl... | https://github.com/huggingface/transformers/issues/30827 | closed | [] | 2024-05-15T12:45:50Z | 2024-06-26T08:04:10Z | null | JameslaoA |
pytorch/executorch | 3,620 | how to calculate the vocab_size of new model | hi,
when I tried to introduce the "Blue LLM" model and evaluate its ppl, I got the following error:
Traceback (most recent call last):
File "/home/ufoe/anaconda3/envs/linchao/bin/lm_eval", line 8, in <module>
sys.exit(cli_evaluate())
File "/home/ufoe/linchao/lm-evaluation-harness/lm_eval/__main__.py", ... | https://github.com/pytorch/executorch/issues/3620 | closed | [] | 2024-05-15T12:20:13Z | 2024-05-16T05:12:15Z | null | l2002924700 |
huggingface/chat-ui | 1,142 | Feature request, local assistants | I experimented with a few assistants on HF.
The problem I am facing is that I don't know how to get the same behaviour I get on HF from a local model (which is the same model).
I tried everything I could think of.
I think HF does some filtering or rephrasing or has an additional prompt before the assistant description... | https://github.com/huggingface/chat-ui/issues/1142 | open | [
"support"
] | 2024-05-15T11:11:29Z | 2024-05-27T06:53:21Z | 2 | Zibri |
pytorch/extension-cpp | 93 | [feature request] Instruction on how to setup compile-env for Windows | Hi
I have been working with extensions successfully on Linux (shipping as `whl`)
An end-user has asked me to provide a Windows version of an extension, and I have to admit that it was not as simple as the documentation suggested [here](https://pytorch.org/tutorials/advanced/cpp_extension.html).
Can you please pr... | https://github.com/pytorch/extension-cpp/issues/93 | open | [] | 2024-05-15T06:10:08Z | 2024-05-15T06:10:08Z | null | litaws |
huggingface/optimum | 1,855 | how to change optimum temporary path? | ### Feature request
The C drive has too little space.
### Motivation
This would help to solve many issues.
### Your contribution
Don't know. | https://github.com/huggingface/optimum/issues/1855 | closed | [] | 2024-05-14T11:17:14Z | 2024-10-14T12:22:35Z | null | neonarc4 |
huggingface/optimum | 1,854 | ai21labs/Jamba-tiny-random support | ### Feature request
The ai21labs/Jamba-tiny-random model is not supported by Optimum export.
ValueError: Trying to export a jamba model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporte... | https://github.com/huggingface/optimum/issues/1854 | open | [
"feature-request",
"onnx"
] | 2024-05-14T10:22:05Z | 2024-10-09T09:10:58Z | 0 | frankia312 |
huggingface/transformers.js | 763 | Have you considered using wasm technology to implement this library? | ### Question
Hello, have you ever considered using wasm technology to implement this library? For example, Rust's wgpu-rs and C++'s Dawn are both implementations of WebGPU. They can be converted to wasm and can also be accelerated with SIMD. | https://github.com/huggingface/transformers.js/issues/763 | open | [
"question"
] | 2024-05-14T09:22:57Z | 2024-05-14T09:28:38Z | null | ghost |
huggingface/trl | 1,643 | How to save and resume a checkpoint from PPOTrainer | https://github.com/huggingface/trl/blob/5aeb752053876cce64f2164a178635db08d96158/trl/trainer/ppo_trainer.py#L203
It seems that every time the PPOTrainer is initialized, the accelerator is initialized as well. There's no API provided by PPOTrainer to resume checkpoints. How can we save and resume checkpoints? | https://github.com/huggingface/trl/issues/1643 | closed | [] | 2024-05-14T09:10:40Z | 2024-08-08T12:44:25Z | null | paraGONG |
huggingface/tokenizers | 1,531 | How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification | Hello.
I'm using the tokenizer to encode paired sentences with TemplateProcessing in batch_encode.
There's a confusing part where the method requires two lists for sentence A and sentence B.
According to the [guide documentation](https://huggingface.co/docs/tokenizers/quicktour): "To process a batch of sentences p... | https://github.com/huggingface/tokenizers/issues/1531 | closed | [
"Stale"
] | 2024-05-14T08:03:52Z | 2024-06-21T08:20:05Z | null | insookim43 |
pytorch/xla | 7,057 | Experiencing slow recompilation when manually building XLA | ## ❓ Questions and Help
Hi, I am interested in contributing to XLA community but I encounter a small challenge. After manually building `torch` and `torch_xla` on a CPU-based(CPU: **Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz**) Docker env, I noticed that the `python setup.py develop` process will take about **1 m... | https://github.com/pytorch/xla/issues/7057 | open | [
"question"
] | 2024-05-14T03:28:42Z | 2025-04-17T13:41:57Z | null | wenboqian |
pytorch/xla | 7,056 | Export nn.Module.forward with kwargs to StableHLO | ## ❓ Questions and Help
I see in [_exported_program_to_stablehlo_bundle()](https://github.com/pytorch/xla/blob/6f0b61e5d782913a0fc7743812f2a8e522189111/torch_xla/stablehlo.py#L318) that exporting with kwargs isn't supported _**yet**_.
Do you expect to support this in the near future?
If not, is there another way t... | https://github.com/pytorch/xla/issues/7056 | closed | [
"question",
"stablehlo"
] | 2024-05-13T21:21:42Z | 2025-04-17T13:42:55Z | null | johnmatter |
huggingface/transformers.js | 762 | Options for the "translation" pipeline when using Xenova/t5-small | ### Question
The translation pipeline is [documented](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline) to use {src_lang and tgt_lang} options to translate from the src language to the tgt language. However, when using Xenova/t5-small none of the options seem to be used. I... | https://github.com/huggingface/transformers.js/issues/762 | open | [
"question"
] | 2024-05-13T21:09:15Z | 2024-05-13T21:09:15Z | null | lucapivato |
pytorch/torchchat | 784 | Can't use TorchChat with Python-3.9 | Because of https://github.com/pytorch/torchchat/blob/a276b5fdd12d0dd843fd81543ceffb57065354e3/cli.py#L318-L319
That was added by https://github.com/pytorch/torchchat/pull/746 with a very descriptive title "CLI check"
If this is indeed a product requirement, can we specify it somewhere in README.MD (and perhaps ha... | https://github.com/pytorch/torchchat/issues/784 | closed | [
"launch blocker"
] | 2024-05-13T18:50:16Z | 2024-05-13T19:01:22Z | 2 | malfet |
huggingface/datasets | 6,894 | Better document defaults of to_json | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | https://github.com/huggingface/datasets/issues/6894 | closed | [
"documentation"
] | 2024-05-13T13:30:54Z | 2024-05-16T14:31:27Z | 0 | albertvillanova |
pytorch/TensorRT | 2,830 | ❓ [Question] How to specify that aten operators must be run by LibTorch in C++? | ## ❓ Question
When I compile the SwinTransformer model using Torch-TensorRT, an error appears:
```
terminate called after throwing an instance of 'c10::Error'
what(): 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":615, please report a bug to PyTorch. We don't have an op for aten::floor_d... | https://github.com/pytorch/TensorRT/issues/2830 | open | [
"question"
] | 2024-05-13T10:10:09Z | 2024-05-27T01:40:49Z | null | demuxin |
huggingface/chat-ui | 1,134 | Websearch failed on retrieving from pdf files | On chat ui I am getting the error as shown in screenshot, on pdf files it always says "Failed to parse webpage". I set USE_LOCAL_WEBSEARCH=True in .env.local. can anyone help me.

| https://github.com/huggingface/chat-ui/issues/1134 | open | [
"support",
"websearch"
] | 2024-05-13T06:41:08Z | 2024-06-01T09:25:59Z | 2 | prateekvyas1996 |
pytorch/xla | 7,049 | Is expert parallelism supported with SPMD? | Does torch_xla SPMD support expert parallelism?
If it is an MoE model, how should it be computed in XLA?
## ❓ Questions and Help
| https://github.com/pytorch/xla/issues/7049 | open | [
"question",
"distributed"
] | 2024-05-13T03:23:20Z | 2025-09-03T20:34:04Z | null | mars1248 |
pytorch/torchchat | 776 | [tune/chat integration] component sharing | We seem to be doing the same rote stuff, like managing checkpoints, downloading them, managing permissions, converting checkpoints and what have you...
Maybe this might be a good opportunity to reduce our joint workload by pooling some of these functions. It would likely also improve user experience thanks to consistency and... | https://github.com/pytorch/torchchat/issues/776 | closed | [] | 2024-05-13T02:44:08Z | 2024-07-21T21:50:46Z | 0 | mikekgfb |
pytorch/torchchat | 775 | [INTEGRATION] torchtune integration for e2e workflow with torchchat | Hey, I'm working my way through our documentation and trying to make it run in CI. That aligns pretty well with the user experience we have in mind, where users can just cut & paste commands…
Also, we have so many dependencies that unless we test at least the instructions for the users, nothing works…
I have a couple of ... | https://github.com/pytorch/torchchat/issues/775 | closed | [] | 2024-05-13T02:35:21Z | 2024-07-21T21:46:30Z | 1 | mikekgfb |
pytorch/torchchat | 773 | [DOCS] GGUF instructions in docs/ADVANCED-USERS.md |
the instructions for GGUF in https://github.com/pytorch/torchchat/blob/main/docs/ADVANCED-USERS.md state:
> To use the quantize tool, install the GGML tools at ${GGUF} . Then, you can, for example, convert a quantized model to f16 format:
How do I do that? Can we put this in the doc, including with a definitio... | https://github.com/pytorch/torchchat/issues/773 | closed | [] | 2024-05-13T01:26:16Z | 2024-05-20T12:56:45Z | 1 | mikekgfb |
huggingface/parler-tts | 47 | Custom pronunciation for words - any thoughts / recommendations about how best to handle them? | Hello! This is a really interesting looking project.
Currently there doesn't seem to be any way that users can help the model correctly pronounce custom words - for instance **JPEG** is something that speakers just need to know is broken down as "**Jay-Peg**" rather than **Jay-Pea-Ee-Gee**.
I appreciate this project is... | https://github.com/huggingface/parler-tts/issues/47 | open | [] | 2024-05-12T15:51:05Z | 2025-01-03T08:39:58Z | null | nmstoker |
pytorch/examples | 1,257 | multi-node Tensor Parallel | Hello, could you add a new example of tensor parallel + FSDP, but using a multi-node setup?
Is it possible to do multi-node tensor parallelization with pytorch 2.3? I am trying to use 2 nodes with 4 GPUs each.
05/12/2024 04:32:52 PM Device Mesh created: device_mesh=DeviceMesh([[0, 1, 2, 3], [4, 5, 6, 7]], mesh_... | https://github.com/pytorch/examples/issues/1257 | open | [] | 2024-05-12T15:19:26Z | 2024-11-05T09:15:28Z | 1 | PieterZanders |
pytorch/torchchat | 757 | [LAUNCH DOCS] Add instructions what needs to be installed, and how to README | At present, running the instructions in the README will fail for the xcode project. See [#755](https://github.com/pytorch/torchchat/pull/755)
At a minimum we should specify what should be installed and what the minimum xcode version (and any other requirements) are?
Also, I would expect this to fail even then,... | https://github.com/pytorch/torchchat/issues/757 | closed | [] | 2024-05-12T04:50:32Z | 2024-07-27T01:53:39Z | null | mikekgfb |
pytorch/executorch | 3,585 | How can I use ExecuTorch to deploy a model to a MicroController,such as Infineon TC3xxx ? | "ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, **embedded devices** and **microcontrollers**"
Hello, the above statement appears in the [ExecuTorch doc:](https://pytorch.org/executorch/stable/intro-overview.html)
I want to know:
... | https://github.com/pytorch/executorch/issues/3585 | closed | [
"module: backend"
] | 2024-05-11T07:13:57Z | 2025-02-05T17:22:54Z | null | AlexLuya |
pytorch/torchchat | 740 | [FEATURE REQUEST] Could not find... Probably missing HF token/login, but if so we might indicate? |
(base) mikekg@mikekg-mbp torchchat % python3 torchchat.py generate llama3 --device cpu --compile
Downloading meta-llama/Meta-Llama-3-8B-Instruct from HuggingFace...
Converting meta-llama/Meta-Llama-3-8B-Instruct to torchchat format...
known configs: ['13B', '70B', 'CodeLlama-7b-Python-hf', '34B', 'stories42M', '30... | https://github.com/pytorch/torchchat/issues/740 | closed | [] | 2024-05-10T22:18:51Z | 2024-07-30T17:22:27Z | 1 | mikekgfb |
huggingface/text-generation-inference | 1,875 | How to share memory among 2 GPUS for distributed inference? | # Environment Setup
Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: https://github.com/huggingface/text-generation-inference/commit/c38a7d7ddd9c612e368adec1ef94583be602fc7e
Docker label: sha-6c4496a
Kubernetes Cluster deployment
2 A100 GPU with 80GB RAM
12 CPU wit... | https://github.com/huggingface/text-generation-inference/issues/1875 | closed | [
"Stale"
] | 2024-05-10T08:49:05Z | 2024-06-21T01:48:05Z | null | martinigoyanes |
pytorch/pytorch | 125,902 | How to export ONNX with fixed shape output? | ### 🐛 Describe the bug
```
import torch
class TRT_SCA(torch.autograd.Function):
@staticmethod
def forward(ctx,
query,
key,
value,
reference_points,
spatial_shapes,
reference_points_cam,
... | https://github.com/pytorch/pytorch/issues/125902 | open | [
"module: onnx",
"triaged"
] | 2024-05-10T05:58:23Z | 2024-05-17T04:35:24Z | null | lix19937 |
pytorch/text | 2,264 | t5_demo can't retrieve CNNDM from drive.google; how to use local copy? | ## 🐛 Bug
**Describe the bug** A clear and concise description of what the bug is.
Following the [t5_demo](https://pytorch.org/text/stable/tutorials/t5_demo.html), but when it tries to access the CNN data at ` https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ`
**To Reproduce** Steps... | https://github.com/pytorch/text/issues/2264 | open | [] | 2024-05-10T03:55:13Z | 2024-05-10T03:55:13Z | null | rbelew |
huggingface/accelerate | 2,759 | How to specify the backend of Trainer | ### System Info
```Shell
accelerate 0.28.0
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_... | https://github.com/huggingface/accelerate/issues/2759 | closed | [] | 2024-05-10T03:18:08Z | 2025-01-16T10:29:19Z | null | Orion-Zheng |
huggingface/lerobot | 167 | python3.10 how to install rerun-sdk | ### System Info
```Shell
ubuntu18.04
python3.10
ERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)
ERROR: No matching distribution found for rerun-sdk>=0.15.1
```
### Information
- [X] One of the scripts in the examples/ folder of LeRobot
- [... | https://github.com/huggingface/lerobot/issues/167 | closed | [
"dependencies"
] | 2024-05-10T03:07:30Z | 2024-05-13T01:25:09Z | null | MountainIntelligent |
huggingface/safetensors | 478 | Can't seem to skip parameter initialization while using the `safetensors.torch.load_model` API! | ### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Te... | https://github.com/huggingface/safetensors/issues/478 | closed | [
"Stale"
] | 2024-05-09T19:12:05Z | 2024-06-15T01:49:24Z | 1 | goelayu |
pytorch/tutorials | 2,861 | Performance Tuning Guide is very out of date | ### 🚀 Describe the improvement or the new tutorial
The first thing you see when you Google PyTorch performance is this. The recipe is well written but it's very much out of date today
https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html
Some concrete things we should fix
1. For fusions we should tal... | https://github.com/pytorch/tutorials/issues/2861 | closed | [
"medium",
"docathon-h1-2024"
] | 2024-05-09T16:57:35Z | 2024-06-12T16:11:31Z | 9 | msaroufim |
pytorch/xla | 7,042 | model.to(xla_device) increases the number of named_parameters | ## 🐛 Bug
Copying the model to an XLA device changes the number of the model's parameters.

## To Reproduce
```bash
python xla/benchmarks/experiment_runner.py --suite-name torchbench --accelerator cuda --dynamo openxla -... | https://github.com/pytorch/xla/issues/7042 | closed | [
"question"
] | 2024-05-09T13:53:03Z | 2025-04-17T13:51:16Z | null | shenh10 |
pytorch/xla | 7,040 | [torchbench] The official benchmark for performance and accuracy check | ## ❓ Questions and Help
Hi I found two available codebases for testing torchbench with pytorch/xla:
1. The one provided by pytorch official: https://github.com/pytorch/pytorch/tree/main/benchmarks/dynamo
2. Another one provided by pytorch/xla team: https://github.com/pytorch/xla/tree/master/benchmarks
However fo... | https://github.com/pytorch/xla/issues/7040 | closed | [
"question",
"benchmarking"
] | 2024-05-09T08:33:21Z | 2025-04-17T13:53:39Z | null | shenh10 |
huggingface/tokenizers | 1,525 | How to write a custom Wordpiece class? | My aim is to get the rwkv5 model's "tokenizer.json", but it is implemented through a slow tokenizer (class PreTrainedTokenizer).
I want to convert the "slow tokenizer" to a "fast tokenizer"; this needs "tokenizer = Tokenizer(Wordpiece())", but rwkv5 has its own Wordpiece file.
So I want to create a custom Wordpiece
the code i... | https://github.com/huggingface/tokenizers/issues/1525 | closed | [
"Stale"
] | 2024-05-09T03:48:27Z | 2024-07-18T01:53:23Z | null | xinyinan9527 |
huggingface/trl | 1,635 | How to use trl\trainer\kto_trainer.py | If I want to use KTO trainer, I could set the parameter [loss_type == "kto_pair"] in dpo_trainer.py. Then what is kto_trainer.py used for? And how to use it? | https://github.com/huggingface/trl/issues/1635 | closed | [] | 2024-05-09T02:40:14Z | 2024-06-11T10:17:51Z | null | mazhengyufreedom |
pytorch/tutorials | 2,860 | requires_grad=True for an input datapoint? | https://github.com/pytorch/tutorials/blob/f4ebb4d007792f5bc302affa7b360a9710e4a88b/advanced_source/super_resolution_with_onnxruntime.py#L144
It is obscure to me why there is the need to set the flag requires_grad to True for datapoint "x", which has no parameters to be learnt.
Is it something required to export the... | https://github.com/pytorch/tutorials/issues/2860 | closed | [
"question",
"onnx"
] | 2024-05-08T15:25:54Z | 2025-04-16T21:22:11Z | null | ggbioing |
huggingface/datasets | 6,882 | Connection Error When Using By-pass Proxies | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(M... | https://github.com/huggingface/datasets/issues/6882 | open | [] | 2024-05-08T06:40:14Z | 2024-05-17T06:38:30Z | 1 | MRNOBODY-ZST |
huggingface/datatrove | 180 | how to turn log/traceback color off? | Trying datatrove for the first time and the program spews a bunch of logs and tracebacks in yellow and cyan which are completely unreadable on the b&w console.
Does the program make an assumption that the user is using w&b (dark) console?
I tried to grep for `color` to see how it controls the colors but found no... | https://github.com/huggingface/datatrove/issues/180 | closed | [] | 2024-05-08T03:51:11Z | 2024-05-17T17:53:20Z | null | stas00 |
pytorch/TensorRT | 2,822 | ❓ [Question] Model inference is much slower after updating to TensorRT 9.3 | ## ❓ Question
I have a ViT model for object detection. The model inference speed in the TensorRT 8.5 environment is 190ms per frame. However, when I updated to TensorRT 9.3, inference slowed down to 250ms per frame.
I acquired the C++ dynamic library by compiling the latest Torch-TensorRT source code.
What might... | https://github.com/pytorch/TensorRT/issues/2822 | open | [
"question"
] | 2024-05-08T03:20:18Z | 2025-09-03T20:08:33Z | null | demuxin |
pytorch/expecttest | 18 | How to use it in pytest based testing? | The readme seems to be written for testcase only. | https://github.com/pytorch/expecttest/issues/18 | closed | [] | 2024-05-07T22:27:37Z | 2024-05-07T23:09:38Z | null | youkaichao |
huggingface/candle | 2,171 | How to run Llama-3 or Phi with more than 4096 prompt tokens? | Could you please show me an example where a Llama-3 model is used (ideally GGUF quantized) and the initial prompt is more than 4096 tokens long? Or better, 16-64K long (for RAG). Currently everything I do ends with an error:
In this code:
let logits = model.forward(&input, 0); // input is > 4096 tokens
Error:
narrow invalid a... | https://github.com/huggingface/candle/issues/2171 | open | [] | 2024-05-07T20:15:28Z | 2024-05-07T20:16:13Z | null | baleksey |
pytorch/xla | 7,033 | constant folding for AvgPool2d | ## ❓ Questions and Help
exporting simple `AvgPool2d` using `torch_xla 2.3` results in two different `stablehlo.reduce_window` ops, the second one only takes args as constants. Is there a way to fold it into a constant in `exported_program_to_stablehlo`? @lsy323 @qihqi
e.g. `%4` in the following example.
```pyt... | https://github.com/pytorch/xla/issues/7033 | closed | [
"stablehlo"
] | 2024-05-07T07:34:11Z | 2024-09-23T21:45:42Z | 10 | thong3le |
huggingface/chat-ui | 1,115 | [v0.8.4] IMPORTANT: Talking to PDFs and general Roadmap? | Hi @nsarrazin
I have a couple of questions that I could not get answers to in the repo and on the web.
1. Is there a plan to enable file uploads (PDFs, etc) so that users can talk to those files? Similar to ChatGPT, Gemini etc?
2. Is there a feature roadmap available somewhere?
Thanks! | https://github.com/huggingface/chat-ui/issues/1115 | open | [] | 2024-05-07T06:10:20Z | 2024-09-10T15:44:16Z | 4 | adhishthite |
huggingface/candle | 2,167 | How to do an Axum SSE function for Candle? | fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> {
use std::io::Write;
self.tokenizer.clear();
let mut tokens = self
.tokenizer
.tokenizer()
.encode(prompt, true)
.map_err(E::msg)?
.get_ids()
.to_vec... | https://github.com/huggingface/candle/issues/2167 | closed | [] | 2024-05-07T02:38:50Z | 2024-05-08T04:27:14Z | null | sunnyregion |
pytorch/torchchat | 708 | --num-samples xxx does not work for getting multiple prompt responses | Previously, users could use --num-samples to get reliable benchmarking. With recent updates, num-samples no longer appears to work.
https://github.com/pytorch/pytorch/pull/125611 shows nice performance gains on gpt-fast, and @helloguo would like to validate on torchchat to ensure this also accelerates our code. ... | https://github.com/pytorch/torchchat/issues/708 | closed | [] | 2024-05-06T23:45:52Z | 2024-05-12T21:23:06Z | 1 | mikekgfb |
huggingface/optimum | 1,847 | Static Quantization for Seq2Seq models like T5 | I'm currently trying to statically quantize T5, but the optimum doc (last committed 10 months ago) says it supports only dynamic, not static, quantization. Has anyone tried this before, or has optimum updated anything related recently? Maybe someone could help me take a look? | https://github.com/huggingface/optimum/issues/1847 | open | [
"question",
"quantization"
] | 2024-05-06T19:34:30Z | 2024-10-14T12:24:28Z | null | NQTri00 |
pytorch/torchtitan | 312 | Question on Model Init | I noticed that there are two parts of the implementation that are related to model initialization.
### Instancing the model with meta tensor
https://github.com/pytorch/torchtitan/blob/f72a2a0da0bdfc394faaab9b3c0f35d0b6f5be50/train.py#L177-L181
### Doing explicit model initalization
https://github.com/pytorch/torc... | https://github.com/pytorch/torchtitan/issues/312 | open | [
"question"
] | 2024-05-06T17:35:15Z | 2024-05-13T13:30:51Z | null | XinDongol |
huggingface/optimum | 1,846 | Low performance of THUDM/chatglm3-6b onnx model | I ran the chatglm3-6b model by exporting it to the ONNX framework using a custom ONNX configuration. Although the functionality is correct, the latency of the model is very high, much higher than that of the PyTorch model.
I have attached a minimal reproducible code which exports and run the model. Can someone take a look into it ... | https://github.com/huggingface/optimum/issues/1846 | open | [
"inference",
"onnxruntime",
"onnx"
] | 2024-05-06T17:18:58Z | 2024-10-14T12:25:29Z | 0 | tuhinp-amd |
pytorch/torchchat | 692 | [LAUNCH BLOCKER] TorchChat results seems less connected than they could have been | For example generating text from the same prompt using llama.cpp and TorchChat produces following results:
```
Hello, my name is **Marcus**, and I am a 33-year-old software developer from California. I have been using the internet for the past 20 years, and I have seen it evolve into a powerful tool for communication... | https://github.com/pytorch/torchchat/issues/692 | closed | [
"launch blocker"
] | 2024-05-06T16:31:38Z | 2024-07-21T22:00:21Z | 9 | malfet |
pytorch/TensorRT | 2,813 | ❓ [Question] How to solve this warning: Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. | ## ❓ Question
I used Torch-TensorRT to compile the torchscript model in C++. When compiling or loading torchtrt model, it displays many warnings.
```
WARNING: [Torch-TensorRT] - Detected this engine is being instantitated in a multi-GPU system with multi-device safe mode disabled. For more on the implications of... | https://github.com/pytorch/TensorRT/issues/2813 | closed | [
"question"
] | 2024-05-06T09:39:02Z | 2024-05-21T17:02:12Z | null | demuxin |
huggingface/dataset-viewer | 2,775 | Support LeRobot datasets? | Currently:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image']
```
eg on https://... | https://github.com/huggingface/dataset-viewer/issues/2775 | open | [
"question",
"feature request",
"dependencies",
"P2"
] | 2024-05-06T09:16:40Z | 2025-07-24T03:36:41Z | null | severo |
huggingface/peft | 1,712 | how to finetune whisper model with 'initial_prompt' | when use 'initial_prompt', the decoding result of finetuning with my data on whisper model v2 is bad, on the contrary, the result is good.
however, when use 'initial_prompt' the decoding result of based whisper model v2 is also good, so it means If want to use 'initial_prompt' during decoding , must add it when t... | https://github.com/huggingface/peft/issues/1712 | closed | [] | 2024-05-06T06:28:20Z | 2024-06-13T15:03:43Z | null | zyb8543d |
pytorch/torchchat | 685 | [PRE-LAUNCH] Test for quantization.md does not work... is attempt to install et when it has already been installed to blame? | https://github.com/pytorch/torchchat/actions/runs/8961642013/job/24609465486?pr=684
As part of the setup for this test, we build and install et. But, et is already installed. Should this pass?
And if not, should it? Are we condemning everybody who re-runs install_et to fail? ... | https://github.com/pytorch/torchchat/issues/685 | closed | [
"bug"
] | 2024-05-05T23:01:07Z | 2024-05-12T20:40:53Z | 1 | mikekgfb |
huggingface/dataspeech | 17 | UnboundLocalError: cannot access local variable 't' where it is not associated with a value """ | ### What I do
Hello. I tried to annotate my own dataset, and I got an error that I don't understand.
I'm a newbie and am generally unable to understand what happened and why it happened.
I am attaching all the materials that I have.
I have a CSV scheme:
| audio | text | speeker_id |
| ------------- | ------... | https://github.com/huggingface/dataspeech/issues/17 | closed | [] | 2024-05-05T20:49:26Z | 2024-05-28T11:31:37Z | null | anioji |
pytorch/vision | 8,409 | Mask r-cnn training runs infinitely without output or error | ### 🐛 Describe the bug
Here’s a brief overview of my process:
1. I generated a dataset using PyTorch by applying the SAM mask from bounding boxes to my images.
2. After creating the dataset, I split it into training and testing sets.
3. I loaded both sets using torch.utils.data.DataLoader.
4. I'm using a pre-traine... | https://github.com/pytorch/vision/issues/8409 | closed | [] | 2024-05-05T11:09:04Z | 2024-05-07T10:48:07Z | 1 | MontassarTn |
pytorch/examples | 1,253 | Drawbacks of making the C++ API look like Python | Thank you for creating a C++ version of Pytorch. However, I wonder if you could create an example that looks like C++ and not like Python?
The [DCGAN sample project](https://github.com/pytorch/examples/blob/main/cpp/dcgan/dcgan.cpp) makes extensive use of ```auto``` so that it can show how it can be made to look and... | https://github.com/pytorch/examples/issues/1253 | closed | [] | 2024-05-04T15:39:22Z | 2024-05-11T09:39:36Z | 10 | dannypike |
pytorch/torchchat | 676 | [PRE-LAUNCH] On some MacOS/xcode version install fails with an error |
This happens in our cloud runners. Does not affect most users, but only those that have certain versions of the Apple linker installed. Do we need to cover this in common problems?
Fixing this may not be a launch blocker, but being intentional about it probably is. | https://github.com/pytorch/torchchat/issues/676 | closed | [
"documentation"
] | 2024-05-04T15:31:19Z | 2024-05-12T20:43:17Z | 4 | mikekgfb |
huggingface/parler-tts | 38 | how to use common voice mozilla dataset train for Parler-TTS | How can I use the Mozilla Common Voice dataset to train Parler-TTS? Can you help me? | https://github.com/huggingface/parler-tts/issues/38 | open | [] | 2024-05-04T12:36:30Z | 2024-05-04T12:36:30Z | null | herbiel |
pytorch/torchchat | 674 | [LAUNCH BLOCKER] Build of ET - Commands from README fail | #670 adds building on macOS for the entire flow but fails very much towards the end of macOS CI.
However the status is reported as green/correct execution. Why, and how do we make it red when it fails?
Building ET fails according to the README logs, with an error we have seen before from the linker:
https://github.c... | https://github.com/pytorch/torchchat/issues/674 | closed | [] | 2024-05-04T10:30:39Z | 2024-05-05T20:27:32Z | 2 | mikekgfb |
pytorch/torchchat | 663 | [PRE-LAUNCH] Why is necessary to disable int8pack_mm with compilation? Is it not working or slow ? |
Curious why we're disabling the int4pack_mm for CPU compilation - are we thinking generated code is more performant? (Then we should document that someplace...) Or is it not working to call this operator from AOTI?
Why not? I thought there was an automatic fallback. @desertfire | https://github.com/pytorch/torchchat/issues/663 | closed | [] | 2024-05-04T03:34:20Z | 2024-05-17T13:08:15Z | 1 | mikekgfb |
pytorch/torchchat | 660 | [LABEL TBD] torchchat redownloads model when rebased? | A few days ago, I played with torchchat as follows (in the context of https://github.com/pytorch/torchchat/issues/621):
`python3 torchchat.py download llama3`
`python3 torchchat.py generate llama3`
Today, I rebased and continued where I left off. In particular, I called the following command:
`python3 tor... | https://github.com/pytorch/torchchat/issues/660 | closed | [] | 2024-05-03T22:01:22Z | 2024-05-06T15:13:30Z | 2 | mergennachin |
huggingface/setfit | 519 | how to optimize setfit inference | Hi,
I'm currently investigating what options we have to optimize setfit inference and have a few questions about it:
- gpu:
- torch compile: https://huggingface.co/docs/transformers/en/perf_torch_compile
is the following the only way to use setfit with torch.compile?
```
model.model_body[0].auto_model =... | https://github.com/huggingface/setfit/issues/519 | closed | [] | 2024-05-03T19:19:21Z | 2024-06-02T20:30:34Z | null | geraldstanje |
huggingface/chat-ui | 1,097 | Katex fails to render math expressions from ChatGPT4. | I am using Chat UI version 0.8.3 and ChatGPT version gpt-4-turbo-2024-04-09.
ChatGPT is outputting formula delimiters as `\[`, `\]`, `\(`, `\)` and katex in the current version of ChatUI is not rendering them correctly. Based on my experiments, katex renders only formulas with `$` delimiters correctly.
I did a qu... | https://github.com/huggingface/chat-ui/issues/1097 | closed | [
"bug",
"help wanted",
"front"
] | 2024-05-03T08:19:40Z | 2024-11-22T12:18:44Z | 5 | haje01 |
huggingface/chat-ui | 1,096 | error in login redirect | I am running chat-ui on an online VPS (Ubuntu 22).
I am stuck at the login redirection.
I went through the Google authorization page, confirmed my Gmail, and was then redirected back to my main domain.
The problem is that it simply goes back with no action, not logged in, and the URL looks like:
mydomain.com/login/callback?state=xxxxxxxxx
when... | https://github.com/huggingface/chat-ui/issues/1096 | open | [
"support"
] | 2024-05-02T22:19:13Z | 2024-05-07T20:50:28Z | 0 | abdalladorrah |
huggingface/trl | 1,614 | How to do fp16 training with PPOTrainer? | I modified the example from the official website to do PPO training with llama3 using lora. When I use fp16, the weights go to nan after the first update, which does not occur when using fp32.
Here is the code
```python
# 0. imports
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
fro... | https://github.com/huggingface/trl/issues/1614 | closed | [] | 2024-05-02T17:52:16Z | 2024-11-18T08:28:08Z | null | KwanWaiChung |
huggingface/optimum | 1,843 | Support for speech to text models. | ### Feature request
Hi, it would be really useful if speech to text models could be supported by optimum, specifically to ONNX. I saw a repo that managed to do it and they claimed they used optimum to do it.
https://huggingface.co/Xenova/speecht5_tts
Is there a way to do this?
### Motivation
I am finding it ve... | https://github.com/huggingface/optimum/issues/1843 | open | [
"feature-request",
"onnx"
] | 2024-05-02T11:43:49Z | 2024-10-14T12:25:52Z | 0 | JamesBowerXanda |
huggingface/datasets | 6,854 | Wrong example of usage when config name is missing for community script-datasets | As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the example of usage shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name i... | https://github.com/huggingface/datasets/issues/6854 | closed | [
"bug"
] | 2024-05-02T06:59:39Z | 2024-05-03T15:51:59Z | 0 | albertvillanova |
pytorch/xla | 7,014 | Export debug information to StableHLO | ## ❓ Questions and Help
Hi team, the debugging information is lost during `exported_program_to_stablehlo`, is there a way to export this information?
For example, `torch.export` generates file and line number for each op,
```python
import torch
import torch.nn as nn
from torch_xla.stablehlo import exported_pr... | https://github.com/pytorch/xla/issues/7014 | closed | [
"stablehlo"
] | 2024-05-01T21:27:11Z | 2024-05-14T16:45:17Z | 11 | thong3le |
huggingface/distil-whisper | 130 | How to set the target language for examples in README? | The code examples in the README do not make it obvious how to set the language of the audio to transcribe.
The default settings create garbled English text if the audio language is different. | https://github.com/huggingface/distil-whisper/issues/130 | open | [] | 2024-05-01T11:52:00Z | 2024-05-22T11:59:09Z | null | clstaudt |
huggingface/transformers | 30,596 | AutoModel: how to enable TP for extremely large models? | Hi, I have 8 V100s, but a single one cannot fit the InternVL1.5 model, which has 28B parameters.
So I just wonder if I can fit it across all 8 V100s with TP?
I found that Deepspeed can be used to do tensor parallel like this:
```
# create the model
if args.pre_load_checkpoint:
model = model_class.fro... | https://github.com/huggingface/transformers/issues/30596 | closed | [] | 2024-05-01T10:06:45Z | 2024-06-09T08:03:23Z | null | MonolithFoundation |
huggingface/transformers | 30,595 | I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed; the theory says model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code where the transformers model is wrapped by DDP. Where is the DeepSpeed wrapping? Thanks ^-^ | ### System Info
I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code where the transformers model is wrapped by DDP. Where is the DeepSpeed wrapping? Thanks ^-^
### Who can help?
i cannot... | https://github.com/huggingface/transformers/issues/30595 | closed | [] | 2024-05-01T09:17:58Z | 2024-05-01T09:31:39Z | null | ldh127 |
huggingface/transformers.js | 732 | What does "Error: failed to call OrtRun(). error code = 6." mean? I know it is ONNX related, but how to fix? | ### Question
I keep running into the same issue when using transformers.js Automatic Speech Recognition pipeline. I've tried solving it multiple ways. But pretty much hit a wall every time. I've done lots of googling, LLMs, and used my prior knowledge of how this stuff functions in python. But I can't seem to get it t... | https://github.com/huggingface/transformers.js/issues/732 | closed | [
"question"
] | 2024-05-01T07:01:06Z | 2024-05-11T09:18:35Z | null | jquintanilla4 |
huggingface/transformers | 30,591 | I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed; the theory says model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code where the transformers model is wrapped by DDP. Where is the DeepSpeed wrapping? Thanks ^-^ | ### Feature request
I cannot find the code where the transformers Trainer's model_wrapped is wrapped by DeepSpeed. I can find the theory that model_wrapped is DDP(Deepspeed(transformer model)), but I only find the code where the transformers model is wrapped by DDP. Where is the DeepSpeed wrapping? Thanks ^-^
### Motivation
... | https://github.com/huggingface/transformers/issues/30591 | closed | [] | 2024-05-01T04:27:47Z | 2024-06-08T08:03:17Z | null | ldh127 |
huggingface/chat-ui | 1,093 | I want to get the html of a website https://bit.ly/4bgmLb9 in huggingchat web search | I want to get the HTML of a website, https://bit.ly/4bgmLb9, in hugging-chat web search. In Chrome, I can put https://bit.ly/4bgmLb9 in the address bar and get the result. But I do not know how to do that in hugging-chat web search.
I tried in hugging-chat; see the screenshot:
 with PEFT method, I use lora、loha and lokr for PEFT in [diffuser](https://github.com/huggingface/diffusers).
I have a question: how do I convert a loha safetensor trained with diffusers to webui format?
In the training process:
the loading way:
`peft_config =... | https://github.com/huggingface/peft/issues/1693 | closed | [] | 2024-04-30T07:17:48Z | 2024-06-08T15:03:44Z | null | JIAOJIAYUASD |
pytorch/torchchat | 579 | [User Experience] User does not know what is expected by prompts | @ali-khosh user report:
I’m being asked “Do you want to enter a system prompt? Enter y for yes and anything else for no.” not sure what this means. When I hit yes, it asks “what is your system prompt?” still don’t know what that means. I entered “hello my name is” and it’s now asking me for “User:” no clue what that... | https://github.com/pytorch/torchchat/issues/579 | open | [] | 2024-04-30T06:39:23Z | 2024-04-30T06:39:50Z | null | mikekgfb |
pytorch/torchchat | 575 | unimplemented operators - workarounds and long term perspective | Today users have to set PYTORCH_ENABLE_MPS_FALLBACK=1 when they call torchchat if they want to use _weight_int4pack_mm. Can we set that automatically from inside the program? This is a crude workaround; maybe we can get an implementation of _weight_int4pack_mm for MPS? (This would also be goodness for mobile.)
| https://github.com/pytorch/torchchat/issues/575 | open | [] | 2024-04-30T05:58:13Z | 2024-07-30T20:44:26Z | 0 | mikekgfb |
pytorch/torchchat | 565 | [LAUNCH BLOCKER] Llama3 8B Instruct model hangs on chat | (.venv) (base) mikekg@mikekg-mbp torchchat % # Llama 3 8B Instruct
python3 torchchat.py chat llama3
zsh: command not found: #
Using device=cpu Apple M1 Max
Loading model...
Time to load model: 10.23 seconds
Entering Chat Mode. Will continue chatting back and forth with the language model until the models max cont... | https://github.com/pytorch/torchchat/issues/565 | closed | [] | 2024-04-29T22:15:12Z | 2024-04-29T22:42:26Z | 2 | mikekgfb |
pytorch/torchchat | 561 | [FEATURE REQUEST] raise connection error fails download / we don't offer. plan b, or a way to resume | so, does this have a common error instruction? Should we tell people to download another model if they can’t get Meta approval, or there’s an error like in my case?
Also, this engineer having been on the slow end of a pipe before.... are there any instructions on how to resume a failed download that's, say, frustrating...
huggingface/safetensors | 474 | How to fully load checkpointed weights in memory? | ### System Info
- `transformers` version: 4.40.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- ... | https://github.com/huggingface/safetensors/issues/474 | closed | [] | 2024-04-29T21:30:37Z | 2024-04-30T22:12:29Z | null | goelayu |
pytorch/data | 1,247 | [StatefulDataLoader] macOS tests are too slow | ### 🐛 Describe the bug
test_state_dict is very slow on macOS (and slows down CI), likely because of macOS default multiprocessing_context being spawn instead of fork. The StatefulDataLoader tests on macOS take ~1.5 hours, vs 10 minutes on Linux and Windows.
Example of test-runtimes on my local mac:
<img widt... | https://github.com/meta-pytorch/data/issues/1247 | closed | [
"stateful_dataloader"
] | 2024-04-29T18:10:35Z | 2024-04-30T19:11:57Z | 0 | andrewkho |
huggingface/dataset-viewer | 2,754 | Return partial dataset-hub-cache instead of error? | `dataset-hub-cache` depends on multiple previous steps, and any error in one of them makes it fail. It provokes things like https://github.com/huggingface/moon-landing/issues/9799 (internal): in the datasets list, a dataset is not marked as "supporting the dataset viewer", whereas the only issue is that we didn't manag... | https://github.com/huggingface/dataset-viewer/issues/2754 | closed | [
"question",
"P2"
] | 2024-04-29T17:10:09Z | 2024-06-13T13:57:20Z | null | severo |
pytorch/torchchat | 549 | [CI] add dtype tests for runner-aoti and runner-et |
We are reverting #539, which added more dtype tests for runner-aoti + runner-et,
because of failures; there's no point in having failing tests. That being said, we should figure out which ones should work, and if they don't today, how to make them work.
pytorch/torchchat | 547 | Can we make sure native runner binary commands in README work directly as written? | It would be great if
```
cmake-out/aoti_run model.so -z tokenizer.model -l 3 -i "Once upon a time"
```
and
```
cmake-out/et_run llama3.pte -z tokenizer.model -l 3 -i "Once upon a time"
```
were changed to include a known location of a model.so and tokenizer.model file. For example, include download and ... | https://github.com/pytorch/torchchat/issues/547 | closed | [] | 2024-04-29T15:33:15Z | 2024-05-12T21:03:08Z | 1 | orionr |
pytorch/torchchat | 546 | Move legal disclaimer down to license section? | I think we can move
Disclaimer: The torchchat Repository Content is provided without any guarantees about performance or compatibility. In particular, torchchat makes available model architectures written in Python for PyTorch that may not perform in the same manner or meet the same standards as the original version... | https://github.com/pytorch/torchchat/issues/546 | closed | [] | 2024-04-29T15:29:37Z | 2024-05-12T21:06:46Z | 1 | orionr |
huggingface/datasets | 6,848 | Can't Download Common Voice 17.0 hy-AM | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/pyth... | https://github.com/huggingface/datasets/issues/6848 | open | [] | 2024-04-29T10:06:02Z | 2025-04-01T20:48:09Z | 3 | mheryerznkanyan |
huggingface/optimum | 1,839 | why does ORTModelForCausalLM assume new input length is 1 when past_key_values is passed | https://github.com/huggingface/optimum/blob/c55f8824f58db1a2f1cfc7879451b4743b8f206b/optimum/onnxruntime/modeling_decoder.py#L649
``` python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
if past_key_values is not None:
past_length = past_key_values[0][... | https://github.com/huggingface/optimum/issues/1839 | open | [
"question",
"onnxruntime"
] | 2024-04-29T07:06:04Z | 2024-10-14T12:28:51Z | null | cyh-ustc |
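For reference, the column note above the table maps directly onto the `datasets` library's feature types, so rows like these can be loaded and queried programmatically. Below is a minimal sketch, assuming the dataset is hosted on the Hugging Face Hub; the repository id `user/github-issues` is a hypothetical placeholder, since the actual dataset name is not shown on this page.

```python
# Minimal sketch, assuming a Hub-hosted dataset with the columns shown above:
# repo, number, title, body, url, state, labels, created_at, updated_at,
# comments, user. The id "user/github-issues" is hypothetical.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")

# Keep only open issues from a single repository; per the schema note,
# `state` is a string with two classes ("open"/"closed").
open_xla = ds.filter(lambda row: row["repo"] == "pytorch/xla" and row["state"] == "open")

# Print the number, title, and label list of the first few matches.
for row in open_xla.select(range(min(3, len(open_xla)))):
    print(row["number"], row["title"], row["labels"])
```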