| repo (string, 147 distinct) | number (int64, 1 to 172k) | title (string, 2 to 476 chars) | body (string, 0 to 5k chars) | url (string, 39 to 70 chars) | state (string, 2 classes) | labels (list, 0 to 9 items) | created_at (timestamp[ns, UTC], 2017-01-18 18:50:08 to 2026-01-06 07:33:18) | updated_at (timestamp[ns, UTC], 2017-01-18 19:20:07 to 2026-01-06 08:03:39) | comments (int64, 0 to 58, nullable) | user (string, 2 to 28 chars) |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/torchx | 798 | Combine / rename `dist.ddp` and `dist.spmd` into `dist.torchrun` | ## Description
Currently, `dist.ddp` and `dist.spmd` are basically identical (the latter being a lightweight wrapper on the former). Also, they could be named more explicitly — `dist.ddp` doesn't actually involve Distributed Data Parallel, it just calls `torchrun`.
## Motivation/Background
<!-- why is this feature... | https://github.com/meta-pytorch/torchx/issues/798 | open | [] | 2023-12-08T21:23:31Z | 2023-12-08T21:31:54Z | 0 | schmidt-ai |
huggingface/tokenizers | 1,410 | How to create Tokenizer.json? | I have this tokenizer and I want to convert it to **tokenizer.json** format.
- added_tokens.json
- normalizer.json
- special_tokens_map.json
- config.json
- preprocessor_config.json
- vocab.json
- merges.txt
- pytorch_model.bin
Is it possible to replace my tokenizer data wit... | https://github.com/huggingface/tokenizers/issues/1410 | closed | [
"Stale"
] | 2023-12-08T09:41:18Z | 2024-01-14T01:52:39Z | null | kenaii |
huggingface/optimum | 1,577 | Support the ORT of the Stable Diffusion XL inpaint model | ### Feature request
Hi all.
We would like to convert the stable-diffusion-xl-inpaint model below to ONNX and run it using ORT. The conversion to ONNX went well using Optimum's cli, but there doesn't seem to be a Python class for ORT inference.
https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting... | https://github.com/huggingface/optimum/issues/1577 | closed | [
"feature-request",
"Stale"
] | 2023-12-08T09:21:06Z | 2025-02-19T02:02:54Z | 2 | 0-chan-kor |
huggingface/chat-ui | 617 | Does Chat-UI support multithreading? | Maybe it depends on node.js, but I want to know the CPU utilization. | https://github.com/huggingface/chat-ui/issues/617 | closed | [
"question"
] | 2023-12-08T05:36:18Z | 2023-12-14T07:30:01Z | null | calycekr |
huggingface/chat-ui | 615 | npm run error (latest git pull) | I created a .env.local as:
```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MONGODB_DIRECT_CONNECTION=false
COOKIE_NAME=hf-chat
HF_TOKEN=
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENAI_API_KEY=
```
Then I tried:
```
npm install #everything went fine
npm run dev -- --hos... | https://github.com/huggingface/chat-ui/issues/615 | closed | [
"support"
] | 2023-12-07T10:59:53Z | 2024-04-24T12:29:46Z | 4 | shuther |
huggingface/chat-ui | 614 | Docker build - multiple errors - documentation | I can't find documentation to build it myself; so I tried:
`docker-compose build up`
But I got multiple errors, among them:
> chat-ui/.env: line 23: unexpected character "\"" in variable name "\"PROVIDER_URL\": \"\","
Even `source .env` returned multiple errors; I tried to change the ` into a ' with no luck.
My go... | https://github.com/huggingface/chat-ui/issues/614 | open | [
"support"
] | 2023-12-07T10:55:04Z | 2024-06-01T12:44:18Z | 4 | shuther |
huggingface/text-generation-inference | 1,318 | how to run tgi installed locally without any UI | ### System Info
how to run tgi installed locally without any UI?
pip install text-generation , giving error: ERROR: No matching distribution found for text-generation
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
... | https://github.com/huggingface/text-generation-inference/issues/1318 | closed | [
"Stale"
] | 2023-12-07T08:47:13Z | 2024-01-13T01:46:40Z | null | poojitharamachandra |
huggingface/autotrain-advanced | 376 | How to Autotrain a Seq2Seq? | Hi everyone, I'm trying to finetune Helsinki-NLP/opus-mt-tc-big-ar-en on the local Arabic of Morocco, which is called Daraija Arabic. The problem is that I'm unable to use Autotrain; I keep getting a 500 error code
.
## What I am trying to do
I'm trying to create a tokenizer... | https://github.com/huggingface/tokenizers/issues/1407 | open | [
"bytefallback",
"Feature Request"
] | 2023-12-06T09:03:35Z | 2024-08-27T01:57:04Z | null | dinhanhx |
huggingface/transformers.js | 432 | Cannot download the model from huggingface | Because of network issues, we cannot download the model successfully when using transformers.js.
How can we set a network proxy for the model download?
| https://github.com/huggingface/transformers.js/issues/432 | open | [
"question"
] | 2023-12-06T08:18:58Z | 2023-12-10T13:42:50Z | null | wujohns |
huggingface/blog | 1,677 | how to achieve image-text matching of BLIP2 | Hi, Thanks to the authors for the works.
I am trying to achieve image-text matching of BLIP2, but I didn't find any examples of that. Can you give me some help or tips? | https://github.com/huggingface/blog/issues/1677 | open | [] | 2023-12-06T07:03:21Z | 2023-12-06T07:08:48Z | null | wkqun555 |
pytorch/kineto | 847 | How does kineto work actually? | Hello, everyone.
I took a quick look at the source code of kineto and it seems the most important part of kineto is [CUPTI](https://docs.nvidia.com/cupti/r_main.html#r_main). I am curious how kineto works, and I have tried some examples of CUPTI. I have some questions and hope someone could give me some insights.
1. ... | https://github.com/pytorch/kineto/issues/847 | closed | [
"documentation",
"question"
] | 2023-12-06T06:48:28Z | 2023-12-28T16:46:47Z | null | stricklandye |
huggingface/diffusers | 6,070 | How to overload existing class in diffusers | That's just for personal development. I want to write a new class inherited from existing class (e.g. `ControlNetModel`) and I added some new parameters to `__init__` function, but found that the `__init__` function is still the parent's implementation, whether to add the decorator `register_to_config` or not.
Hope ... | https://github.com/huggingface/diffusers/issues/6070 | closed | [] | 2023-12-06T06:41:44Z | 2024-09-25T14:44:04Z | null | OrangeSodahub |
huggingface/diffusers | 6,067 | How to run the fine_tuned model? | Hi all,
I used the instructions given [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) to fine_tune the model on dog pictures (as explained in the link).
The fine_tuning has finished, and a folder called path-to-save-model has been created (that has the weights of the model). Now how d... | https://github.com/huggingface/diffusers/issues/6067 | closed | [] | 2023-12-06T01:01:56Z | 2025-04-28T10:32:33Z | null | alireza18878 |
huggingface/text-generation-inference | 1,314 | What is the default tokenizer behaviour? | ### System Info
N/A
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
I'm trying to understand whether special tokens (i.e. BOS and EOS) are added and suppressed on tokenization and decoding.
Encoding:
- I searched ... | https://github.com/huggingface/text-generation-inference/issues/1314 | closed | [] | 2023-12-05T17:35:05Z | 2024-01-19T13:14:13Z | null | RonanKMcGovern |
huggingface/chat-ui | 609 | [Feature Request] Uploading PDFs/Text Files/Images? | I love the search function and it makes the chat feel so much more accurate! I use it mainly as a direct ChatGPT replacement, using code models when needed or normal models for chat.
Can we have the option to upload images/pdfs/other files to the chat? the images could be integrated by clip/blip, and the PDF or text ... | https://github.com/huggingface/chat-ui/issues/609 | open | [] | 2023-12-05T12:20:39Z | 2024-10-04T01:13:18Z | 3 | iChristGit |
huggingface/trl | 1,059 | How can I have the evaluation pass in only the response to a prompted/instructed generation into the metric. | I have created the following metric:
```py
class MyCustomMetric(Metric):
def _info(self):
# Returns the MetricInfo that defines the name, description, etc.
return datasets.MetricInfo(
# This should be a short description of your metric.
description="_DESCRIPTION",
... | https://github.com/huggingface/trl/issues/1059 | closed | [] | 2023-12-04T19:01:34Z | 2024-01-12T15:05:10Z | null | CakeCrusher |
huggingface/distil-whisper | 49 | How to make training data? | I have a folder like this:
audio_1
transcript_1.txt
audio_2
transcript_2.txt
how can I make this folder into huggingface dataset? | https://github.com/huggingface/distil-whisper/issues/49 | open | [] | 2023-12-04T18:44:40Z | 2023-12-12T16:51:48Z | null | satani99 |
pytorch/audio | 3,711 | _pickle.UnpicklingError: invalid load key, 'v'. | ### 🐛 Describe the bug
### ISSUE
When I run
`python preprocess_lrs3.py --data-dir=D:/BaiduNetdiskDownload/LRS3 --detector=retinaface --dataset=lrs3 --root-dir=D:/pycharmProject/audio_vision/audio-main/examples/avsr/predata --subset=test --seg-duration=16 --groups=4 --job-index=0`
The following appears
`D:\anacon... | https://github.com/pytorch/audio/issues/3711 | open | [] | 2023-12-04T15:32:55Z | 2024-11-12T15:06:54Z | 1 | YuQing2000 |
pytorch/xla | 6,015 | Kaggle TPU Finetuning Roberta Help | ## ❓ Questions and Help
I have pretrained roberta-base on dna promoter sequences of plants (working on a project). I am currently trying to finetune it on a downstream task of predicting gene expression values, basically a list of 8 values (corresponding to various tissues) from a single promoter sequence.
This wa... | https://github.com/pytorch/xla/issues/6015 | open | [
"question",
"performance",
"xla:tpu"
] | 2023-12-04T14:07:43Z | 2025-04-24T14:56:25Z | null | gurveervirk |
pytorch/xla | 6,014 | How to add a new third-party Backend | ## ❓ Questions and Help
1 We see PyTorch/XLA now pulls XLA from OpenXLA, is that means we just need to adapt OpenXLA to add a new backend?
2 Will collective operations work with third-party backend?
| https://github.com/pytorch/xla/issues/6014 | closed | [] | 2023-12-04T10:10:18Z | 2023-12-28T22:31:11Z | null | dinghaodhd |
huggingface/computer-vision-course | 77 | Issue with rendering the course | If we try to render the course to preview how our added content looks, it throws the following error
```bash
sarthak@kde:~/Desktop/computer-vision-course$ doc-builder preview computer-vision-course chapters/ --not_python_module
Initial build docs for computer-vision-course chapters/ /tmp/tmp0uqdjoxf/computer-vi... | https://github.com/huggingface/computer-vision-course/issues/77 | open | [
"question"
] | 2023-12-04T01:02:22Z | 2023-12-08T18:17:19Z | null | sarthak247 |
huggingface/sentence-transformers | 2,363 | How to retrieve the epoch of the saved model from model.save ? | Hi,
Thank you for the repo.
Can anyone help me with retrieving the epoch of the saved model, in both cases where save_best_model=True and save_best_model=False?
Thank you
```
model.fit(train_objectives=[(train_dataloader, train_loss)],
evaluator=evaluator,
epochs=num_epochs,
... | https://github.com/huggingface/sentence-transformers/issues/2363 | closed | [] | 2023-12-02T15:25:52Z | 2024-01-09T22:16:20Z | null | gowrijsuria |
huggingface/transformers.js | 426 | [Question] feature-extraction discrepancies across different platforms | I'm observing discrepancies in feature-extraction results across different platforms. Here's the code:
```js
import { pipeline, env } from '@xenova/transformers'
const extractor = await pipeline('feature-extraction', 'Xenova/gte-small', {
quantized: false,
cache_dir: './.cache',
local_files_only: false,... | https://github.com/huggingface/transformers.js/issues/426 | closed | [
"question"
] | 2023-12-01T17:12:04Z | 2023-12-05T18:51:03Z | null | devfacet |
pytorch/xla | 5,959 | How to convert PyTorch NCHW to XLA HWOI format? (help) | ## ❓ Questions and Help
I have a request to make the pytorch input model in NCHW format by default, and convert it to HWOI format during the training process, which is conducive to hardware processing data. I wonder if there is a way to uniformly convert this model to the HWOI format when it is sent to XLA. In additio... | https://github.com/pytorch/xla/issues/5959 | open | [
"question"
] | 2023-12-01T06:18:20Z | 2025-04-28T11:44:59Z | null | ckfgihub |
pytorch/serve | 2,814 | [question] How to properly handle client request cancelation during inference? | Hey all,
My model's inference is quite long-running (around 50 seconds per request), so it would be great if closed client connections are handled properly by interrupting the inference that's currently in progress. I'm currently implementing `initialize`, `preprocess`, `inference` and `postprocess` methods in my cu... | https://github.com/pytorch/serve/issues/2814 | closed | [] | 2023-11-30T18:34:49Z | 2024-03-20T22:14:27Z | null | miroslavLalev |
huggingface/chat-ui | 604 | "Invalid State: Controller is already closed" error when trying to use chat-ui locally with llama.cpp | HELP NEEDED
**What is the issue?**
Not able to use chat-ui locally to get the response back when using the llama.cpp as a server.
I can load the chat-ui after installing it via npm install and npm run dev. The env.local file is also configured and UI allows to send the request. However, the response never comes ba... | https://github.com/huggingface/chat-ui/issues/604 | closed | [] | 2023-11-30T16:42:06Z | 2023-11-30T17:41:19Z | 1 | ManasInd |
huggingface/optimum | 1,556 | RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually. | ### System Info
windows 10 - ryzen 3600x - 16 gb ddr4-3000 - python 3.10 - latest optimum inside a venv
### Who can help?
_No response_
### Information
When I try to convert a model to openvino using
optimum-cli export openvino -m "d:\sdxl\LCMphoton" "d:\sdxl\LCMphotonov"
I have this error :
Ru... | https://github.com/huggingface/optimum/issues/1556 | closed | [
"bug"
] | 2023-11-30T16:09:24Z | 2023-12-09T22:37:44Z | 2 | patientx |
pytorch/xla | 5,953 | xla NCHW to HWOI | ## ❓ Questions and Help
Is there a simple way to modify the tensor layout (NCHW) in the entire xla computation graph to convert it to HWOI format, and continue to convert it to NCHW format when it is returned to torch? If there is no simple and unified modification method, how can we change it? For example, modifying ... | https://github.com/pytorch/xla/issues/5953 | closed | [
"duplicate",
"question"
] | 2023-11-30T06:59:00Z | 2025-04-28T11:53:34Z | null | ckfgihub |
pytorch/executorch | 1,313 | How to run the pte model on GPU | Hello,
I would like to know if ExecuTorch supports GPU.
Now I could export model into pte format and execute runtime for xnnpack backend in Intel device.
The device has GPU.
But when I check GPU usage while running the application, GPU wasn't utilized.
If ExecuTorch supports GPU, can you please share me how to... | https://github.com/pytorch/executorch/issues/1313 | closed | [
"need-user-input"
] | 2023-11-30T05:19:12Z | 2023-12-14T23:54:25Z | null | EarthMu |
huggingface/safetensors | 396 | [Feature request] How about support async save to disk? | ### Feature request
How about support async save to disk?
### Motivation
The weights or optimizer states are very large for LLMs, so moving tensors from CPU to disk wastes a lot of time.
If we can support async save to disk, it will be very helpful.
### Your contribution
. | https://github.com/huggingface/safetensors/issues/396 | closed | [
"Stale"
] | 2023-11-30T02:55:25Z | 2024-02-13T01:46:40Z | null | ZHUI |
pytorch/pytorch | 114,822 | convert to onnx with the dynamic shape and onnx convert to tensorrt, but couldn't get the dynamic engine of tensorrt. dims.d[0]==1 !!! what is wrong with the model??? please give me some help. thanks | ### 🐛 Describe the bug
- convert my model to onnx with the dynamic shape and onnx convert to tensorrt, but couldn't get the dynamic engine of tensorrt. dims.d[0]==1 !!! but when i converted yolov8 model to onnx, then onnx convert to tensorrt, i got dims.d[0] == -1 and it worked well. what is wrong with the model??...
huggingface/transformers.js | 424 | [Question] Batch inference for vit | It seems like all the tests in the repository related to processors and image models use one image per input.
1. Do the models support feeding a batch of images as input during inference? Is there a speed benefit from this?
2. Are there any other optimization/parallelization tools in transformers.js that I can use ... | https://github.com/huggingface/transformers.js/issues/424 | closed | [
"question"
] | 2023-11-29T09:52:16Z | 2023-12-05T14:49:36Z | null | arseniymerkulov |
huggingface/transformers | 27,755 | How to inference the model with 200k length context | ### Model description
I want to test Yi-34B-200k, Although I ran through the model, as the context length increased, OOM appeared, and I wondered how I could test to 200k context length with sufficient GPU resources.
### Open source status
- [X] The model implementation is available
- [X] The model weights are avai... | https://github.com/huggingface/transformers/issues/27755 | closed | [] | 2023-11-29T07:37:06Z | 2024-05-24T07:24:56Z | null | taishan1994 |
huggingface/transformers.js | 423 | Not able to load local classification onnx model | Was trying to follow the instruction of this page to load local custom model, but failed to find local path https://huggingface.co/docs/transformers.js/custom_usage
the code snippet
`
import { env, AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';
env.useFS = true;
env.localModel... | https://github.com/huggingface/transformers.js/issues/423 | closed | [
"question"
] | 2023-11-29T06:40:09Z | 2023-11-30T07:27:27Z | null | purezhanghan |
huggingface/chat-ui | 594 | TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed | I use the latest main version and I get an error when making a chat; in the GUI it shows "Sorry, something went wrong. Please try again."
TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
at new NodeError (node:internal/errors:405:5)
at ReadableStreamDefaultController.enqueue (node:inte... | https://github.com/huggingface/chat-ui/issues/594 | closed | [
"support"
] | 2023-11-29T04:28:27Z | 2024-06-17T12:48:45Z | 18 | AlexBlack2202 |
huggingface/chat-ui | 593 | Show image in chat box | Can I show an image via an HTTP link in the chat box? | https://github.com/huggingface/chat-ui/issues/593 | open | [
"support"
] | 2023-11-29T03:17:17Z | 2023-11-30T17:57:32Z | 3 | ntqnhanguyen |
pytorch/text | 2,217 | how to run this code | ## How to run this code
I need a `pip list` (dependency list) to run this code | https://github.com/pytorch/text/issues/2217 | open | [] | 2023-11-29T02:15:16Z | 2024-08-05T12:51:43Z | null | ygqrc |
huggingface/optimum | 1,554 | ORT Models Failing because of the latest fsdp changes on transformers Trainer. | ### System Info
```shell
optimum from source
transformers from source
```
### Who can help?
@JingyaHuang
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset... | https://github.com/huggingface/optimum/issues/1554 | closed | [
"bug"
] | 2023-11-28T20:22:40Z | 2023-12-26T18:15:02Z | 6 | AdamLouly |
huggingface/chat-ui | 592 | Authentication Doc and Code may be out-of-date/not working | ## Description
Hello,
Following the doc in the `README`: https://github.com/huggingface/chat-ui#basic-and-bearer. The UI should support (if setup in the `.env.local` file) `Basic` and `Bearer` authentication, however, what I noticed since the requests have been moved to the `huggingface` module is that the author... | https://github.com/huggingface/chat-ui/issues/592 | open | [
"bug",
"documentation",
"back"
] | 2023-11-28T18:50:15Z | 2023-11-29T13:29:22Z | 1 | muscionig |
huggingface/transformers.js | 421 | [Question] FeatureExtractionPipeline input length | @xenova : First of all thank you so much for your amazing work with this open source library. It opens up many possibilities.
One thing that caught my attention which is [FeatureExtractionPipeline](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline) can accept any am... | https://github.com/huggingface/transformers.js/issues/421 | closed | [
"question"
] | 2023-11-28T17:28:28Z | 2023-12-02T11:20:52Z | null | devfacet |
huggingface/sentence-transformers | 2,361 | How to divide long texts into chunks using sentence-transformers? | Hello, I encounter the issue of my texts exceeding the maximum lengths allowed by pretrained models. So I intend to divide my texts into smaller chunks and then calculate the average embeddings over them.
However, I find this process is not as straightforward as I initially thought.
In order to properly chunk th... | https://github.com/huggingface/sentence-transformers/issues/2361 | closed | [] | 2023-11-28T16:35:44Z | 2023-12-25T12:38:42Z | null | srhouyu |
huggingface/alignment-handbook | 56 | Why does the alignment-handbook account for user & system Inputs in loss calculation | I noticed that the alignment-handbook doesn't ignore the loss calculated from both the user and system inputs. Based on my knowledge, many SFT setups choose to ignore these. I'm curious about the reasoning behind this difference. | https://github.com/huggingface/alignment-handbook/issues/56 | open | [] | 2023-11-28T06:03:53Z | 2024-05-30T07:45:29Z | 3 | xffxff |
huggingface/transformers | 27,737 | How to save the generated output of BarkModel to an npz file? | Hello there!
I'm using the BarkModel from Hugging Face Transformers and I'm wondering how to save the generated results to an npz file. I'd like to use these saved results as history prompts for the next generation.
In the [suno-ai/bark](https://github.com/suno-ai/bark) , when using the [`semantic_to_waveform`](h... | https://github.com/huggingface/transformers/issues/27737 | closed | [] | 2023-11-28T03:55:19Z | 2024-01-10T08:03:57Z | null | chet-chen |
huggingface/alignment-handbook | 55 | Running on single GPU(16GB) | Hi,
What is the best way to run this on my high performance laptop?
Should this somehow work? Can i calculate how many days/weeks it will run?
Thanks in advance
Specs:
> OS: Win 11 (WSL2)
> CPU: Intel Core i7 12850HX
> Make: Lenovo Thinkpad P16 gen 1
> Memory: 128GB DDR5-4800 (2400MHz)
> GPU: Nvidia ... | https://github.com/huggingface/alignment-handbook/issues/55 | open | [] | 2023-11-27T19:50:12Z | 2023-12-13T14:58:31Z | 1 | patchie |
huggingface/chat-ui | 588 | Hallucinations when using web search | I have tried to run a mistral model with the search api but the web results don't seem to be making it to the model.
I'm hosting the model through text-gen-webui and encountering the exact same issue as #571.
I've given it a go with [openhermes-2.5-mistral-7b.Q5_K_M.gguf](https://imgur.com/a/HQV1lGD), [it seems ... | https://github.com/huggingface/chat-ui/issues/588 | open | [
"support",
"websearch"
] | 2023-11-27T17:12:22Z | 2023-12-27T21:25:42Z | 2 | NasonZ |
huggingface/chat-ui | 587 | How do I format the ChatPromptTemplate ? | I currently have a working setup with llamacpp+mistral 7b instruct with the following loca.env :
```
MODELS=`[
{
"name": "Mistral",
"chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#i... | https://github.com/huggingface/chat-ui/issues/587 | open | [
"support",
"models"
] | 2023-11-27T15:21:17Z | 2023-12-19T07:21:50Z | 5 | iChristGit |
huggingface/candle | 1,379 | Help request: How to compile CUDA kernels with `cc-rs`? | Hello everybody,
In the process of adding PagedAttention to candle-vllm, I need to compile some CUDA kernels. I am currently trying to use `cc-rs` in a `build.rs` to automatically build the kernels. However, I am not making much progress as I have run into issues that seem to be tied to the build stage.
I would r... | https://github.com/huggingface/candle/issues/1379 | closed | [] | 2023-11-27T14:32:10Z | 2023-11-27T20:57:11Z | null | EricLBuehler |
huggingface/transformers | 27,726 | How to load PixArtAlphaPipeline in 8bit? | I know there is an example but I couldn't make it work. I am trying to make an auto installer and Gradio interface for the PixArt Alpha Pipeline so common people can install and use it on their Windows PCs.
Currently my code below is working, and I want to make it load in 8-bit; is that possible?
```
if torch.cuda.is_available... | https://github.com/huggingface/transformers/issues/27726 | closed | [] | 2023-11-27T11:36:44Z | 2024-01-05T08:03:56Z | null | FurkanGozukara |
huggingface/diffusers | 5,942 | How to prepare dataset for text-guided image to image generation | As the title suggests, I want to use stable diffusion to fine-tune my own dataset. How should I build it? I have tried:
--input_image
--xx.jpg
--xx.jpg
--output_image
--yy.jpg
--yy.jpg
metadata.csv
but it didn't work, can anybody help? | https://github.com/huggingface/diffusers/issues/5942 | closed | [
"stale"
] | 2023-11-27T06:58:57Z | 2024-01-09T15:06:12Z | null | feelme0461 |
huggingface/alignment-handbook | 52 | What about the system prompt? | It seems that the system prompt is left to be `\n` or rather blank.
Inspecting UltraChat (https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k?row=5), seems that no system prompt is added to the dataset.
There must be something that I missed in regards to addition of system prompts to the dataset for tra... | https://github.com/huggingface/alignment-handbook/issues/52 | open | [] | 2023-11-27T02:55:38Z | 2023-11-27T02:55:38Z | 0 | timothylimyl |
huggingface/alignment-handbook | 50 | What is the expected "global batch size"? | In the recipes README there is this statement:
> If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant (and thus replicate our results).
Q: What is the expected "global batch size"?
For ex... | https://github.com/huggingface/alignment-handbook/issues/50 | closed | [] | 2023-11-26T21:47:41Z | 2023-11-27T04:14:22Z | null | ohmeow |
huggingface/transformers.js | 417 | [Question] Any examples of processing video frames of a user uploaded video (specifically for depth estimation)? | Hi there, I'm wondering if there are any examples of processing video frames of a user uploaded video? I'm specifically looking to run depth estimation on each frame of a short video, but any similar example would be useful.
If not, does this approach seem correct?
* Use one of the approaches described [here](https... | https://github.com/huggingface/transformers.js/issues/417 | open | [
"question"
] | 2023-11-26T09:18:04Z | 2023-12-10T22:51:18Z | null | jparismorgan |
huggingface/chat-ui | 583 | Option to share the web interface locally/online ? | I wish we could make the ui available on phone/mac or even outside the local network.
For example in SillyTavern (https://github.com/SillyTavern/SillyTavern)
You can either open it up to all devices in the local network or open a cloudflare tunnel to access it through a link.
Is that possible to add? | https://github.com/huggingface/chat-ui/issues/583 | open | [
"enhancement",
"back"
] | 2023-11-26T00:44:08Z | 2024-04-22T16:45:44Z | 2 | iChristGit |
huggingface/candle | 1,375 | Question: How to interface a C++ API `torch::Tensor` with `candle_core::Tensor`? | I was wondering if there is a way to use a C++ API that accepts a Pytorch `torch::Tensor` with a Candle `candle_core::Tensor`? For reference, I want to use [this](https://github.com/vllm-project/vllm/blob/main/csrc/ops.h) C++ API.
Can I convert between tensor types? @LaurentMazare, would it be possible to use [tch-r... | https://github.com/huggingface/candle/issues/1375 | closed | [] | 2023-11-25T19:05:27Z | 2023-11-25T23:04:03Z | null | EricLBuehler |
pytorch/TensorRT | 2,486 | ❓ [Question] Using dynamic shapes with FX frontend | I tried to use dynamic shapes in FX path with the following codes. It seems that the `input_specs` argument passed to `LowerSetting` has no effect and TRT gives an error message.
```python
import torch
import torch.nn as nn
from torch_tensorrt.fx import InputTensorSpec, LowerSetting
from torch_tensorrt.fx.lower ... | https://github.com/pytorch/TensorRT/issues/2486 | closed | [
"question"
] | 2023-11-25T06:52:12Z | 2024-02-22T13:30:13Z | null | HolyWu |
pytorch/TensorRT | 2,485 | How may I install torch_tensorrt with my own local version of torch? | ## ❓ Question
How may I install `torch_tensorrt` with my own local version of torch?
## What you have already tried
pip install torch-tensorrt --no-deps resulted in
```
ImportError: /home/jonch/.local/lib/python3.10/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN3c106detail23torchIn... | https://github.com/pytorch/TensorRT/issues/2485 | open | [
"question"
] | 2023-11-25T05:25:35Z | 2023-11-28T19:50:29Z | null | jon-chuang |
huggingface/accelerate | 2,187 | how to collect outputs(not tensor dtype) on multi gpus | As the toy example below,
```
val_dataset = ['a', 'b', 'c', 'd', 'e']
val_dataloader = DataLoader(
val_dataset, batch_size=2
)
accelerator = Accelerator()
val_dataloader = accelerator.prepare(val_dataloader)
for step, batch in enumerate(val_dataloader):
print(batch, accelerator.device)
```
... | https://github.com/huggingface/accelerate/issues/2187 | closed | [] | 2023-11-25T02:51:21Z | 2023-11-27T06:07:19Z | null | shliu0 |
huggingface/chat-ui | 581 | Trying to set up with TGI | I have installed TGI using docker, I can see the api docs at http://127.0.0.1:8080/docs/
But still cannot set up the env.local file, I have tried to set it up with the example, but always failing.

 seems to be disabled at default.
And I failed to figure out how to enable it.
Could anyone be kind enough to provide some guid... | https://github.com/huggingface/hf_transfer/issues/20 | closed | [] | 2023-11-24T08:13:00Z | 2023-11-27T12:15:10Z | null | tongyx361 |
huggingface/gsplat.js | 39 | How to implement point cloud rendering? | Hi, great work! I see that this library is built upon [antimatter15/splat](https://github.com/antimatter15/splat), but this library does not have that lib's rendering mode, which looks very similar to point clouds. I want to know how to implement this function based on your gsplat library. By the way, do you have any documen... | https://github.com/huggingface/gsplat.js/issues/39 | open | [] | 2023-11-24T07:27:33Z | 2024-01-22T21:12:06Z | null | xinnai |
huggingface/alignment-handbook | 46 | Weird DPO loss | Hi, I would like to raise some attention to issue #38.
It seems that the DPO-Lora training loss (red line) drops abruptly at the beginning of each epoch, which seems weird. (I tried Lora model global batch size 64, multi_gpu acceleration, 8GPUs, learning rate 1e-4, others same suggested)
In the mean time, the f... | https://github.com/huggingface/alignment-handbook/issues/46 | open | [] | 2023-11-24T03:07:46Z | 2024-05-28T07:09:10Z | 1 | ChenDRAG |
huggingface/diffusers | 5,912 | How to set config in VaeImageProcessor? | I created a `StableDiffusionControlNetImg2ImgPipeline` and I want to manually set the config `do_normalize` in `VaeImageProcessor`. I wonder how I can set it? I looked for it in pipe.vae.config and see nothing about it. | https://github.com/huggingface/diffusers/issues/5912 | closed | [
"stale"
] | 2023-11-23T12:54:22Z | 2023-12-26T21:29:17Z | null | youyuge34 |
huggingface/chat-ui | 576 | Cannot build using latest Chat UI Space template | Using the Dockerfile created from the ChatUI-Space template, but cloning it to a local machine and trying to build it fails at `npm run build`
> #18 [chatui-builder 12/12] RUN npm run build
#0 0.673
#0 0.673 > chat-ui@0.6.0 build
#0 0.673 > vite build
#0 0.673
#0 1.678 vite v4.3.9 building SSR bundle for produc... | https://github.com/huggingface/chat-ui/issues/576 | open | [
"support",
"spaces"
] | 2023-11-23T12:23:06Z | 2023-11-30T14:11:32Z | 1 | simon376 |
huggingface/transformers | 27,666 | how to remove punctuation marks. | ### System Info
I trained t5-large for translation.
The training result was good.
But when I input some sentence, the result looks like "What are you doing now?.??....."
[?.??......] <- how do I delete those punctuation marks?
I set some parameters like max_length, but I could not solve the situation.
### Who ... | https://github.com/huggingface/transformers/issues/27666 | closed | [] | 2023-11-23T07:21:33Z | 2023-12-31T08:03:43Z | null | chanyong-owl |
huggingface/blog | 1,655 | how to scale fine-tuning whisper in English? | I'm attempting to fine-tune whisper using the excellent hugging face tut: https://huggingface.co/blog/fine-tune-whisper. The delta between the tut's case and my case is that I am using English which has 1M more test cases (and also I'm using big GPUs so I am using `whisper-large-v3`).
No matter how much compute I th... | https://github.com/huggingface/blog/issues/1655 | open | [] | 2023-11-22T22:45:29Z | 2024-03-10T06:55:47Z | null | jsteinberg-rbi |
huggingface/datasets | 6,446 | Speech Commands v2 dataset doesn't match AST-v2 config | ### Describe the bug
[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover,... | https://github.com/huggingface/datasets/issues/6446 | closed | [] | 2023-11-22T20:46:36Z | 2023-11-28T14:46:08Z | 3 | vymao |
pytorch/rl | 1,708 | [Question] What is ESS in PPO? | Here [ppo.py](https://github.com/pytorch/rl/blob/main/torchrl/objectives/ppo.py#L649) from PPO source code is the definition.
<img width="983" alt="Screenshot 2023-11-22 at 1 21 12 AM" src="https://github.com/pytorch/rl/assets/22335780/3ec3663e-7140-4353-a65a-8b13f761fab2">
Does ESS stand for **Effective Sample Siz... | https://github.com/pytorch/rl/issues/1708 | closed | [] | 2023-11-22T06:13:37Z | 2023-11-23T03:07:41Z | null | gitfourteen |
huggingface/alignment-handbook | 45 | Reproducing of Lora Model Result on MT-Bench | Recently, I attempted to fit the DPO on my own dataset.
Initially, I tried to reproduce the results of your LORA model( 7.43 on MT-Bench).
However, I encountered some issues.
Despite using all your parameters and data, here are my results on MT-Bench:
| Model | MT-Bench |
|--------|--------|
| Zephyr-SFT-Lora-Ow... | https://github.com/huggingface/alignment-handbook/issues/45 | open | [] | 2023-11-22T03:42:32Z | 2023-12-11T17:09:32Z | 27 | wlhgtc |
huggingface/optimum | 1,551 | Running llama-2-13b resulted in `Killed` | ### System Info
```shell
This is my run.py code:
import torch
import transformers
import requests
print(torch.cuda.is_available())
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load model and adapter weights from local directory
model = transformers.AutoMo... | https://github.com/huggingface/optimum/issues/1551 | closed | [
"bug"
] | 2023-11-21T13:11:40Z | 2024-01-09T15:58:09Z | 1 | maxloopinmok |
huggingface/optimum-quanto | 32 | Are there some examples showing how to export an ONNX model? torch.onnx.export | https://github.com/huggingface/optimum-quanto/issues/32 | closed | [] | 2023-11-21T11:33:37Z | 2024-03-13T08:15:51Z | null | youkiwang | |
pytorch/executorch | 1,252 | What is the codegen really done at the Executorch flow? | Hi,
Although I studied the codegen part of https://pytorch.org/executorch/stable/concepts.html#codegen, I do not understand this part very well.

Above the concepts map, after ... | https://github.com/pytorch/executorch/issues/1252 | closed | [
"need-user-input",
"module: kernels",
"triaged"
] | 2023-11-21T08:38:57Z | 2024-02-14T00:53:21Z | null | kris-himax |
huggingface/transformers | 27,615 | How to get the number of trainable parameters for a hf model | ### Feature request
'
peft_parameters = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=8,
bias="none",
task_type="CAUSAL_LM"
)
train_params = TrainingArguments(
output_dir="./results_modified",
num_train_epochs=1,
per_device_train_batch_size=4,
gradient_accumulation_step... | https://github.com/huggingface/transformers/issues/27615 | closed | [] | 2023-11-21T00:37:01Z | 2023-11-21T19:28:32Z | null | mathmax12 |
huggingface/chat-ui | 571 | trying to replicate the api search with the local search option | When I try searching for information on the site (huggingface.co/chat) it works fine and gives correct information, but when doing the same thing using the same model I get hallucinations.
I've tried all sorts of temperature settings and models.
This is the result locally:
# set seed for all possible avenues of stochasticity
numpy.random.seed(seed=seed)
random.seed(seed)
torch.manu... | https://github.com/huggingface/trl/issues/1014 | closed | [] | 2023-11-20T16:47:28Z | 2024-01-03T15:05:11Z | null | zhaochenyang20 |
huggingface/candle | 1,349 | How to pass bounding box instead of points in the segment-anything example? | Is it possible to pass a bounding box instead of points when using the segment-anything model? Is this just 4 points? | https://github.com/huggingface/candle/issues/1349 | open | [] | 2023-11-20T15:44:22Z | 2023-11-20T15:44:22Z | null | svelterust |
huggingface/alignment-handbook | 43 | Did you use RMSprop or AdamW as the optimizer? | Hi to whoever is reading this 🤗
## Question
After reading the Zephyr pre-printed paper https://arxiv.org/pdf/2310.16944.pdf and going through the configuration files here, I saw that there was a mismatch between the optimizer used in https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-... | https://github.com/huggingface/alignment-handbook/issues/43 | closed | [] | 2023-11-20T15:23:03Z | 2024-03-07T06:55:07Z | 3 | alvarobartt |
huggingface/sentence-transformers | 2,359 | How to evaluate the result of dataset that does not have any labels | Hi,
I was trying to look at the different evaluation metrics that are provided to SentenceTransformers. I have a column of text in my dataset that I compare against a query and get the top k similarity using cosine similarity. I do not know if there is any method to evaluate the result. Should I consider the cosine ... | https://github.com/huggingface/sentence-transformers/issues/2359 | open | [] | 2023-11-20T14:52:21Z | 2023-11-20T14:52:21Z | null | Yarmohamadshr |
huggingface/alignment-handbook | 42 | How to QLoRA training with ZeRO-3 on two or more GPUs? | I added a 4-bit load after the command LoRA training with ZeRO-3 on two or more GPUs to achieve a mix of QLoRA and ZeRO-3. But the program encountered the following error:
RuntimeError: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<loc... | https://github.com/huggingface/alignment-handbook/issues/42 | open | [] | 2023-11-20T14:13:36Z | 2024-05-17T00:27:27Z | null | Di-Zayn |
huggingface/transformers | 27,600 | How to get input sentence embedding from Llama or Llama2? | I'm trying to get the embedding of the sentence I input. I checked some common practices for doing it, but I'm not sure I'm doing it right. Who may be able to help? @gante, thanks if you can help. My code is as below:
```
model = LlamaForCausalLM.from_pretrained(
args.pretrained_name_or_path,
torch_dtype=torch... | https://github.com/huggingface/transformers/issues/27600 | closed | [] | 2023-11-20T13:18:08Z | 2023-11-22T14:32:26Z | null | waterluck |
pytorch/serve | 2,801 | When is initialize method called? | ### 📚 The doc issue
I've created a custom handler with the following initialize method
```python
class CustomHandler(VisionHandler):
def initialize(self, context):
print("Got here 000!")
time.sleep(20)
print("Got here 111!")
super(VisionHandler, self).__init__()
```
I sp... | https://github.com/pytorch/serve/issues/2801 | closed | [] | 2023-11-20T12:03:07Z | 2023-11-23T20:47:00Z | 4 | InakiRaba91 |
pytorch/serve | 2,800 | When is initialize method called? | ### 📚 The doc issue
I've created a custom handler with the following initialize method
```python
class CustomHandler(VisionHandler):
def initialize(self, context):
print("Got here 000!")
time.sleep(20)
print("Got here 111!")
super(VisionHandler, self).__init__()
```
I sp... | https://github.com/pytorch/serve/issues/2800 | closed | [] | 2023-11-20T11:46:16Z | 2023-11-20T12:02:53Z | 0 | irabanillo91 |
pytorch/executorch | 1,239 | How to access to result of tensor after inference | Hi,
I am implementing executorch by following these steps.
1. Exporting resnet18 including softmax layer.
2. Implementing executor_runner.cpp to access to result of tensor after inference.
I expected that I could get each classes' result like [0,0,0,0.1,0.9] after inference(including softmax).
But when I try to acce... | https://github.com/pytorch/executorch/issues/1239 | closed | [
"need-user-input"
] | 2023-11-20T09:03:14Z | 2023-11-22T19:24:01Z | null | EarthMu |
huggingface/transformers | 27,592 | How to always use initial prompt in Whisper? | I checked this PR (#22496 ) but still can't figure out how to always use the initial prompt. is it possible to provide a use case? | https://github.com/huggingface/transformers/issues/27592 | closed | [] | 2023-11-19T18:35:23Z | 2023-11-20T08:29:41Z | null | GanymedeNil |
huggingface/pytorch-image-models | 2,038 | how to run the efficientmit.py | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternat... | https://github.com/huggingface/pytorch-image-models/issues/2038 | closed | [
"enhancement"
] | 2023-11-19T02:50:59Z | 2023-11-19T17:16:48Z | null | 1377534928 |
huggingface/chat-ui | 566 | Is Chat-UI gonna support the new Assistant API? | They store the threads, and there's also multi-modal support | https://github.com/huggingface/chat-ui/issues/566 | open | [
"enhancement",
"models"
] | 2023-11-19T02:06:44Z | 2023-11-20T08:42:49Z | 1 | wayliums |
huggingface/alignment-handbook | 40 | How do I get the training scrips to utilize all my GPUs? | Hello there,
I'm running this script:
```
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml
```
... but on my machine with 2x3090s ... only GPU 0 is being utilized.
What do I ... | https://github.com/huggingface/alignment-handbook/issues/40 | closed | [] | 2023-11-19T00:11:24Z | 2023-11-19T01:20:21Z | null | ohmeow |
huggingface/transformers.js | 401 | [Question | Bug] What am I doing wrong while using the `question-answering` model? | ## The Problem
I'm trying to use `question-answering` model to answer simple questions in a given context. But I always get a TypeError about floats. I guess that's an internal issue, because at top level of code I am not using floating point numbers. But maybe I am doing something wrong.
By the way, I'm using Ty... | https://github.com/huggingface/transformers.js/issues/401 | closed | [
"question"
] | 2023-11-18T12:58:50Z | 2023-11-19T12:44:00Z | null | AyresMonteiro |
huggingface/transformers.js | 399 | [Question] Is it possible to encode and decode with `AutoTokenizer.from_pretrained` and keep spaces? | I'm trying to build a pure JS online tokenizer, visually similar to https://github.com/1rgs/tokenwiz (but without the Python backend)
I'm doing something like:
```js
const model = await AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')
const textInput = `[INST] <<SYS>>
You are a friendly Llama.
<</SY... | https://github.com/huggingface/transformers.js/issues/399 | closed | [
"question"
] | 2023-11-17T18:46:05Z | 2023-11-17T20:18:02Z | null | daaain |
huggingface/alignment-handbook | 39 | Why zephyr-7b-dpo-lora is finetuned from mistralai/Mistral-7B-v0.1 instead of zepher-7b-sft model? | There is a misalignment between zephyr-7b-dpo-lora and zephyr-7b-dpo-full.
The former one is finetuned from mistralai/Mistral-7B-v0.1.
The latter is finetuned from zephyr-7b-dpo-full.
I wonder what causes this misalignment ?
Also, have you benchmarked performance improvement of the lora finetunning script? In m... | https://github.com/huggingface/alignment-handbook/issues/39 | open | [] | 2023-11-17T18:11:59Z | 2024-03-21T19:18:08Z | 2 | ChenDRAG |
huggingface/optimum | 1,545 | Add support to export facebook encodec models to ONNX | ### Feature request
When I try to use optimum-cli to export the facebook/encodec_32khz model I get this error:
```
% optimum-cli export onnx --model facebook/encodec_32khz encodec.onnx
Framework not specified. Using pt to export to ONNX.
/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-pack... | https://github.com/huggingface/optimum/issues/1545 | open | [
"feature-request",
"onnx"
] | 2023-11-17T11:16:01Z | 2025-12-12T06:23:33Z | 6 | giamic |
pytorch/audio | 3,704 | Random cropping for variable length sequences | ### 🚀 The feature
I am proposing to add a `torch.nn.Module` transform that automatically crops/pads signals (with different options for padding such as constant/mirroring). I have the implementation already local so I would push it myself if this is alright.
The interface would like as follows:
```python
class... | https://github.com/pytorch/audio/issues/3704 | open | [] | 2023-11-17T10:37:24Z | 2024-05-23T06:24:00Z | 4 | ATriantafyllopoulos |
huggingface/peft | 1,142 | How to do Gradient Checkpoint + LoRA | ### System Info
<img width="570" alt="image" src="https://github.com/huggingface/peft/assets/18441985/9b3ae040-d78a-477b-a9ec-6ab26b687a68">
### Who can help?
I need help with using LoRA + gradient checkpointing.
Using the reentrant option appears to be the solution, but it slows down training a lot, for LLam... | https://github.com/huggingface/peft/issues/1142 | closed | [] | 2023-11-17T09:34:16Z | 2025-10-06T10:22:58Z | null | tcapelle |
pytorch/pytorch | 113,933 | How to re-use torch.compile results in different python processes? | ### 🚀 The feature, motivation and pitch
I'm trying to compile my custom vision transformer-based model. The compiled version is indeed faster than the traditional one.
However, as scaled_dot_product_attention does not support dynamic shapes, the program compiles the transformer block for every input size. Thus, th... | https://github.com/pytorch/pytorch/issues/113933 | closed | [
"high priority",
"feature",
"triaged",
"months",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo"
] | 2023-11-17T08:22:11Z | 2024-08-30T06:47:28Z | null | flishwang |