| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/chat-ui | 341 | SSL Wrong version number error | I have added this
"endpoints": [
{"url": "http://127.0.0.1:8080/generate_stream", "weight": 100}
],
in the model, but I am getting this error:
TypeError: fetch failed
at fetch (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/undici/index.js:109:13)
at process.processTicksAndReject... | https://github.com/huggingface/chat-ui/issues/341 | closed | [
"support"
] | 2023-07-12T04:40:58Z | 2023-09-18T14:00:27Z | 4 | swikrit21 |
huggingface/diffusers | 4,054 | [SD-XL] How to apply invisible-watermark for latent output | ### Describe the bug
As a part of the license with SAI, we need to ensure the invisible watermark is applied across all images output by these models, including the Img2Img pipeline.
### Reproduction
```py
# if xformers or torch_2_0 is used attention block does not need
# to be in float32 which can... | https://github.com/huggingface/diffusers/issues/4054 | closed | [
"bug"
] | 2023-07-12T03:58:04Z | 2023-07-12T10:21:29Z | null | bghira |
huggingface/transformers.js | 192 | Table Question Answering Support? | Hi - Interested in support for table question answering models. It's noted that these aren't supported, but is there any reason they wouldn't work if leveraged?
| https://github.com/huggingface/transformers.js/issues/192 | open | [
"question"
] | 2023-07-12T01:12:07Z | 2023-07-13T16:18:19Z | null | timtutt |
huggingface/peft | 685 | Matrix mismatch when trying to adapt Falcon with QLoRA, how to fix? | ### System Info
```
(data_quality) brando9~ $ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang ver... | https://github.com/huggingface/peft/issues/685 | closed | [] | 2023-07-11T20:01:37Z | 2023-07-24T00:11:02Z | null | brando90 |
huggingface/diffusers | 4,047 | How to set lora scale when loading a LoRA model? | Hey there, first of all thanks for your fantastic work!
I am loading LoRA weights, and I would like to set the scale of them being applied. Checking the code, it appears to be possible as shown [here](https://github.com/huggingface/diffusers/blob/fc7aa64ea8f5979b67bd730777e8e1c32e3adb05/src/diffusers/loaders.py#L109... | https://github.com/huggingface/diffusers/issues/4047 | closed | [] | 2023-07-11T17:38:05Z | 2023-08-29T05:30:44Z | null | pietrobolcato |
huggingface/diffusers | 4,042 | How to combine the reference-only with inpainting and depth control? | ### Model/Pipeline/Scheduler description
Hi, I recently want to combine the reference-only with image inpaint , with depth control to replace background for portrait images. However, I have no idea to build this pipeline as for there is no reference with inpaint pipeline example. Could you please help me to figure it... | https://github.com/huggingface/diffusers/issues/4042 | closed | [] | 2023-07-11T12:17:24Z | 2023-07-14T06:12:29Z | null | AmberCheng |
pytorch/text | 2,190 | Missing documentation for T5 model | ## 📚 Documentation
**Description**
<!-- A clear and concise description of what content in https://pytorch.org/text/stable/index.html is an issue. -->
As per title. There is no documentation on the T5 model although it exists:
https://pytorch.org/text/stable/models.html
| https://github.com/pytorch/text/issues/2190 | open | [] | 2023-07-11T10:40:37Z | 2023-07-11T10:40:37Z | 0 | gau-nernst |
huggingface/chat-ui | 340 | [WebSearch] "Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 1000 `inputs` tokens and 1024 `max_new_tokens`" | Hello there,
Title says it all.
We are not using any custom endpoints/models. We're just relying on the HuggingFace's API inferences.
Is there a way to increase/decrease the inputs token when using WebSearch (or even just increase the max sum)? Because it works fine if `max_new_tokens` is set to 512 BUT it, obv... | https://github.com/huggingface/chat-ui/issues/340 | closed | [
"question",
"models"
] | 2023-07-11T07:33:18Z | 2023-07-12T09:16:21Z | null | gollumeo |
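The validation error quoted above is a simple budget constraint: prompt tokens plus `max_new_tokens` may not exceed the model-side total (1512 here). A minimal stdlib sketch of that arithmetic; the helper names are illustrative, not part of chat-ui or the HF Inference API:

```python
# Token-budget arithmetic behind the error above: the backend rejects any
# request where prompt tokens + max_new_tokens exceed the model's total
# budget (1512 in the quoted message). Helper names are illustrative.

TOTAL_BUDGET = 1512  # model-side limit quoted in the error

def fits(input_tokens: int, max_new_tokens: int, budget: int = TOTAL_BUDGET) -> bool:
    """True when the request satisfies the server-side constraint."""
    return input_tokens + max_new_tokens <= budget

def max_allowed_new_tokens(input_tokens: int, budget: int = TOTAL_BUDGET) -> int:
    """Largest max_new_tokens the given prompt still leaves room for."""
    return max(budget - input_tokens, 0)

# The failing request from the issue: 1000 prompt tokens + 1024 new tokens.
assert not fits(1000, 1024)
# Dropping max_new_tokens to 512 makes the same prompt valid again.
assert fits(1000, 512) and max_allowed_new_tokens(1000) == 512
```

So with web search consuming around 1000 prompt tokens, any `max_new_tokens` above 512 necessarily trips the check; either the prompt has to be truncated or the model's total budget raised.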
huggingface/diffusers | 4,029 | How can I make diffuser pipeline to use .safetensors file for SDXL? | Cloning the entire repo is taking 100 GB.
How can I make the below code use a .safetensors file instead of the diffusers layout?
Let's say I have downloaded my safetensors file to path.safetensors
How do I provide it?
The below code works, but we are cloning 100 GB instead of just a single 14 GB safetensors file. Waste of bandwidth... | https://github.com/huggingface/diffusers/issues/4029 | closed | [] | 2023-07-10T21:52:22Z | 2023-12-11T18:45:18Z | null | FurkanGozukara |
huggingface/chat-ui | 337 | Feature Request: Save messages and error message even if text generation endpoint fails | Situation: Text generation endpoint is not running. Then user sends a message.
Current Behavior: UI throws an error and saves conversation to mongodb like this, with an empty message list.
```
{
_id: ObjectId('64ac1abc2ac09222e24cc984'),
title: 'Untitled 5',
messages: [],
model: 'GPT',
creat... | https://github.com/huggingface/chat-ui/issues/337 | closed | [
"enhancement",
"back",
"p2"
] | 2023-07-10T15:18:52Z | 2023-10-10T11:16:22Z | 1 | loganlebanoff |
huggingface/transformers.js | 187 | [Question] Performance and size of models | Great project, tons of potential! I have a general question I thought I may ask. Using the convert.py scripts, I took a Pytorch model and converted it to ONNX. With quantizing, I get a full 428MB model and a 110MB _quantized model. Now how does it work for the user exactly? Does the user automatically download the _qua... | https://github.com/huggingface/transformers.js/issues/187 | closed | [
"question"
] | 2023-07-10T14:39:31Z | 2023-07-11T17:06:38Z | null | sabatale |
huggingface/chat-ui | 336 | how to work in chat-ui with non-streaming data? | I was working with chat-ui by providing only my endpoints, which are hosted at localhost:8000/generate. I don't have any model, only endpoints, so can you provide me a solution for working with endpoints only and non-streaming data (application/json or application/plain)? I have the model hosted on this server.
in modelE... | https://github.com/huggingface/chat-ui/issues/336 | closed | [] | 2023-07-10T13:43:17Z | 2023-07-11T08:29:40Z | null | swikrit21 |
huggingface/transformers.js | 186 | [Question] How to interpret boxes in object detection example ? | hi,
can anyone help me how to interpret boxes while using object detection with this model "Xenova/detr-resnet-50".
i want to crop out the detected object from the image using sharp (nodejs) ? how can i pass these boxes to sharp resize function ?
| https://github.com/huggingface/transformers.js/issues/186 | closed | [
"question"
] | 2023-07-10T12:59:22Z | 2023-07-11T00:55:13Z | null | geminigeek |
huggingface/chat-ui | 335 | Bug: Unexpected execution result on Firefox browser with Chat-UI ver. 0.3.0 | I recently installed the 0.3.0 version of the HF Chat-UI software.
I then performed an evaluation using the **HuggingFaceH4/starchat-beta** model.
At that time, I typed the question "_Could you tell me about the weather in Toyko City in Japan on July-10-2023_?" and ran it.
Unfortunately, the results varied bet... | https://github.com/huggingface/chat-ui/issues/335 | closed | [
"support"
] | 2023-07-10T04:40:40Z | 2023-09-11T09:32:14Z | 2 | leemgs |
huggingface/chat-ui | 334 | Chat-ui is starting, but nothing happens | # Description:
When starting the Chat-ui, the initialization process begins as expected but stalls indefinitely, without any evident progress. The application doesn't crash or give any errors. This issue occurs across multiple attempts, regardless of browser type or device.
# Steps to reproduce:
- Install prer... | https://github.com/huggingface/chat-ui/issues/334 | closed | [
"support"
] | 2023-07-09T13:53:34Z | 2023-09-11T09:31:49Z | 2 | Notespeak |
huggingface/diffusers | 3,988 | how to use part of the controlnet models with a "StableDiffusionControlNetInpaintPipeline" object? | I created a "StableDiffusionControlNetInpaintPipeline" object with a list of controlnet models such as "canny" and "openpose", but sometimes I want to use canny only or openpose only. Is there a way to reuse part of the controlnet models with an already initialized "StableDiffusionControlNetInpaintPipeline" object? | https://github.com/huggingface/diffusers/issues/3988 | closed | [] | 2023-07-07T09:18:18Z | 2023-08-01T04:51:41Z | null | AdamMayor2018 |
pytorch/pytorch | 104,764 | How to integrate the new cpp file with Pytorch geometric? | ### 🚀 The feature, motivation and pitch
I am using neighbour loader function in my code, which uses sample_adj_cpu function to sample neighbours. I am making some changes in this function which is present in the following file.
File link:
[[pytorch_sparse](https://github.com/rusty1s/pytorch_sparse/tree/master)/[c... | https://github.com/pytorch/pytorch/issues/104764 | closed | [] | 2023-07-07T07:48:04Z | 2023-07-07T16:32:01Z | null | shivanisankhyan |
pytorch/TensorRT | 2,082 | ❓ [Question] How to decrease the latency of the inference? | ## ❓ Question
Hi. I converted the PyTorch RetinaFace and ArcFace models to TensorRT via the torch_tensorrt library. Everything is okay, but after some iterations inference freezes and the time for handling an image increases badly (>10x).
Snippet of inference simulation is here:
## Environment
TensorRT Version: 8.4... | https://github.com/pytorch/TensorRT/issues/2082 | closed | [
"question",
"No Activity",
"component: runtime",
"performance"
] | 2023-07-07T05:51:35Z | 2023-10-16T00:02:22Z | null | hvildan |
huggingface/optimum-habana | 292 | Where in the directory "/tmp/tst-summarization", is the summarization output stored? | ### System Info
```shell
Optimum Habana : 1.6.0
SynapseAI : 1.10.0
Docker Image : Habana® Deep Learning Base AMI (Ubuntu 20.04)
Volume : 1000 GiB
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as G... | https://github.com/huggingface/optimum-habana/issues/292 | closed | [
"bug"
] | 2023-07-07T03:24:31Z | 2023-07-18T08:30:21Z | null | Abhaycnvrg |
huggingface/trl | 503 | How to get labels into the SFTTrainer | Hi!
I am trying to prompt tune medalpaca 7b using prompt tuning or lora with the SFTTrainer. I have a prompt and I have labels that I want the model to output. I have made a Dataset class that inherits from torch.utils.data.Dataset to prepare my inputs, but I am wondering, if there is some way to make the trainer use ... | https://github.com/huggingface/trl/issues/503 | closed | [] | 2023-07-06T22:19:21Z | 2023-08-14T15:05:10Z | null | MaggieK410 |
huggingface/transformers.js | 182 | Website and extension using same model | Per the chrome extension example, you pack the model with the extension. Is there a way for a website and chrome extension to use the same cached model? If my project has both a website and extension, I hope they could use a single model instead of having store 2 on the user's machine.
| https://github.com/huggingface/transformers.js/issues/182 | open | [
"question"
] | 2023-07-06T17:43:48Z | 2023-07-16T17:26:09Z | null | escottgoodwin |
huggingface/chat-ui | 331 | How to send model name as a input to API endpoint | I want to host two models and query them by switching between . The problem is I'm not able to send model name as a parameter from UI to API endpoints.
Can someone help on this? | https://github.com/huggingface/chat-ui/issues/331 | closed | [
"question"
] | 2023-07-06T13:04:04Z | 2023-09-18T14:03:18Z | null | sankethgadadinni |
huggingface/transformers | 24,685 | How to get the last 4 Hidden states from the feature extraction pipeline | I have defined a pipeline for Feature extraction
```
# Create the pipeline
p = pipeline(
task="feature-extraction",
tokenizer="microsoft/biogpt",
model="microsoft/biogpt",
framework="pt",
device=0
)
bio_gpt = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states= True)
bio_gp... | https://github.com/huggingface/transformers/issues/24685 | closed | [] | 2023-07-06T08:45:08Z | 2023-08-14T15:02:35Z | null | Luke-4 |
pytorch/serve | 2,446 | is TS_JOB_QUEUE_SIZE a valid environment variable? | ### 📚 The doc issue
[This page](https://pytorch.org/serve/configuration.html) says environment variables are equivalent to server configuration set in `config.properties`
Setting `TS_JOB_QUEUE_SIZE` as an environment variable has no effect in Docker version 0.8.0
```
Torchserve version: 0.8.0
TS Home: /home/v... | https://github.com/pytorch/serve/issues/2446 | closed | [
"question",
"triaged",
"docker"
] | 2023-07-06T01:18:47Z | 2023-10-28T19:43:36Z | null | sreeprasannar |
huggingface/setfit | 393 | AttributeError: 'list' object has no attribute 'shuffle' | I am getting the "AttributeError: 'list' object has no attribute 'shuffle'" error when I try to use setfit.
The dataset has two columns; one text and the second is the label column. | https://github.com/huggingface/setfit/issues/393 | closed | [
"question"
] | 2023-07-05T16:47:17Z | 2023-12-05T14:41:13Z | null | gpirge |
huggingface/datasets | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | ### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. I
Somehow it worked a few times but mostly this makes the datasets library much more ... | https://github.com/huggingface/datasets/issues/6008 | closed | [] | 2023-07-05T16:06:48Z | 2023-07-10T13:46:39Z | 3 | andreemic |
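A plausible reading of the ~996-row mark above: `datasets` buffers generated rows and flushes them to Arrow in batches (`writer_batch_size`, which defaulted to 1000), so the first long pause lands near row 1000 when the first image-heavy batch is serialized. A stdlib sketch of that buffering pattern, not the `datasets` internals:

```python
# Sketch of batch-buffered writing, the pattern behind periodic pauses in
# generator-based dataset creation: rows accumulate in memory and are
# flushed in blocks. Stdlib-only illustration, not the `datasets` code.

def buffered_write(rows, writer_batch_size=1000):
    """Return the list of batches a batched writer would flush, in order."""
    buffer, flushes = [], []
    for row in rows:
        buffer.append(row)
        if len(buffer) >= writer_batch_size:
            flushes.append(list(buffer))   # expensive step: serialize a batch
            buffer.clear()
    if buffer:
        flushes.append(list(buffer))       # final partial batch
    return flushes

flushes = buffered_write(range(2500), writer_batch_size=1000)
assert [len(b) for b in flushes] == [1000, 1000, 500]
# With large rows (decoded images), the flush near row ~1000 is where
# memory pressure and serialization cost first appear; a smaller batch
# size spreads that cost out.
assert [len(b) for b in buffered_write(range(2500), 100)][0] == 100
```

If the freeze really is that first flush, lowering the writer batch size (assuming the installed `datasets` version exposes it for `from_generator`) trades throughput for smaller, more frequent flushes.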
pytorch/torchx | 737 | -j vs --cpu/--gpu in ddp | ## 📚 Documentation
## Link
[https://pytorch.org/torchx/latest/components/distributed.html](https://pytorch.org/torchx/latest/components/distributed.html)
## What does it currently say?
Not clear whether --cpu, --gpu arguments are overridden by -j arguments, although in my testing (launch then run top, etc.) it ... | https://github.com/meta-pytorch/torchx/issues/737 | open | [] | 2023-07-05T15:57:56Z | 2023-07-12T20:47:24Z | 1 | godfrey-cw |
pytorch/pytorch | 104,617 | How to integrate the new cpp file with Pytorch geometric? | ### 🚀 The feature, motivation and pitch
I am using neighbour loader function in my code, which uses sample_adj_cpu function to sample neighbours. I am making some changes in this function which is present in the following file.
File link:
[[pytorch_sparse](https://github.com/rusty1s/pytorch_sparse/tree/master)/[... | https://github.com/pytorch/pytorch/issues/104617 | closed | [
"module: sparse",
"triaged"
] | 2023-07-05T06:47:12Z | 2023-07-12T22:10:30Z | null | shivanisankhyan |
huggingface/dataset-viewer | 1,482 | diagnose why the mongo server uses so much CPU | we have many alerts on the use of CPU on the mongo server.
```
System: CPU (User) % has gone above 95
```
Why? | https://github.com/huggingface/dataset-viewer/issues/1482 | closed | [
"question",
"infra",
"improvement / optimization",
"P1"
] | 2023-07-04T16:04:06Z | 2024-02-06T14:49:20Z | null | severo |
huggingface/text-generation-inference | 536 | How to enable vllm | ### Feature request
How to enable vllm
### Motivation
How to enable vllm
### Your contribution
How to enable vllm | https://github.com/huggingface/text-generation-inference/issues/536 | closed | [] | 2023-07-04T05:20:21Z | 2023-07-04T10:56:29Z | null | lucasjinreal |
huggingface/transformers.js | 180 | [Question] Running transformers.js in a browser extension | Hello,
I'm trying to build a chrome extension that uses Transformers.js. When I try to import it in the background worker script, I first get an error that says process is not available, because apparently someone decided browser plugins shouldn't use process.env anymore. I found a solution that said to put
```
... | https://github.com/huggingface/transformers.js/issues/180 | closed | [
"question"
] | 2023-07-04T01:09:29Z | 2023-07-16T15:58:30Z | null | davidtbo |
huggingface/datasets | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | ### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:
- `tokenize()` runs fine
- `group_text()` runs fine
... | https://github.com/huggingface/datasets/issues/6003 | open | [] | 2023-07-03T17:15:31Z | 2023-07-03T17:15:31Z | 0 | PonteIneptique |
huggingface/dataset-viewer | 1,472 | How to show fan-in jobs' results in response ("pending" and "failed" keys) | In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key):
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
... | https://github.com/huggingface/dataset-viewer/issues/1472 | open | [
"question",
"api",
"P2"
] | 2023-07-03T16:49:10Z | 2023-08-11T15:26:24Z | null | polinaeterna |
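One way to picture the fan-in question above: the aggregated response merges whatever per-config results exist and lists the rest under `pending`/`failed`. A stdlib sketch under an assumed cache layout and field names (not datasets-server's actual internals):

```python
# Fan-in aggregation with "pending"/"failed" bookkeeping, in the spirit
# of the config-level /parquet response above. Cache layout and field
# names are illustrative assumptions.

def aggregate_parquet(cache: dict) -> dict:
    """Merge per-config results; record configs still pending or failed."""
    parquet_files, pending, failed = [], [], []
    for config, entry in sorted(cache.items()):
        if entry is None:                      # job not computed yet
            pending.append(config)
        elif "error" in entry:                 # job computed, but errored
            failed.append(config)
        else:                                  # successful sub-response
            parquet_files.extend(entry["parquet_files"])
    return {"parquet_files": parquet_files, "pending": pending, "failed": failed}

cache = {
    "ParaphraseRC": {"parquet_files": [{"config": "ParaphraseRC", "split": "test"}]},
    "SelfRC": None,                            # still running
    "Broken": {"error": "ConfigNamesError"},
}
resp = aggregate_parquet(cache)
assert resp["pending"] == ["SelfRC"] and resp["failed"] == ["Broken"]
assert len(resp["parquet_files"]) == 1
```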
huggingface/blog | 1,281 | How to push or share lora adapter to hugging face hub? | Hi, I trained a Falcon model and already set the push_to_hub parameter in the training arguments, but it is not working.
```
from transformers import TrainingArguments
output_dir = "chatb_f"
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
optim = "paged_adamw_32bit"
save_steps = 60
logging_steps = 10
le... | https://github.com/huggingface/blog/issues/1281 | open | [] | 2023-07-01T13:56:47Z | 2023-07-01T13:57:40Z | null | imrankh46 |
huggingface/diffusers | 3,918 | How to control the position of an object in an image using text in a txt2img model? | How to control the position of an object in an image using text in a txt2img model? I know this is easy to achieve in an img2img model, but how can it be done in a txt2img model?
Or, how can a model be fine-tuned to achieve this effect? For example, specifying x=0, y=1, which corresponds to the top-left corner.
I... | https://github.com/huggingface/diffusers/issues/3918 | closed | [
"stale"
] | 2023-07-01T02:44:24Z | 2023-08-08T15:03:15Z | null | XiaoyuZhuang |
huggingface/dataset-viewer | 1,464 | Change the way we represent ResponseAlreadyComputedError in the cache | When a "parallel" step has already been computed, an error is stored in the cache with `ResponseAlreadyComputedError`error_code, and http status 500 (ie: if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed).
But it makes it hard to monitor the "true" errors.... | https://github.com/huggingface/dataset-viewer/issues/1464 | closed | [
"question",
"improvement / optimization",
"P2"
] | 2023-06-30T18:13:34Z | 2024-02-23T09:56:05Z | null | severo |
huggingface/transformers.js | 176 | [Question] Embeddings for the Entire Document | <!-- QUESTION GOES HERE -->
Hi Thanks for all the effort, I really appreciate it. I enjoy coding in JS and do all things in JS.
Is it a good idea to load the entire json document to get embeddings? What tokenizer should I choose? I have a ton of valuable information in my key and value pairs. Or should I craft a s... | https://github.com/huggingface/transformers.js/issues/176 | closed | [
"question"
] | 2023-06-30T16:20:37Z | 2023-06-30T22:43:03Z | null | hadminh |
huggingface/sentence-transformers | 2,247 | how to tune hyperparameters using optuna or raytune | I want to finetune the MiniLM model and tune the hyperparameters of the same, but the model.fit function doesn't return any loss. Nor does it shows any performance metrics while training the model. What do you suggest in this case? | https://github.com/huggingface/sentence-transformers/issues/2247 | open | [] | 2023-06-30T13:16:04Z | 2023-06-30T13:16:04Z | null | nikshrimali |
huggingface/diffusers | 3,914 | how to fine-tuning the sd model in low resolutions | When fine-tuning the stable diffusion model, there is a parameter called 'resolution' which, if set to a value like 128 or 256 to reduce GPU memory usage, could potentially have negative effects on training performance and results.
Would setting the resolution to a value other than 512, such as 128 or 256, have any ... | https://github.com/huggingface/diffusers/issues/3914 | closed | [
"stale"
] | 2023-06-30T12:42:12Z | 2023-08-08T15:03:16Z | null | XiaoyuZhuang |
pytorch/pytorch | 104,450 | Numpy/scipy module works fine with Torch modules, but not TorchScript. How to torchscript a numpy/scipy module? | ### 🐛 Numpy module works fine with Torch modules, but not TorchScript.
```python
from scipy.signal import find_peaks
import numpy
batch_size = 1
input_data_shape = 1000
input_shape = (batch_size, input_data_shape)
reference_inputs = numpy.random.random(input_shape)
reference_outputs, _ = find_peaks(reference_inputs[0,... | https://github.com/pytorch/pytorch/issues/104450 | open | [
"oncall: jit"
] | 2023-06-30T00:29:43Z | 2023-08-02T17:55:14Z | null | kzhai |
huggingface/optimum | 1,148 | Falcon-40b-instruct on Runpod | ### System Info
```shell
2 x A100 80GB
32 vCPU 251 GB RAM
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give detai... | https://github.com/huggingface/optimum/issues/1148 | closed | [
"bug"
] | 2023-06-29T18:48:05Z | 2023-06-30T15:39:29Z | 3 | Mrin7 |
huggingface/text-generation-inference | 509 | Question: How to estimate memory requirements for a certain batch size/ | I was just wondering how the GPU memory requirements vary depending on model size/batch size of request/max tokens. In doing some experiments where I needed the server to keep running for a long time, I found that it often ran out of memory and shut down - is there a way to estimate the memory footprint based on these ... | https://github.com/huggingface/text-generation-inference/issues/509 | closed | [] | 2023-06-29T15:39:51Z | 2023-07-03T01:41:02Z | null | vaishakkrishna |
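A common back-of-envelope answer to the question above: resident memory ≈ weights (params × bytes per parameter) plus KV cache (2 tensors × layers × hidden size × bytes × tokens × batch). A sketch of that estimate, ignoring activations and fragmentation; it is a rough rule of thumb, not TGI's actual accounting:

```python
# Back-of-envelope GPU memory estimate for serving: weights plus KV
# cache, ignoring activations and allocator fragmentation. A standard
# rough estimate, not TGI's internal bookkeeping.

def estimate_bytes(n_params: float, n_layers: int, hidden_size: int,
                   batch_size: int, total_tokens: int, bytes_per_el: int = 2) -> int:
    """fp16/bf16 weights + per-token KV cache across the whole batch."""
    weights = n_params * bytes_per_el
    # K and V each store hidden_size values per layer per token.
    kv_cache = 2 * n_layers * hidden_size * bytes_per_el * batch_size * total_tokens
    return int(weights + kv_cache)

# Llama-7B-ish shape: 7e9 params, 32 layers, hidden size 4096.
gib = estimate_bytes(7e9, 32, 4096, batch_size=8, total_tokens=2048) / 2**30
# Weights alone are ~13 GiB; the KV cache for 8 x 2048 tokens adds ~8 GiB.
assert 19 < gib < 23
```

The KV-cache term grows linearly with both batch size and sequence length, which is why a server that fits at low concurrency can still run out of memory under sustained load.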
huggingface/transformers.js | 171 | [Doc request] Add an example guide of how to use it in Svelte (and deploy to HF Spaces) | Similar to the cool React guide, would be awesome to showcase how to use transformers.js from Svelte (and how to deploy the resulting app to Spaces)
No need to do a SvelteKit version IMO, Svelte would be sufficient
Maybe a good first issue for the community? | https://github.com/huggingface/transformers.js/issues/171 | open | [
"enhancement",
"help wanted",
"good first issue"
] | 2023-06-29T10:25:10Z | 2023-08-21T20:36:59Z | null | julien-c |
huggingface/optimum | 1,145 | How to use mean pooling with ONNX export with optimum-cli | ### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
###... | https://github.com/huggingface/optimum/issues/1145 | open | [
"bug"
] | 2023-06-29T05:57:35Z | 2023-06-29T05:57:35Z | null | aunwesha |
huggingface/chat-ui | 328 | Is there a way to see all of a user's history? | I want to see the chat history of all my users. | https://github.com/huggingface/chat-ui/issues/328 | closed | [
"question"
] | 2023-06-29T05:01:55Z | 2023-07-03T10:43:53Z | null | ildoonet |
pytorch/tutorials | 2,495 | [BUG] - Only one trial completes on Ax NAS | ### Add Link
https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html
### Describe the bug
Hi,
I was able to get the tutorial notebook working, and now I am trying to implement Ax-based NAS on my own model. However, only one of the trials complete and all the others fail. I have one ob... | https://github.com/pytorch/tutorials/issues/2495 | closed | [
"bug",
"question",
"ax"
] | 2023-06-28T23:02:31Z | 2023-10-30T17:00:14Z | null | ekurtgl |
huggingface/chat-ui | 327 | Tokens limits issue | Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 603 `inputs` tokens and 1024 `max_new_tokens`
When deployed, the ui is working fine for like 2 or 3 prompts, then for every prompt we try we get a red line on top with a pop-up showing this message. Please, how can we remove this limitation o... | https://github.com/huggingface/chat-ui/issues/327 | open | [
"question",
"back"
] | 2023-06-28T18:09:19Z | 2023-09-18T14:03:59Z | null | Billyroot |
huggingface/diffusers | 3,890 | How to apply the schedulers in diffusers to original SD | Hi! Thanks for this great work! Diffusers helps me a lot in many aspects!
Because of my recent work, I would like to know whether the schedulers in diffusers can be directly used in the original SD? If yes, what should I do?
Any response will be greatly appreciated! Again, thank you all for this convenient framework! | https://github.com/huggingface/diffusers/issues/3890 | closed | [
"stale"
] | 2023-06-28T11:02:41Z | 2023-08-05T15:04:00Z | null | volcverse |
huggingface/dataset-viewer | 1,446 | Add fields `viewer` and `preview` to /is-valid | For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid.
We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface... | https://github.com/huggingface/dataset-viewer/issues/1446 | closed | [
"question",
"api"
] | 2023-06-28T09:19:56Z | 2023-06-29T14:13:16Z | null | severo |
huggingface/dataset-viewer | 1,445 | Remove `.valid` from `/valid` endpoint? | We recently added to fields to `/valid`:
- `viewer`: all the datasets that have a valid dataset viewer
- `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview
And the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets.
Shoul... | https://github.com/huggingface/dataset-viewer/issues/1445 | closed | [
"question",
"api"
] | 2023-06-28T09:17:13Z | 2023-07-26T15:47:35Z | null | severo |
pytorch/kineto | 775 | Profile particular functions / lines | Hey, is there a way to profile particular functions or code lines with one profiler i.e. not to have separate `with profile as..`statements around each of them?
Something similar to the [NVIDIA nvtx markers](https://docs.nvidia.com/cuda/profiler-users-guide/).
Use case:
Want to profile only particular activity su... | https://github.com/pytorch/kineto/issues/775 | closed | [
"question"
] | 2023-06-28T02:03:02Z | 2023-06-29T16:50:57Z | null | shradhasehgal |
pytorch/kineto | 774 | Question about step time graph in Overview page | Hi, I am wondering what 'step' on the X axis represents in the step-time graph on the overview page.
I set my profiling schedule with 5 steps for 'active', yet the profiling results only include time for step 0 and not steps 0 - 4.
Could you clarify what 'step' here refers to if not each of the step numbers th... | https://github.com/pytorch/kineto/issues/774 | closed | [
"question",
"plugin"
] | 2023-06-28T01:22:30Z | 2024-04-23T15:28:39Z | null | shradhasehgal |
pytorch/tutorials | 2,493 | [BUG] - ax_multiobjective_nas_tutorial.ipynb fails | ERROR: type should be string, got "\r\nhttps://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html\r\n\r\n### Describe the bug\r\n\r\nHi,\r\n\r\nI am trying to get the [ax_multiobjective_nas_tutorial.ipnb tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) running on my local machine. I came until experiment running part without any problem, but when I start running the experiment, all the trials fail. I didn't change anything in the original notebook. This is the output:\r\n\r\n\r\n\r\nI tried running it on Google colab but got the same error.\r\n\r\n\r\n\r\nFull log:\r\n\r\n---------------------------------------------------------------------------\r\nFailureRateExceededError Traceback (most recent call last)\r\nCell In[11], line 1\r\n----> 1 scheduler.run_all_trials()\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:999), in Scheduler.run_all_trials(self, timeout_hours, idle_callback)\r\n 992 if self.options.total_trials is None:\r\n 993 # NOTE: Capping on number of trials will likely be needed as fallback\r\n 994 # for most stopping criteria, so we ensure `num_trials` is specified.\r\n 995 raise ValueError( # pragma: no cover\r\n 996 \"Please either specify `num_trials` in `SchedulerOptions` input \"\r\n 997 \"to the `Scheduler` or use `run_n_trials` instead of `run_all_trials`.\"\r\n 998 )\r\n--> 999 for _ in self.run_trials_and_yield_results(\r\n 1000 max_trials=not_none(self.options.total_trials),\r\n 1001 timeout_hours=timeout_hours,\r\n 1002 idle_callback=idle_callback,\r\n 1003 ):\r\n 1004 pass\r\n 1005 return self.summarize_final_result()\r\n\r\nFile 
[~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:854](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:854), in Scheduler.run_trials_and_yield_results(self, max_trials, ignore_global_stopping_strategy, timeout_hours, idle_callback)\r\n 849 n_remaining_to_run = max_trials\r\n 850 while (\r\n 851 not self.should_consider_optimization_complete()[0]\r\n 852 and n_remaining_to_run > 0\r\n 853 ):\r\n--> 854 if self.should_abort_optimization():\r\n 855 yield self._abort_optimization(num_preexisting_trials=n_existing)\r\n 856 return\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:712](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:712), in Scheduler.should_abort_optimization(self)\r\n 707 \"\"\"Checks whether this scheduler has reached some intertuption [/](https://file+.vscode-resource.vscode-cdn.net/) abort\r\n 708 criterion, such as an overall optimization timeout, tolerated failure rate, etc.\r\n 709 \"\"\"\r\n 710 # if failure rate is exceeded, raise an exception.\r\n 711 # this check should precede others to ensure it is not skipped.\r\n--> 712 self.error_if_failure_rate_exceeded()\r\n 714 # if optimization is timed out, return True, else return False\r\n 715 timed_out = (\r\n 716 self._timeout_hours is not None\r\n 717 and self._latest_optimization_start_timestamp is not None\r\n (...)\r\n 720 >= not_none(self._timeout_hours) * 60 * 60 * 1000\r\n 721 )\r\n\r\nFile [~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779](https://file+.vscode-resource.vscode-cdn.net/home/emre/Desktop/NXP/NAS/tpot/src/~/anaconda3/envs/tpot/lib/python3.10/site-packages/ax/service/scheduler.py:779), in Scheduler.error_if_failure_rate_exceeded(self, force_check)\r\n 771 if 
self._num_trials_bad_due_to_err > num_bad_in_scheduler [/](https://file+.vscode-resource.vscode-cdn.net/) 2:\r\n 772 self.logger.warn(\r\n 773 \"MetricFetchE INFO: Sweep aborted due to an exceeded error rate, \"\r\n 774 \"which was primarily caused by failure to fetch metrics. Please \"\r\n 775 \"check if anything could cause your metrics to be flakey or \"\r\n 776 \"broken.\"\r\n 777 )\r\n--> 779 raise self._get_failure_rate_exceeded_error(\r\n 780 num_bad_in_scheduler=num_bad_in_scheduler,\r\n 781 num_ran_in_scheduler=num_ran_in_scheduler,\r\n 782 )\r\n\r\nFailureRateExceededError: Failure rate exceeds the tolerated trial failure rate of 0.5 (at least 8 out of first 8 trials failed). Checks are triggered both at the end of a optimization and if at least 5 trials have failed.\r\n\r\n\r\nWhat do you think might be the problem here? Thank you.\r\n\r\nBest,\r\nEmre\r\n\r\n### Describe your environment\r\n\r\nUbuntu " | https://github.com/pytorch/tutorials/issues/2493 | closed | [
"question",
"ax"
] | 2023-06-27T23:09:05Z | 2023-06-28T17:46:51Z | null | ekurtgl |
huggingface/diffusers | 3,882 | How to use models like chilloutmix to do inpainting task? | I tried as https://huggingface.co/docs/diffusers/api/diffusion_pipeline mentioned:
`text2img = StableDiffusionPipeline.from_pretrained("/data/cx/ysp/aigc-smart-painter/models/chilloutmix_NiPrunedFp32Fix")
inpaint = StableDiffusionInpaintPipeline(**text2img.components)
seger = RawSeger()
REST_API_URL = 'http://local... | https://github.com/huggingface/diffusers/issues/3882 | closed | [
"stale"
] | 2023-06-27T15:25:31Z | 2023-08-05T15:04:07Z | null | AdamMayor2018 |
huggingface/diffusers | 3,881 | How many images and how many epochs are required to fine tune LORA for stable diffusion on custom image dataset | I am trying to finetune LORA on a movie dataset , but I am using custom dataset which has 3-4 movie characters , instead of using the actual names of the actor we are using in movie name of the characters , how big the dataset would be required in terms of total number of images, and number of images per character and ... | https://github.com/huggingface/diffusers/issues/3881 | closed | [
"stale"
] | 2023-06-27T11:05:53Z | 2023-08-04T15:03:17Z | null | atharmzaalo2023 |
pytorch/TensorRT | 2,062 | ❓ [Question] "When the performance of an int8 model improves compared to an fp32 model after QAT" | ## ❓ Question
<!-- Your question -->
I have a question because there is something I do not understand during the QAT.
code ref: https://pytorch.org/TensorRT/_notebooks/vgg-qat.html#4
Phenomenon: The model with QAT applied and the simple TRT-converted model without QAT show higher accuracy than the fp32 model.... | https://github.com/pytorch/TensorRT/issues/2062 | closed | [
"question",
"No Activity",
"component: quantization"
] | 2023-06-27T08:20:34Z | 2023-10-09T00:02:22Z | null | JongSeok553 |
pytorch/data | 1,192 | Is torchdata still being actively developed? | No commits since June 7 (3 weeks ago). And @ejguan mentioned in https://github.com/pytorch/data/issues/1184#issuecomment-1593476769 they and @NivekT, the primary contributors, are no longer working on it.
Can anyone comment on whether torchdata will continue to be developed or supported? | https://github.com/meta-pytorch/data/issues/1192 | closed | [] | 2023-06-26T21:51:48Z | 2023-07-24T02:41:31Z | 6 | lendle |
huggingface/peft | 636 | How to save full model weights and not just the adapters ? | ### System Info
peft==0.4.0.dev0
I'm not sure if this should be a bug report, so sorry if this is not convenient.
According to the `save_pretrained`method docstring, this saves the adapter model only and not the full model weights, is there an option where I can save the full model weights ? The use case is that ... | https://github.com/huggingface/peft/issues/636 | closed | [] | 2023-06-26T15:30:48Z | 2025-03-13T11:52:23Z | null | azayz |
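(Editor's note: the issue above asks how to obtain full weights rather than adapter-only checkpoints. Conceptually, a LoRA adapter can be folded into the base weight as W' = W + (alpha/r)·B·A, after which only the dense weights need saving. The sketch below demonstrates that identity with plain `torch` and illustrative shapes, without depending on the PEFT API:)

```python
import torch

# Hypothetical shapes: a base linear layer with a rank-8 LoRA adapter.
d_in, d_out, r, alpha = 64, 32, 8, 16
base = torch.nn.Linear(d_in, d_out, bias=False)
lora_A = torch.randn(r, d_in) * 0.01  # down-projection
lora_B = torch.zeros(d_out, r)        # up-projection (zero-initialized in LoRA)
lora_B[0, 0] = 0.5                    # pretend training updated it

scaling = alpha / r
with torch.no_grad():
    # Fold the low-rank update into the dense weight matrix.
    merged = base.weight + scaling * (lora_B @ lora_A)

# A forward pass through the merged matrix matches base output plus
# adapter output, so saving `merged` alone preserves the fine-tuned model.
x = torch.randn(4, d_in)
y_two_path = x @ base.weight.T + scaling * (x @ lora_A.T @ lora_B.T)
y_merged = x @ merged.T
print(torch.allclose(y_two_path, y_merged, atol=1e-5))  # True
```

This is the arithmetic behind merge-then-save workflows; exact method names in PEFT may differ by version.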
huggingface/peft | 631 | How to train multiple LoRAs at once? | Hi! I would like to train multiple LoRAs at once (for some reason). Although `requires_grad` is True for all LoRA weight matrices, only the first LoRA weight matrix will calculate the gradient, and the others will not calculate the gradient - and will not be updated. How can I train them in one forward process?
1. I... | https://github.com/huggingface/peft/issues/631 | closed | [
"enhancement"
] | 2023-06-26T09:30:16Z | 2023-08-18T13:41:32Z | null | meteorlin |
huggingface/optimum | 1,135 | Donut document parsing export to onnx does not work. | ### System Info
```shell
optimum==1.8.8
python==3.11.3
system linux
```
### Who can help?
The donut export does not work with the following commands, does anybody know how to get this running or know about the status.
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/... | https://github.com/huggingface/optimum/issues/1135 | closed | [
"bug"
] | 2023-06-26T08:57:01Z | 2023-06-26T10:17:32Z | 3 | casperthuis |
huggingface/peft | 630 | How to switch to P-Tuning v2 | We can find the `P-Tuning v2` in
https://github.com/huggingface/peft/blob/8af8dbd2ec9b4b8f664541e9625f898db7c7c78f/README.md?plain=1#L29
But how can I switch to `P-Tuning v2`? | https://github.com/huggingface/peft/issues/630 | closed | [
"solved"
] | 2023-06-26T08:52:42Z | 2023-08-04T15:03:30Z | null | jiahuanluo |
pytorch/pytorch | 104,159 | how to optimize torch.argwhere? | `t0 = time.time()
xx = torch.argwhere(x) ## x.shape = (15120,150) x.device = cuda:0 and the gpu is gtx1050
print(time.time() - t0)`
the output is always near 0.15s,how can i reduce the cost time ? or there is other high efficient methods to replace argwhere?
cc @albanD | https://github.com/pytorch/pytorch/issues/104159 | closed | [
"module: performance",
"triaged",
"module: python frontend"
] | 2023-06-25T15:12:53Z | 2023-06-28T18:10:17Z | null | Soikie |
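(Editor's note: for the `torch.argwhere` question above, note that `torch.argwhere(x)` is equivalent to `torch.nonzero(x)`. Both produce an output whose size depends on the data, which forces a device-to-host synchronization on CUDA tensors, so the measured time is often synchronization overhead rather than kernel work. A minimal sketch:)

```python
import torch

def indices_of_nonzero(x: torch.Tensor) -> torch.Tensor:
    # torch.argwhere(x) and torch.nonzero(x) return the same index tensor;
    # on CUDA, the data-dependent output size forces a host sync.
    return torch.nonzero(x)

x = torch.zeros(15120, 150)  # shape from the issue; CPU here for illustration
x[0, 0] = 1.0
x[10, 5] = 1.0

idx = indices_of_nonzero(x)
print(idx.shape)  # → torch.Size([2, 2]): two nonzero entries, (row, col) each
```

If the downstream computation only needs a mask rather than explicit indices, staying with boolean masking avoids the synchronization entirely.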
pytorch/torchx | 735 | With Volcano, why or when to use TorchX? | ## ❓ Questions and Help
### Question
We can run Pytorch DDP or elastic with just Volcano, right? What does TorchX offer differently from Volcano?
| https://github.com/meta-pytorch/torchx/issues/735 | closed | [] | 2023-06-25T07:54:40Z | 2023-07-12T20:41:59Z | 2 | zxcware |
huggingface/optimum | 1,134 | ValueError: ..set the option `trust_remote_code=True` to remove this error | ### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
###... | https://github.com/huggingface/optimum/issues/1134 | closed | [
"bug"
] | 2023-06-24T12:47:35Z | 2023-07-06T16:38:30Z | 5 | diptenduLF |
pytorch/tutorials | 2,487 | [BUG] No ways provided to replicate fps on retrained models. | ### Add Link
https://pytorch.org/tutorials/intermediate/realtime_rpi.html
### Describe the bug
I am getting 25-30fps on my rpi4 with provided snippet.
However, after finetuning mobilenet_v2 and applying:
```
# Quantize the model
quantized_model = torch.quantization.quantize_dynamic(
model, {torch.nn.L... | https://github.com/pytorch/tutorials/issues/2487 | open | [
"bug",
"module: vision"
] | 2023-06-24T12:04:23Z | 2023-06-26T20:29:24Z | 2 | Huxwell |
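(Editor's note: the quantization snippet in the issue above is truncated. A complete, self-contained version of the same `quantize_dynamic` call is sketched below with a stand-in model; note that dynamic quantization only replaces the listed module types, chiefly `Linear`/`LSTM`, so conv-heavy models such as mobilenet_v2 gain little from it — which may explain the unreplicated fps:)

```python
import torch

# Stand-in model; the tutorial fine-tunes mobilenet_v2, but any module works.
model = torch.nn.Sequential(
    torch.nn.Linear(32, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)
model.eval()

# Dynamic quantization swaps the listed module types for int8 versions.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 32)
out_int8 = quantized(x)
print(out_int8.shape)
```

For convolution-dominated networks, static (post-training) quantization or the tutorial's pre-quantized weights are usually needed to recover the reported throughput.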
huggingface/chat-ui | 322 | Chat using WizardCoder | Hello,
Can you please post an example of .env.local for:
WizardLM/WizardCoder-15B-V1.0 | https://github.com/huggingface/chat-ui/issues/322 | open | [] | 2023-06-23T18:44:07Z | 2023-08-14T20:52:39Z | 2 | vitalyshalumov |
huggingface/chat-ui | 321 | Chat-UI not loading Tailwind colors. | **Problem**
When specifying `PUBLIC_APP_COLOR` in either the `.env` or the `.env.local` file, the chat-UI color does not change regardless of which color is used. Even when `PUBLIC_APP_COLOR=blue` as set in this repository, the chat-UI color does not match with TailwindCSS's blue color palette:
**TailwindCSS bl... | https://github.com/huggingface/chat-ui/issues/321 | closed | [
"question",
"front"
] | 2023-06-23T15:54:43Z | 2023-09-18T13:12:15Z | null | ckanaar |
huggingface/peft | 622 | LoRA results in 4-6% lower performance compared to full fine-tuning | I am working on fine-tuning LLMs (6B to 40B parameters) using the LoRA framework on an instruction tuning dataset comprising of instructions corresponding to ~20 tasks (a mix of factual as well as open-ended tasks). The input to the model consists of a conversation snippet between two individuals along with a task-spec... | https://github.com/huggingface/peft/issues/622 | closed | [
"question"
] | 2023-06-23T10:50:24Z | 2023-07-24T12:12:18Z | null | digvijayingle016 |
huggingface/setfit | 389 | gradient_accumulation | Is there a way in setFitTrainer to change the gradient_accumulation like you can do in the regular Trainer class in TrainingArguments? Also just in general I am looking for tips to make training faster. | https://github.com/huggingface/setfit/issues/389 | closed | [
"question"
] | 2023-06-22T21:18:37Z | 2023-11-11T05:32:34Z | null | zackduitz |
huggingface/datasets | 5,982 | 404 on Datasets Documentation Page | ### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
#... | https://github.com/huggingface/datasets/issues/5982 | closed | [] | 2023-06-22T20:14:57Z | 2023-06-26T15:45:03Z | 2 | kmulka-bloomberg |
huggingface/chat-ui | 317 | Issues when trying to deploy on cPanel (shared hosting) | Hello there,
Is there something special to do to be able to deploy chat-ui on a shared hosting using cPanel?
I tried using the Node.JS Apps Manager as follows

But even when switching my entry point to ser... | https://github.com/huggingface/chat-ui/issues/317 | closed | [
"support"
] | 2023-06-22T17:32:00Z | 2023-09-18T13:12:53Z | 1 | gollumeo |
huggingface/transformers.js | 161 | [Question] whisper vs. ort-wasm-simd-threaded.wasm | While looking into https://cdn.jsdelivr.net/npm/@xenova/transformers@2.2.0/dist/transformers.js I can see a reference to **ort-wasm-simd-threaded.wasm** however that one never seem to be loaded for whisper/automatic-speech-recognition ( https://huggingface.co/spaces/Xenova/whisper-web ) while it always use **ort-wasm-s... | https://github.com/huggingface/transformers.js/issues/161 | open | [
"question"
] | 2023-06-22T06:41:31Z | 2023-08-15T16:36:01Z | null | jozefchutka |
huggingface/datasets | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | ### Describe the bug
When trying to stream a dataset i get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I hav... | https://github.com/huggingface/datasets/issues/5975 | closed | [] | 2023-06-21T19:10:02Z | 2023-06-30T05:55:39Z | 9 | Veluchs |
huggingface/transformers.js | 158 | [Question] How do I use this library with ts-node? | I have a non-Web/browser-based project that uses TypeScript with ts-node.
The "pipeline" function attempts to use the JavaScript Fetch API, which is not included with NodeJS, and the code therefore fails with an error: "fetch is not defined."
The "node-fetch" package doesn't seem to provide a compatible API.
| https://github.com/huggingface/transformers.js/issues/158 | open | [
"question"
] | 2023-06-21T17:42:11Z | 2023-08-17T13:20:51Z | null | moonman239 |
pytorch/TensorRT | 2,044 | ❓ [Question] How can I install the latest version of python API? Torch and Tensorrt's CUDA dependencies conflict with each other. | ## ❓ Question
<!-- Your question -->
## What you have already tried
<!-- -->
I have already create a python=3.9 env, when I use the command 'pip install torch-tensorrt', I find that the torch version that the latest torch-tensorrt needs is 2.0.1 and the tensorrt version it needs is 8.6.1, but these two packag... | https://github.com/pytorch/TensorRT/issues/2044 | closed | [
"question",
"No Activity"
] | 2023-06-21T17:12:54Z | 2023-10-16T00:02:24Z | null | 1585231086 |
pytorch/pytorch | 103,962 | How to unwrap after auto_wrap in FSDP? | I am currently fine-tuning a LLM (LLaMA) and would like to retrieve the gradients of each weight (parameter) after every gradient update. However, I notice that weights are (auto) wrapped into stuff like “_fsdp_wrapped_module._flat_param” during training. I need to map these wrapped weights to the original LLaMA archit... | https://github.com/pytorch/pytorch/issues/103962 | open | [
"oncall: distributed",
"triaged",
"module: fsdp"
] | 2023-06-21T11:27:10Z | 2023-10-27T15:16:22Z | null | ZN1010 |
pytorch/pytorch | 103,958 | How to modify gradients of an FSDP model? | ### 📚 The doc issue
I've initially posted the question on [forum](https://discuss.pytorch.org/t/modify-gradients-of-an-fsdp-model/182159) 7 days ago, but crossposting here as well for better visibility since I couldn't get any answers there.
Hi everyone,
I have an FSDP model which has zeros in some of the `tor... | https://github.com/pytorch/pytorch/issues/103958 | closed | [
"oncall: distributed",
"module: fsdp"
] | 2023-06-21T09:33:32Z | 2025-04-03T23:45:25Z | null | eldarkurtic |
huggingface/chat-ui | 314 | 500 Internal Error | 
| https://github.com/huggingface/chat-ui/issues/314 | closed | [
"question",
"support"
] | 2023-06-21T08:58:52Z | 2023-06-22T13:13:57Z | null | kasinadhsarma |
huggingface/datasets | 5,971 | Docs: make "repository structure" easier to find | The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages. | https://github.com/huggingface/datasets/issues/5971 | open | [
"documentation"
] | 2023-06-21T08:26:44Z | 2023-07-05T06:51:38Z | 5 | severo |
huggingface/chat-ui | 313 | MongoDB | I have a free teir MongoDB acount but not sure how to get url plz help | https://github.com/huggingface/chat-ui/issues/313 | closed | [
"support"
] | 2023-06-21T07:47:18Z | 2023-06-23T08:34:42Z | 5 | Toaster496 |
pytorch/TensorRT | 2,028 | ❓ [Question] Torch-TensorRT 1.3.0 uses cuDNN 8.6.0 instead of 8.5.0 | ## ❓ Question
Hi, I am using torch-tensorRT 1.3.0, it seems it is linked to cuDNN 8.6.0 instead of 8.5.0 as described in the release note? Please find my environment setup below
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): ... | https://github.com/pytorch/TensorRT/issues/2028 | closed | [
"question",
"No Activity"
] | 2023-06-20T16:00:55Z | 2023-09-30T00:02:07Z | null | akaimody123 |
huggingface/peft | 607 | trainer with multi-gpu | I want to use trainer.predict to predict datasets by multi-gpu, but actually I only use single one gpu
when I print Seq2SeqTrainingArguments , I get

It shows 8 gpu
I check my code, when I load model, I find somethin... | https://github.com/huggingface/peft/issues/607 | closed | [
"question"
] | 2023-06-20T08:58:37Z | 2023-07-28T15:03:31Z | null | hrdxwandg |
pytorch/data | 1,190 | Dataloader2 with FullSyncIterDataPipe throws error during initilization | ### 🐛 Describe the bug
Hi, we found some strange during using Dataloader2. Here's some details about the issue.
- We are a long run training job with 8 AWS P4 nodes. It's using HuggingFace trainer.
- In HuggingFace training, it will call evaluation every `traininig_args.eval_steps` training steps.
- I overrided ... | https://github.com/meta-pytorch/data/issues/1190 | open | [] | 2023-06-19T18:25:36Z | 2023-06-22T17:30:46Z | 3 | chenxingyu-cs |
huggingface/chat-ui | 311 | Unable to build with Docker | Hey,
I'm trying to create a docker container with Chat-Ui but i'm facing a wall.
I cloned this repo in a folder on a server and modified the `.env` file, thinking that it would be easy to deploy a docker container out of it but I could not be more wrong !
After trying to build my container with `docker build -t c... | https://github.com/huggingface/chat-ui/issues/311 | closed | [
"support"
] | 2023-06-19T15:11:36Z | 2023-09-18T13:14:04Z | 1 | samichaignonmejai |
pytorch/text | 2,183 | ImportError: cannot import name 'Field' from 'torchtext.data' | ## ❓ Questions and Help
**Description**
I'm using pytorch2.0.0, the version of torchtext is 0.15.2, when I import "Field" and "BucketIterator" in the code(`from torchtext.data import Field, BucketIterator`), I got an error from this sentence: `ImportError: cannot import name 'Field' from ' torchtext.data' (D:\ML_Py... | https://github.com/pytorch/text/issues/2183 | open | [] | 2023-06-19T11:28:42Z | 2023-08-20T06:14:30Z | 2 | MrMoe830 |
huggingface/chat-ui | 310 | Dockerfile issue : can't modify .env.local before building the docker | Hey, I'm having an issue building chat-ui dockerfile.
Indeed, i have to point my DB and my endpoints (or my HF token) in the .env.local file, but the file is built after running the `npm install`, therefore I can't modify my .env.local before building my Docker.
The issues are that both my connection with mongoDB and... | https://github.com/huggingface/chat-ui/issues/310 | open | [
"support"
] | 2023-06-19T10:48:04Z | 2023-07-05T03:09:16Z | 1 | samichaignonmejai |
huggingface/chat-ui | 309 | 'Task not found in this model' when running another model | Hello there,
I tried to change the original model to guanaco-33d (also tried with the 65-b) but I always end up having the error "Task not found in this model".
Here's what I changed in the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/openassistant-gua... | https://github.com/huggingface/chat-ui/issues/309 | closed | [
"support",
"models"
] | 2023-06-19T09:42:41Z | 2023-06-23T12:27:50Z | 1 | gollumeo |
huggingface/chat-ui | 308 | 'Task not found' when trying to use the guacano-33b model | Hello there,
I tried to change the original model, so my team can work with the guanaco-33b model. But now, I always end up having "Task not found for this model" errors.
Here's what I changed on the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/opena... | https://github.com/huggingface/chat-ui/issues/308 | closed | [] | 2023-06-19T09:38:55Z | 2023-06-19T09:39:08Z | 0 | gollumeo |
huggingface/chat-ui | 307 | Add API endpoints documentation | We want to make it easy for people to build cool apps on top of chat-ui, and this requires API specs that are easily accessible.
I'm not sure what tools are available in the sveltekit ecosystem for this. My first guess would be to generate an openAPI spec somehow from our server endpoints (or do it manually if that ... | https://github.com/huggingface/chat-ui/issues/307 | open | [
"documentation",
"enhancement",
"back",
"p2"
] | 2023-06-19T09:08:19Z | 2024-05-29T13:43:10Z | 5 | nsarrazin |
pytorch/tutorials | 2,478 | TransformerEncoder is not causal | ### Add Link
https://pytorch.org/tutorials/beginner/transformer_tutorial.html

for language modeling, src_mask should be mask future words
### Describe the bug
is there anything wrong?
### Describe your enviro... | https://github.com/pytorch/tutorials/issues/2478 | closed | [
"bug",
"module: torchtext",
"medium",
"docathon-h2-2023"
] | 2023-06-18T15:26:46Z | 2023-11-10T22:31:04Z | 10 | bigheary |
huggingface/api-inference-community | 295 | What is the ratelimit for inference api for pro users? | What is the rate limit for inference API for pro users?
Also can we use the endpoint for prod, which makes 3 to 10 RPS? | https://github.com/huggingface/api-inference-community/issues/295 | closed | [] | 2023-06-18T07:17:23Z | 2023-06-19T09:01:02Z | null | bigint |
huggingface/chat-ui | 304 | Code blocks | How do code blocks like img attached work under the hood?
Is it the model that generates ``` & it gets detected and converted to code?
Or is it the UI/Backend that detects code and converts it to look like a code block?
<img width="434" alt="Screenshot 2023-06-17 at 3 26 39 PM" src="https://github.com/huggingfac... | https://github.com/huggingface/chat-ui/issues/304 | closed | [
"question"
] | 2023-06-17T13:27:20Z | 2023-09-18T13:17:47Z | null | Muennighoff |
huggingface/optimum | 1,118 | Corrupted-tflite-weights while getting a model from huggingface | ### System Info
```shell
System: MacOS
Onnx: 1.14
tensorflow: 2.11
While converting a model from hugging face to tflite using huggingface-cli, the model conversion ran okay, but later in inferencing(in python and on edge-device), the model started producing random results, as if it wasn't trained at all.
Virt... | https://github.com/huggingface/optimum/issues/1118 | open | [
"bug"
] | 2023-06-16T18:56:06Z | 2023-06-19T05:18:10Z | 1 | saurabhkumar8112 |
huggingface/pytorch-pretrained-BigGAN | 20 | Is the model trained on truncated noise? What was input noise vector characteristics for training? | Hi,
I have noticed in the "utils.py" line 32, you truncated the normal noise in the range [-2,2] by this line of code:
`values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32)`
Could you please let me know whether the pre-trained model is also trained using this truncated... | https://github.com/huggingface/pytorch-pretrained-BigGAN/issues/20 | open | [] | 2023-06-16T08:02:52Z | 2023-06-16T08:02:52Z | null | MHVali |
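(Editor's note: whether this particular checkpoint was trained on truncated noise is for the maintainers to confirm, but in the BigGAN paper truncation is described as a sampling-time trick applied to a model trained on ordinary N(0, I) noise. The repo's helper from `utils.py` can be reproduced and rescaled by a truncation factor like so — a sketch, with illustrative argument names:)

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_noise(batch_size: int, dim_z: int,
                    truncation: float = 1.0, seed: int = 0) -> np.ndarray:
    # Same construction as utils.py line 32: a standard normal truncated to
    # [-2, 2]; the truncation factor then rescales samples at inference time.
    state = np.random.RandomState(seed)
    values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state)
    return (truncation * values).astype(np.float32)

z = truncated_noise(4, 128, truncation=0.4)
print(z.min() >= -0.8 and z.max() <= 0.8)  # → True
```

Smaller truncation values trade sample diversity for fidelity, which only makes sense if the generator saw the full (or [-2, 2]-clipped) noise range during training.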
pytorch/text | 2,182 | Explicit dependend on portalocker? | Shouldn't torch/text add an explicit dependency on portalocker now? Without it, I get:
```
= 979 failed, 204 passed, 12 skipped, 1 deselected, 6 warnings in 495.47s (0:08:15) =
```
that's >80% failed tests, and probably does not represent a functional torchtext?
_Originally posted by @h-vetinari in https://githu... | https://github.com/pytorch/text/issues/2182 | open | [] | 2023-06-15T21:45:32Z | 2023-06-15T21:45:32Z | 0 | h-vetinari |
huggingface/chat-ui | 301 | Error when deploying on a distant server : Cannot find base config file "./.svelte-kit/tsconfig.json" | Hey,
I'm having troubles deploying HuggingChat on a distant server, when I run HuggingChat, I get the following error :
```
ai@1.0.0 start-chat-ui
> cd ../chat-ui && npm run dev -- --host 127.0.0.1
> chat-ui@0.3.0 dev
> vite dev --host 127.0.0.1
▲ [WARNING] Cannot find base config file "./.svelte-kit/ts... | https://github.com/huggingface/chat-ui/issues/301 | closed | [
"support"
] | 2023-06-15T19:55:36Z | 2023-06-19T10:50:26Z | 2 | samichaignonmejai |
huggingface/transformers.js | 150 | [Question] How to use transformers.js like the python sentence_transformers library? | Hello all,
Thanks for this great library. I've just discovered it and I'm familiar with the python sentence_transformers module. I know from experience that sentence_transformers wraps a lot of the complexity compared to using transformers directly.
Can you point to an example of using this to replace python's se... | https://github.com/huggingface/transformers.js/issues/150 | closed | [
"question"
] | 2023-06-15T15:30:49Z | 2023-06-18T15:17:04Z | null | davidtbo |
pytorch/kineto | 770 | On demand profiling example / code changes | Hi, is there an example for how we can enable on demand profiling with kineto?
The [libkineto README](https://github.com/pytorch/kineto/tree/main/libkineto) mentions that we can send a 'signal' or 'trigger' on demand profiling, but I am unclear on how we can do so from outside the PyTorch script.
Would highly ap... | https://github.com/pytorch/kineto/issues/770 | closed | [
"question"
] | 2023-06-15T04:12:22Z | 2024-04-23T15:27:23Z | null | shradhasehgal |
huggingface/chat-ui | 299 | Using HuggingChat in a JavaScript/node.js setting? | Hi, I'm not sure whether this is relevant here, but I'd like to use the HuggingChat in a personal web design project, and I'd like to access it through REST/axios, similar to this [here](https://stackoverflow.com/questions/75714587/node-js-turn-hugging-face-image-response-to-buffer-and-send-as-a-discord-attac) (stable ... | https://github.com/huggingface/chat-ui/issues/299 | closed | [] | 2023-06-15T02:59:29Z | 2023-09-18T13:19:32Z | 3 | VatsaDev |
pytorch/xla | 5,188 | Slow Device To Host Transfers | ## ❓ Questions and Help
Recently I tried ResNet-50 on TPUs using this repo and TensorFlow / Keras. The performance difference between the two was about 15% (2844.4 img/s per TPU vs 3283.52 img/s) in favor of TensorFlow / Keras. These results were with logging every _300_ iterations. When I removed the logging, the T... | https://github.com/pytorch/xla/issues/5188 | closed | [
"question",
"runtime"
] | 2023-06-14T23:01:59Z | 2025-04-30T12:53:54Z | null | MikeynJerry |