| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/unity-api | 15 | How to download the model locally and call the API | Because my internet connection is not very good, I would like to download the model to my local machine and use the Hugging Face API for calling. How can I achieve this? | https://github.com/huggingface/unity-api/issues/15 | closed | [] | 2023-08-23T08:08:40Z | 2023-11-08T10:26:34Z | null | haldon98 |
huggingface/evaluate | 485 | How to use `SubTask` with metrics that require valid `config_name` | ## Issue
Currently there does not seem to be a way to define the `config_name` of a metric for a `SubTask` inside an `evaluate.EvaluationSuite`.
## Version
evaluate version: 0.4.0
transformers version 4.32.0
Python version Python 3.10.6
## Example
For example, consider the following `EvaluationSuite... | https://github.com/huggingface/evaluate/issues/485 | open | [] | 2023-08-22T23:15:43Z | 2023-08-23T16:38:18Z | null | tybrs |
huggingface/diffusers | 4,716 | How to handle SDXL long prompt | ### Describe the bug
I am unable to use embeds prompt in order to handle prompt that is longer than 77 tokens.
### Reproduction
```python
import itertools
import os.path
import random
import string
import time
import typing as typ
import torch
from diffusers import StableDiffusionXLPipeline
from tqdm impo... | https://github.com/huggingface/diffusers/issues/4716 | closed | [
"bug"
] | 2023-08-22T16:28:25Z | 2023-08-27T02:46:18Z | null | elcolie |
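The usual workaround for prompts beyond CLIP's 77-token window is to tokenize without truncation, split the ids into 75-token chunks (leaving room for BOS/EOS), encode each chunk separately, and concatenate the resulting embeddings into `prompt_embeds`. A minimal, hedged sketch of just the chunking step (the token ids below are stand-ins, not output of the real tokenizer):

```python
def chunk_token_ids(token_ids, chunk_size=75):
    """Split token ids into windows no longer than chunk_size."""
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

ids = list(range(160))  # stand-in for a 160-token prompt
chunks = chunk_token_ids(ids)
print([len(c) for c in chunks])  # → [75, 75, 10]
```

Each chunk would then be padded, wrapped with BOS/EOS, run through the text encoder (both encoders for SDXL), and the per-chunk embeddings concatenated along the sequence axis before being passed as `prompt_embeds`.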
huggingface/candle | 547 | How to turn off automatic translation for whisper | When I input Chinese wav file , whisper outputs the English translation
```
ls@LeeeSes-MacBook-Air ~/r/candle (main)> cargo run --release --features accelerate --example whisper -- --model small --language zh --input /Users/ls/Downloads/output.wav
Finished release [optimized] target(s) in 0.38s
Running `ta... | https://github.com/huggingface/candle/issues/547 | closed | [] | 2023-08-22T11:16:45Z | 2023-08-22T18:52:40Z | null | LeeeSe |
huggingface/trl | 674 | How to load the model and the checkpoint after training the model? | I trained my model using the code in sft_trainer.py, and I saved the checkpoint and the model in the same dir.
But I don't know how to load the model with the checkpoint. Or I just want to know whether `trainer.save_model(script_args.output_dir)` means I have saved a trained model, not just a checkpoint?
I try many w... | https://github.com/huggingface/trl/issues/674 | closed | [] | 2023-08-22T10:31:01Z | 2023-11-27T21:34:30Z | null | ccwdb |
huggingface/text-generation-inference | 899 | text-generation-launcher tool how to use multi gpu cards? | ### System Info
text-generation-launcher 1.0.0 how to use multi gpu cards?
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --model-id falcon-40b-instruct --sha... | https://github.com/huggingface/text-generation-inference/issues/899 | closed | [] | 2023-08-22T10:09:17Z | 2023-08-22T10:13:06Z | null | luefei |
huggingface/chat-ui | 411 | Chat-ui crashes TGI? | Hey!
When I deploy TGI Endpoint locally and test it with the following cli request:
`curl 127.0.0.1:8080/generate_stream \
-X POST \
-d '{"inputs":"def calculate_fibonacci(n:str):","parameters":{"max_new_tokens":100}}' \
-H 'Content-Type: application/json'`
It works without any problem. Even lo... | https://github.com/huggingface/chat-ui/issues/411 | open | [] | 2023-08-22T08:48:02Z | 2023-08-23T06:45:26Z | 0 | schauppi |
huggingface/accelerate | 1,870 | [Question] How to optimize two losses alternately with gradient accumulation? | I want to update a model by optimizing two losses alternately with gradient accumulation like this
```python
# Suppose gradient_accumulation is set to 2.
optimizer = optim(unet.parameters())
with accelerator.accumulate(unet):
outputs = unet(input)
loss1 = loss_func1(outputs)
loss1.backward()
opt... | https://github.com/huggingface/accelerate/issues/1870 | closed | [] | 2023-08-21T12:49:19Z | 2023-10-24T15:06:33Z | null | hkunzhe |
huggingface/candle | 538 | How to disable openssl-sys being included? | I would like to stop openssl-sys from being included in my project when using candle, I'm not sure how to do this. I tried adding the below to my Cargo.toml but it didn't change anything. The reason I want to do it is because I get an error when trying to compile my library to aarch64-linux-android saying that pkg-conf... | https://github.com/huggingface/candle/issues/538 | closed | [] | 2023-08-21T10:47:26Z | 2023-08-21T20:38:57Z | null | soupslurpr |
huggingface/optimum | 1,298 | Support BetterTransformer for the Baichuan LLM model | ### Feature request
is it possible to support Baichuan model with BetterTransformer?
https://huggingface.co/baichuan-inc/Baichuan-13B-Chat
### Motivation
A very popular Chinese and English large language model.
### Your contribution
hope you can achieve it. Thanks. | https://github.com/huggingface/optimum/issues/1298 | closed | [
"feature-request",
"bettertransformer",
"Stale"
] | 2023-08-21T08:18:16Z | 2025-05-04T02:17:22Z | 1 | BobLiu20 |
huggingface/candle | 533 | How to convert token to text? | Hello, thank you for this ML library in Rust. Sorry if this is a noob question, I'm new to machine learning and this is my first time trying to use a text generation model. I'm using the latest git version. In the quantized llama example, how would I convert a token to a string? I see the print_token function but I wan... | https://github.com/huggingface/candle/issues/533 | closed | [] | 2023-08-21T06:36:08Z | 2023-08-21T07:51:37Z | null | soupslurpr |
huggingface/safetensors | 333 | Slow loading of weight values from a HF model on a big-endian machine with the latest code | ### System Info
Python: 3.10
PyTorch: the latest main branch (i.e. 2.0.1+)
safetensors: 0.3.3
Platform: s390x (big-endian)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
I executed the following code using 0.3.1 and 0.3.3, and w/o safetensors.
```... | https://github.com/huggingface/safetensors/issues/333 | closed | [
"Stale"
] | 2023-08-20T18:19:44Z | 2023-12-12T01:48:51Z | 9 | kiszk |
huggingface/chat-ui | 409 | Deploy Chat UI Spaces Docker template with a PEFT adapter | I tried to accomplish this, but the container failed to launch the chat-ui app, as it seems to assume the model would be a non-adapted model.
Is there a way to make it work? | https://github.com/huggingface/chat-ui/issues/409 | closed | [
"bug",
"back"
] | 2023-08-20T05:26:50Z | 2023-09-11T09:37:29Z | 4 | lrtherond |
huggingface/datasets | 6,163 | Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32 | ### Describe the bug
I am getting the following error while I am trying to upload the CSV sheet to train a model. My CSV sheet content is exactly same as shown in the example CSV file in the Auto Train page. Attaching screenshot of error for reference. I have also tried converting the index of the answer that are inte... | https://github.com/huggingface/datasets/issues/6163 | open | [] | 2023-08-19T11:34:40Z | 2025-07-22T12:04:46Z | 2 | shishirCTC |
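The error above usually means a CSV column holds stringified lists such as `'[254,254]'` where the loader's schema expects a single int32. A hedged sketch of normalizing such a column before upload (the column name `answer_start` is a hypothetical example):

```python
import ast
import csv
import io

# A CSV whose "answer_start" column (hypothetical name) holds stringified
# lists; Arrow expects a single int32 per row.
raw = 'question,answer_start\nq1,"[254,254]"\nq2,"[17]"\n'

rows = list(csv.DictReader(io.StringIO(raw)))
for row in rows:
    value = ast.literal_eval(row["answer_start"])  # "[254,254]" -> [254, 254]
    # Keep the first element as the scalar the schema asks for.
    row["answer_start"] = value[0] if isinstance(value, list) else value

print([r["answer_start"] for r in rows])  # → [254, 17]
```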
huggingface/sentence-transformers | 2,278 | How to set the no. of epochs for fine-tuning SBERT? | Hello,
I am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. There is no loss value posted by the `fit` function from the package. Any idea how to know if the model is overfitting or underfitting the dataset after each epoch? This could help me in deciding the appropriate no. of ep...
huggingface/setfit | 409 | model_head.pkl not found on HuggingFace Hub | I got this message:
"model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference."
is there something missing or is it normal? | https://github.com/huggingface/setfit/issues/409 | closed | [
"question"
] | 2023-08-18T07:52:20Z | 2023-11-24T14:20:51Z | null | andysingal |
huggingface/autotrain-advanced | 216 | How to do inference after training llama2 | I trained the model using this command
```
autotrain llm --train --project_name 'llama2-indo-testing' \
--model meta-llama/Llama-2-7b-hf \
--data_path data/ \
--text_column text \
--use_peft \
--use_int4 \
--learning_rate 2e-4 \
--train_batch_size 2 \
--num_train_epochs 3 \
--... | https://github.com/huggingface/autotrain-advanced/issues/216 | closed | [] | 2023-08-18T04:36:37Z | 2023-12-18T15:30:38Z | null | muhammadfhadli1453 |
huggingface/diffusers | 4,662 | How to call a different scheduler when training a model from repo | I notice that the settings in train_dreambooth_lora_sdxl.py and the scheduler config from the repo seem to conflict. In the .py the noise scheduler is DDPM but whenever training starts it seems to still indicate that I am using the repo config scheduler, ie. EulerDiscreteScheduler. It used to be you could specify sched... | https://github.com/huggingface/diffusers/issues/4662 | closed | [] | 2023-08-17T21:40:10Z | 2023-08-18T04:18:11Z | null | jmaccall316 |
huggingface/transformers | 25,576 | How can I make a PR for AutoTokenizer to adapt RWKV world | ### Feature request
Usually we use our own tokenizer with the transformers pipeline,
like this https://github.com/xiaol/Huggingface-RWKV-World/blob/fca236afd5f2815b0dbe6c7ce3c92e51526e2e14/generate_hf_cfg.py#L79C1-L79C1
So far we have a lot of models using new tokenzier, using pipeline with autotokenizer is critica... | https://github.com/huggingface/transformers/issues/25576 | closed | [] | 2023-08-17T16:36:44Z | 2023-09-25T08:02:43Z | null | xiaol |
huggingface/accelerate | 1,854 | How to further accelerate training with 24 cards for 1.3b+ models using accelerate? | I found that when using DeepSpeed Zero (2 or 3) to train 1.3 billion and larger models (such as llama-7b or gpt-neo-1.3b), the training time for 8 * 32G V100 is almost the same as 24 * 32G V100 (I guess it's because of the additional communication overhead introduced by DeepSpeed). Is there any way to further accelerat... | https://github.com/huggingface/accelerate/issues/1854 | closed | [] | 2023-08-17T15:01:09Z | 2023-09-24T15:05:52Z | null | Micheallei |
huggingface/datasets | 6,156 | Why not use self._epoch as seed to shuffle in distributed training with IterableDataset | ### Describe the bug
Currently, distributed training with `IterableDataset` needs to pass fixed seed to shuffle to keep each node use the same seed to avoid overlapping.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question ... | https://github.com/huggingface/datasets/issues/6156 | closed | [] | 2023-08-17T10:58:20Z | 2023-08-17T14:33:15Z | 3 | npuichigo |
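The pattern this issue asks about, deriving the shuffle seed from the epoch, amounts to combining a fixed base seed with the epoch number: every node computes the same permutation within an epoch, yet the order still changes between epochs. A stdlib sketch of the idea (an illustration, not the `datasets` internals):

```python
import random

def epoch_shuffle(items, base_seed, epoch):
    """Deterministic shuffle: identical on every node, different each epoch."""
    rng = random.Random(base_seed + epoch)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

data = list(range(8))
# Every node derives the same order for a given (seed, epoch) pair...
assert epoch_shuffle(data, base_seed=42, epoch=0) == epoch_shuffle(data, base_seed=42, epoch=0)
# ...while bumping the epoch changes the seed, and hence (almost surely) the order.
print(epoch_shuffle(data, base_seed=42, epoch=0))
```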
huggingface/diffusers | 4,643 | when I load a controlnet model, where is the inference code? | I have read the ControlNet code in diffusers/models/controlnet.py.
But when I load a ControlNet weight, where is the inference code?
Thanks. | https://github.com/huggingface/diffusers/issues/4643 | closed | [] | 2023-08-17T02:50:59Z | 2023-08-17T04:55:28Z | null | henbucuoshanghai |
huggingface/dataset-viewer | 1,689 | Handle breaking change in google dependency? | See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616
Should we downgrade the dependency, or fix the datasets? | https://github.com/huggingface/dataset-viewer/issues/1689 | closed | [
"question",
"dependencies",
"P2"
] | 2023-08-16T14:31:28Z | 2024-02-06T14:59:59Z | null | severo |
huggingface/optimum | 1,286 | Support BetterTransformer for the GeneFormer model | ### Feature request
is it possible to support GeneFormer model with BetterTransformer?
https://huggingface.co/ctheodoris/Geneformer
### Motivation
It's a new paper with an active community in the Hugging Face repository. The training and inference speed is not fast enough.
### Your contribution
Nothing at this ti... | https://github.com/huggingface/optimum/issues/1286 | closed | [
"feature-request",
"bettertransformer",
"Stale"
] | 2023-08-16T03:32:48Z | 2025-05-07T02:13:16Z | 1 | seyedmirnezami |
huggingface/diffusers | 4,618 | How to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0 ? | I want to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0
I downloaded dreamshaperXL10_alpha2Xl10.safetensors file and tried to use :
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
'./dreamshaperXL10_alpha2Xl10.safetensors',
controlnet=controlnet,
use_safetensors=True,
to... | https://github.com/huggingface/diffusers/issues/4618 | closed | [] | 2023-08-15T13:44:54Z | 2023-08-22T01:31:37Z | null | arnold408 |
huggingface/peft | 826 | what is alpha ?? alpha not in paper. | ### Feature request
https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py#L57
this alpha not in paper :
https://arxiv.org/abs/2106.09685
where can i learn this alpha ??
thank you !!
### Motivation
rt
### Your contribution
rt | https://github.com/huggingface/peft/issues/826 | closed | [] | 2023-08-15T09:47:58Z | 2023-09-23T15:03:19Z | null | XuJianzhi |
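For what it's worth, alpha does appear briefly in the paper: section 4.1 scales the low-rank update `ΔWx` by `α/r`. A toy, hedged sketch of that scaling with scalar "weights" (real LoRA uses matrices `B` and `A`):

```python
def lora_forward(x, w0, b, a, r, lora_alpha):
    """h = W0*x + (alpha/r) * B*A*x, collapsed to scalars for illustration."""
    scaling = lora_alpha / r
    return w0 * x + scaling * (b * a) * x

# With lora_alpha == 2*r the low-rank update is applied with a factor of 2.
out = lora_forward(x=2.0, w0=1.0, b=0.5, a=0.4, r=8, lora_alpha=16)
print(out)
```

The paper notes that tuning α is roughly the same as tuning the learning rate for the adapter, which is why implementations expose it as a hyperparameter next to `r`.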
huggingface/optimum | 1,285 | Merge patch into autogptq | ### Feature request
Currently, there is a patch to get GPTQ quantization working:
```
# !pip install -q git+https://github.com/fxmarty/AutoGPTQ.git@patch-act-order-exllama
```
Is there a plan to try and merge that into the autogptq repo?
### Motivation
autogptq is slow to install. This is easily solved by usin... | https://github.com/huggingface/optimum/issues/1285 | closed | [] | 2023-08-14T16:24:14Z | 2023-08-23T17:17:46Z | 5 | RonanKMcGovern |
huggingface/candle | 443 | What is the minimal requirements of Intel MKL version? | Hello, Thanks for the great work!
I've got an error while compiling with the `-features mkl` option.
For example `cargo install --git https://github.com/huggingface/candle.git candle-examples --examples bert -F mkl`
The error said
```bash
= note: /usr/bin/ld: /workspaces/Kuberian/searcher/target/debug/deps/... | https://github.com/huggingface/candle/issues/443 | closed | [] | 2023-08-14T14:09:01Z | 2024-02-03T16:43:34Z | null | iwanhae |
huggingface/pytorch-image-models | 1,917 | how to change SqueezeExcite in efficientnet | I want to create efficientnet networks using timm, where SqueezeExcite contains three parts ['Conv2d','SiLU','Conv2d'], but it contains four parts ['Conv2d','SiLU','Conv2d','sigmoid'], How should I modify it, thank you
| https://github.com/huggingface/pytorch-image-models/issues/1917 | closed | [
"enhancement"
] | 2023-08-14T11:45:05Z | 2023-08-14T14:13:26Z | null | Yang-Changhui |
huggingface/setfit | 408 | No tutorial or guideline for Few-shot learning on multiclass text classification | I just want to use SBERT for Few Shot multiclass text classification, however I couldn't see any tutorial or explanation for it. Can you explain to me that which "multi_target_strategy" and loss function should I use for multi-class text classification ? | https://github.com/huggingface/setfit/issues/408 | open | [
"documentation",
"question"
] | 2023-08-14T09:02:18Z | 2023-10-03T20:29:25Z | null | ByUnal |
huggingface/diffusers | 4,594 | latents.requires_grad is false in my custom pipeline no matter what. | Hi, in my quest to make a flexible pipeline that can easily add new features instead of creating a pipeline for every variation, I made the following:
```
class StableDiffusionRubberPipeline(StableDiffusionPipeline):
call_funcs=[]
def __init__(
self,
vae: AutoencoderKL,
text_enc... | https://github.com/huggingface/diffusers/issues/4594 | closed | [] | 2023-08-13T15:02:22Z | 2023-08-14T12:11:36Z | null | alexblattner |
huggingface/datasets | 6,153 | custom load dataset to hub | ### System Info
Kaggle notebook
I transformed the dataset:
```
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
```
to
formatted_dataset:
```
Dataset({
features: ['message_tree_id', 'message_tree_text'],
num_rows: 33143
})
```
but I would like to know how to upload it to the Hub
### ... | https://github.com/huggingface/datasets/issues/6153 | closed | [] | 2023-08-13T04:42:22Z | 2023-11-21T11:50:28Z | 5 | andysingal |
huggingface/chat-ui | 398 | meta-llama/Llama-2-7b-chat-hf requires a pro subscription? | I ran the instructions to run locally, and ran into this.
I've been working on my own ui, and thought I'd give this a shot, and if that's the route huggingface is going, I find that very disappointing. I was expecting the model to be hosted locally and routed through fastapi or something | https://github.com/huggingface/chat-ui/issues/398 | closed | [] | 2023-08-12T03:56:55Z | 2023-08-12T04:03:11Z | 1 | thistleknot |
huggingface/chat-ui | 397 | Dynamically adjust `max_new_tokens` | Hi,
I am running a 4096 context length model behind TGI interface. My primary use case is summarization wherein some of my requests can be quite large.
I have set `truncate` to 4000 and that leaves `max_new_tokens` to be at most 4096-4000=96.
So, even if my input length is not 4000 tokens long, say it is only ... | https://github.com/huggingface/chat-ui/issues/397 | open | [
"question",
"back"
] | 2023-08-11T16:37:10Z | 2023-09-18T12:49:49Z | null | abhinavkulkarni |
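The behavior requested above can be approximated client-side by computing the token budget per request instead of using a fixed `max_new_tokens`. A hedged sketch with the issue's numbers (4096-token context, inputs truncated to 4000) plus a hypothetical cap:

```python
def dynamic_max_new_tokens(input_len, context_len=4096, cap=1024):
    """Grant each request whatever room the context window leaves, up to a cap."""
    return max(0, min(cap, context_len - input_len))

print(dynamic_max_new_tokens(4000))  # → 96   (long input: only 96 tokens remain)
print(dynamic_max_new_tokens(1000))  # → 1024 (short input: bounded by the cap)
```

This requires counting the input tokens with the model's tokenizer before issuing the request, so the numbers here are assumptions about that count.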
huggingface/chat-ui | 396 | Long chat history | How do you manage a long chat history?
Do you truncate the history at some point and call the API only with the most recent messages? | https://github.com/huggingface/chat-ui/issues/396 | closed | [
"question"
] | 2023-08-11T15:52:43Z | 2023-09-18T12:50:07Z | null | keidev |
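A common answer to the question above is to keep the system prompt and drop the oldest turns until the remaining history fits a token budget. A hedged sketch, using a crude whitespace word count as a stand-in for real token counting:

```python
def truncate_history(messages, budget):
    """Keep the system message plus the most recent turns that fit the budget."""
    def count(msg):
        return len(msg["content"].split())  # crude stand-in for a tokenizer

    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    used = sum(count(m) for m in system)
    kept = []
    for msg in reversed(turns):          # walk from the newest turn backwards
        if used + count(msg) > budget:
            break
        kept.append(msg)
        used += count(msg)
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "be concise"},
    {"role": "user", "content": "one two three four"},
    {"role": "assistant", "content": "five six"},
    {"role": "user", "content": "seven eight nine"},
]
print([m["content"] for m in truncate_history(history, budget=8)])
# → ['be concise', 'five six', 'seven eight nine']
```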
huggingface/trl | 638 | How many and what kind of gpus needed to run the example? | For every script or project in the example directory, could you please tell us how many and what kind of gpus needed to run the experiments? Thanks a lot. | https://github.com/huggingface/trl/issues/638 | closed | [] | 2023-08-11T14:12:34Z | 2023-09-11T08:22:33Z | null | Wallace-222 |
huggingface/chat-ui | 395 | Errors out every time I try to add a new model | I'm currently having a huge issue. I'm trying to easily add models to the chat ui. I have made a folder and added a specific model in that folder, but I'm unable to actually get to use that model. I'm not sure what I'm doing wrong. I've stared at the docs for a few hours re-reading and also looked it up on YouTube but... | https://github.com/huggingface/chat-ui/issues/395 | closed | [
"support"
] | 2023-08-11T12:55:03Z | 2023-09-11T09:35:55Z | 3 | Dom-Cogan |
huggingface/dataset-viewer | 1,662 | Should we change 500 to another status code when the error comes from the dataset? | See #1661 for example.
Same for the "retry later" error: is 500 the most appropriate status code? | https://github.com/huggingface/dataset-viewer/issues/1662 | open | [
"question",
"api",
"P2"
] | 2023-08-10T15:57:03Z | 2023-08-14T15:36:27Z | null | severo |
huggingface/datasets | 6,139 | Offline dataset viewer | ### Feature request
The dataset viewer feature is very nice. It enables to the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the hub. Is there a way to create dataset viewer offline? I.e. to run a code that will open some kind of html or something t... | https://github.com/huggingface/datasets/issues/6139 | closed | [
"enhancement",
"dataset-viewer"
] | 2023-08-10T11:30:00Z | 2024-09-24T18:36:35Z | 7 | yuvalkirstain |
huggingface/text-generation-inference | 807 | How to create a NCCL group on Kubernetes? | I am deploying text-generation-inference on EKS with each node having 1 NVIDIA A10G GPU.
How should I create a group such that a model like llama-2-13b-chat is able to use GPUs across nodes for inference? | https://github.com/huggingface/text-generation-inference/issues/807 | closed | [
"Stale"
] | 2023-08-10T09:29:59Z | 2024-04-17T01:45:28Z | null | rsaxena-rajat |
huggingface/chat-ui | 394 | Internal server error: Unexpected token ] in JSON at position 1090 | 1:58:23 AM [vite] Error when evaluating SSR module /src/lib/server/models.ts:
|- SyntaxError: Unexpected token ] in JSON at position 1090
at JSON.parse (<anonymous>)
at eval (/home/chat-ui/src/lib/server/models.ts:46:14)
at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks... | https://github.com/huggingface/chat-ui/issues/394 | closed | [
"support"
] | 2023-08-10T02:01:49Z | 2023-09-11T09:36:29Z | 2 | Ichigo3766 |
huggingface/trl | 627 | How to use the reward model? | How to use the reward model in the RLHF PPO stage?
Could you provide an example?
thank you very much | https://github.com/huggingface/trl/issues/627 | closed | [] | 2023-08-09T02:52:23Z | 2023-08-12T02:04:17Z | null | zhuxiaosheng |
huggingface/transformers.js | 243 | QW | Hi Joshua, how are you doing? I hope everything's good. I just wanted to ask if you know anybody who needs any help or has any issues in their Node.js backend code or their servers; it would be a great pleasure to help | https://github.com/huggingface/transformers.js/issues/243 | closed | [
"question",
"off-topic"
] | 2023-08-08T21:46:13Z | 2023-08-09T19:55:55Z | null | jedLahrim |
huggingface/peft | 808 | What is the correct way to apply LoRA on a custom model (not models on HuggingFace)? | Hi, most models in examples are `transformers` pretrained models.
However, I'm using a custom model and applying LoRA to it:
```
model = MyPytorchModel()
model = PeftModel(model, peft_config)
======= training... ========
model.save_pretrained(save_path)
```
Then, I reload my custom model and merge lora weight:
... | https://github.com/huggingface/peft/issues/808 | closed | [] | 2023-08-08T17:10:36Z | 2025-08-01T21:14:25Z | null | DtYXs |
huggingface/diffusers | 4,533 | How to debug custom pipeline locally ? | Hi,
I built diffusers from source, and I am using ControlNet. However, diffusers seems not to load the custom pipeline from ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` as I expected. Instead, it seems to download from the hub and cache a new ```stable_diffusion_controlnet_img2img.py`... | https://github.com/huggingface/diffusers/issues/4533 | closed | [] | 2023-08-08T15:34:40Z | 2023-08-09T12:17:42Z | null | pansanity666 |
huggingface/setfit | 405 | how to set the device id | How do I run multiple training runs on different GPU devices? I don't see any argument which allows me to set this. Thank you! | https://github.com/huggingface/setfit/issues/405 | open | [] | 2023-08-08T08:25:36Z | 2023-08-08T08:25:36Z | null | vahuja4 |
huggingface/transformers.js | 239 | [Question] Adding Custom or Unused Token | <!-- QUESTION GOES HERE -->
Is it possible to add a custom range as a token?
For example for price_list of $100-$200
Can we add a custom vocab like this in vocab list
vocab list:
nice
hello
__$100-$200__
fish
... | https://github.com/huggingface/transformers.js/issues/239 | closed | [
"question"
] | 2023-08-07T18:32:20Z | 2023-08-07T20:38:15Z | null | hadminh |
huggingface/chat-ui | 390 | Can I hook it up to a retrieval system for a document chatbot? | I want to use the instructor-xl text embedding model and use FAISS to create and retrieve from a vector store. Sort of a chatbot for documents or a domain specific chatbot. Any ideas on how I can do it? | https://github.com/huggingface/chat-ui/issues/390 | open | [] | 2023-08-07T15:22:10Z | 2024-02-22T12:55:41Z | 9 | adarshxs |
huggingface/diffusers | 4,507 | How to train stable-diffusion-xl-base-1.0 without lora? | Hi, I want to train `stable-diffusion-xl-base-1.0` without lora, how to do this?
I can run `train_text_to_image_lora_sdxl.py` .
But `train_text_to_image.py` with `MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"` will raise an error:
```
diffusers/models/unet_2d_condition.py:836 in forward ... | https://github.com/huggingface/diffusers/issues/4507 | closed | [] | 2023-08-07T10:38:24Z | 2023-08-14T07:25:49Z | null | KimmiShi |
huggingface/text-generation-inference | 782 | What is the correct parameter combination for using dynamic RoPE scaling ? | Hi Team, First of all thanks for the awesome piece of software !!
I want to use `upstage/Llama-2-70b-instruct-v2` model with `--max-input-length=8192 --max-total-tokens=10240` which originally supports `max_position_embeddings=4096`.
I tried running the following command :
```
docker run -it --rm --gpus all... | https://github.com/huggingface/text-generation-inference/issues/782 | closed | [] | 2023-08-07T05:58:14Z | 2023-09-06T13:59:36Z | null | hrushikesh198 |
huggingface/transformers.js | 238 | [Question] Can you list all available models using tranformers.js? | Hey π
I was wondering if it's possible to list available models using the `transformers.js` package?
e.g.
> pipeline.getAvailableModels()
| https://github.com/huggingface/transformers.js/issues/238 | closed | [
"question"
] | 2023-08-07T01:53:35Z | 2023-08-13T23:27:55Z | null | sambowenhughes |
huggingface/chat-ui | 389 | Inject assistant message in the begining of the chat | Hey, is it possible to start a conversation with an assistant message showing up as the first message in the chat? | https://github.com/huggingface/chat-ui/issues/389 | closed | [
"enhancement",
"question"
] | 2023-08-06T17:25:25Z | 2023-09-18T12:52:16Z | null | matankley |
huggingface/diffusers | 4,494 | How to convert a diffusers XL pipeline to a checkpoint or safetensors | I need to fine-tune the stable diffusion unet or something like that. Then I have to convert the pipeline into a ckpt for webui usage.
Previously I used `scripts/convert_diffusers_to_original_stable_diffusion.py` for the conversion.
But currently it cannot convert the XL pipeline correctly and the webui may raise errors.
Thanks i... | https://github.com/huggingface/diffusers/issues/4494 | closed | [
"stale",
"contributions-welcome"
] | 2023-08-06T13:06:54Z | 2023-11-06T04:42:19Z | null | FeiiYin |
huggingface/chat-ui | 388 | Is it down? | It doesn't load for me, and neither does your website | https://github.com/huggingface/chat-ui/issues/388 | closed | [] | 2023-08-06T08:54:47Z | 2023-08-08T06:05:48Z | 6 | BenutzerEinsZweiDrei |
huggingface/transformers.js | 237 | [Question] Ipynb for ONNX conversion? | Could you please share the code you're using to convert models to onnx? I know you say in your cards you're using Optimum, but when I try to do it myself, I get much larger onnx files (talking about disk space here) and I don't know what I'm doing wrong. | https://github.com/huggingface/transformers.js/issues/237 | closed | [
"question"
] | 2023-08-06T08:45:19Z | 2023-08-06T09:17:02Z | null | Mihaiii |
huggingface/transformers.js | 233 | [Docs] Mention demo (GitHub pages) in Readme | I love your old demo page on GitHub pages (https://xenova.github.io/transformers.js/), as one can easily play with the models and copy code if needed.
Is there any reason it's not mentioned anymore (or not more visible) in the Readme?
(Sorry, added bug label accidentally, should be question instead) | https://github.com/huggingface/transformers.js/issues/233 | closed | [
"question"
] | 2023-08-04T10:53:48Z | 2023-12-06T15:01:38Z | null | do-me |
huggingface/datasets | 6,120 | Lookahead streaming support? | ### Feature request
From what I understand, a streaming dataset currently pulls the data and processes it as it is requested.
This can introduce significant latency delays when data is loaded into the training process, needing to wait for each segment.
While the delays might be dataset specific (or even mappi... | https://github.com/huggingface/datasets/issues/6120 | open | [
"enhancement"
] | 2023-08-04T04:01:52Z | 2023-08-17T17:48:42Z | 1 | PicoCreator |
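The lookahead behavior this request describes can be sketched with a background thread that pulls upcoming samples into a bounded queue while the consumer is busy. This is an illustration of the idea, not the `datasets` implementation:

```python
import queue
import threading

def prefetch(iterable, lookahead=4):
    """Yield items from iterable, pulling up to `lookahead` items ahead
    in a background thread while the consumer processes earlier ones."""
    q = queue.Queue(maxsize=lookahead)
    _END = object()  # sentinel marking the end of the stream

    def worker():
        for item in iterable:
            q.put(item)  # blocks once the lookahead buffer is full
        q.put(_END)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not _END:
        yield item

print(list(prefetch(range(5))))  # → [0, 1, 2, 3, 4]
```

The bounded `maxsize` keeps memory flat: the worker stalls when the buffer is full and resumes as the consumer drains it.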
huggingface/diffusers | 4,459 | how to convert a picture to a text embedding, without training an image model like Textual Inversion | clip text: tokens -> text_embedding -> text_features
clip img: img -> img_embedding -> img_features
how to invert without training every time: img -> text_embedding | https://github.com/huggingface/diffusers/issues/4459 | closed | [
"stale"
] | 2023-08-04T01:46:25Z | 2023-09-12T15:03:45Z | null | yanchaoguo |
huggingface/datasets | 6,116 | [Docs] The "Process" how-to guide lacks description of `select_columns` function | ### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the gui... | https://github.com/huggingface/datasets/issues/6116 | closed | [
"enhancement"
] | 2023-08-03T13:45:10Z | 2023-08-16T10:02:53Z | null | unifyh |
huggingface/diffusers | 4,453 | How to convert diffusers SDXL lora into safetensors that works with AUTO1111 webui | ### Describe the bug
I trained a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py
I get great results when using the output .bin with the diffusers inference code.
How can I convert the .bin to .safetensors that can be loa... | https://github.com/huggingface/diffusers/issues/4453 | closed | [
"bug",
"stale"
] | 2023-08-03T11:23:25Z | 2023-09-12T15:03:46Z | null | wangqyqq |
huggingface/text-generation-inference | 765 | How to benchmark a warmed local model with docker | ### System Info
Using docker run to connect to the local model, and it worked:
`docker run --rm --name tgi --runtime=nvidia --gpus all -p 5001:5001 -v data/nfs/gdiist/model:/data k8s-master:5000/text-generation-inference:0.9.3 --model-id /data/llama-7b-hf --hostname 0.0.0.0 --port 5001 --dtype float16 `
```
2... | https://github.com/huggingface/text-generation-inference/issues/765 | closed | [] | 2023-08-03T09:28:07Z | 2023-10-16T01:50:10Z | null | Laych7 |
huggingface/diffusers | 4,448 | Outpainting results from diffusers' StableDiffusionControlNetPipeline are much worse than those from A1111 webui. How to improve? | I am trying to outpaint some human images (mainly the lower-body part) with SD 1.5 conditioned on ControlNet's inpainting and openpose. I have been using A1111 webui with ControlNet extension and it has been working quite well:
Here are my settings in the webui:
<img width="774" alt="Screenshot 2023-08-03 at 15 08 30... | https://github.com/huggingface/diffusers/issues/4448 | closed | [] | 2023-08-03T07:19:12Z | 2023-08-30T05:35:03Z | null | xiyichen |
huggingface/transformers | 25,280 | How to download files from HF spaces | ### System Info
google colab
### Who can help?
@sanchit-gandhi @rock
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproductio... | https://github.com/huggingface/transformers/issues/25280 | closed | [] | 2023-08-03T07:02:03Z | 2023-09-11T08:02:40Z | null | andysingal |
huggingface/diffusers | 4,445 | How to finetune lora model ? | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
If I have a model from civitai, how do I finetune it in sd1.5 and sdxl?
**Describe the solution you'd like**
A clear and concise description of what you w... | https://github.com/huggingface/diffusers/issues/4445 | closed | [
"stale"
] | 2023-08-03T01:55:15Z | 2023-09-12T15:03:49Z | null | kelisiya |
huggingface/sentence-transformers | 2,268 | How to chop up a long document into chunks of max sequence length? | Given a long document, how do I chop it up into chunks so that each chunk is within the [max sequence length](https://www.sbert.net/examples/applications/computing-embeddings/README.html#input-sequence-length) of a model? | https://github.com/huggingface/sentence-transformers/issues/2268 | open | [] | 2023-08-02T16:50:09Z | 2023-08-04T18:47:22Z | null | siddhsql |
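One common answer to the chunking question above, sketched here under stated assumptions: tokenize the document, then slide a fixed-size window with some overlap so context at chunk boundaries is not lost. The whitespace split below is a stand-in for the model's real tokenizer, and `chunk_tokens` is an illustrative helper, not sentence-transformers API.

```python
def chunk_tokens(tokens, max_len=256, overlap=32):
    """Split a token sequence into windows of at most max_len tokens,
    sharing `overlap` tokens between consecutive windows so that
    context at the boundaries is not lost."""
    if overlap >= max_len:
        raise ValueError("overlap must be smaller than max_len")
    chunks = []
    step = max_len - overlap
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks

# Whitespace split as a stand-in for the model's tokenizer.
document = " ".join(f"tok{i}" for i in range(1000))
chunks = chunk_tokens(document.split(), max_len=256, overlap=32)
print(len(chunks))  # 5 windows covering all 1000 tokens
```

With a real model you would then encode each chunk separately and either index the chunks individually or pool their embeddings, depending on the retrieval setup.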
huggingface/dataset-viewer | 1,602 | Parallel steps update incoherence | See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6
Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` ... | https://github.com/huggingface/dataset-viewer/issues/1602 | closed | [
"bug",
"question",
"P1"
] | 2023-08-02T13:44:35Z | 2024-02-06T14:52:06Z | null | severo |
huggingface/transformers | 25,264 | [Question] How to load AutoFeatureExtractor on GPU? | Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification
I intend to extract features of my data with the following codes
```
feature_extractor = AutoFeatureExtractor.from_pretrained("/workspace/models/wav2vec2-lar... | https://github.com/huggingface/transformers/issues/25264 | closed | [] | 2023-08-02T12:26:20Z | 2023-09-11T08:02:43Z | null | treya-lin |
huggingface/datasets | 6,111 | raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." ) | ### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for exa... | https://github.com/huggingface/datasets/issues/6111 | closed | [] | 2023-08-02T09:17:29Z | 2023-08-29T02:00:28Z | 3 | 2catycm |
huggingface/transformers | 25,257 | how to print out the data loaded by each epoch during trainer.train() training? | ### Feature request
please tell to me,
how to print out the data loaded by each epoch during trainer.train() training?
### Motivation
how to print out the data loaded by each epoch during trainer.train() training?
### Your contribution
how to print out the data loaded by each epoch during trainer.train() train... | https://github.com/huggingface/transformers/issues/25257 | closed | [] | 2023-08-02T09:13:55Z | 2023-09-11T08:02:47Z | null | ahong007007 |
huggingface/tokenizers | 1,310 | How to train BPE tokenizer with multiple CPU | Hi
I tried to train a BPE tokenizer with about 10GB text, but it seems extremely slow(runs more than 24 hours and not finished yet).
Is there a way to turn on multi CPU training (from htop there only 1 CPU used)?
Here is the code.
```
from tokenizers import Tokenizer, decoders, models, normalizers, pre_to... | https://github.com/huggingface/tokenizers/issues/1310 | closed | [] | 2023-08-02T08:14:07Z | 2023-08-02T09:10:44Z | null | voidmagic |
huggingface/chat-ui | 380 | Issue with Text Generation in Stream Mode | Hi
The text generation in stream mode is not functioning as expected on my development server, which is running behind a reverse proxy with the correct base path defined. I'm only receiving a single response in one go, whereas I expect a continuous stream of text.
Please assist me in resolving this issue. Thank y... | https://github.com/huggingface/chat-ui/issues/380 | closed | [
"support"
] | 2023-08-01T19:07:50Z | 2023-09-10T12:22:16Z | 10 | bilal-rachik |
huggingface/transformers | 25,245 | BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text(caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP.) Thanks in advance. | ### System Info
linux, python 3.8+, pytorch '1.13.0+cu116'
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
... | https://github.com/huggingface/transformers/issues/25245 | closed | [] | 2023-08-01T18:21:07Z | 2023-09-21T08:03:25Z | null | wingz1 |
huggingface/dataset-viewer | 1,591 | Should we convert the datasets to other formats than parquet? | One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c | https://github.com/huggingface/dataset-viewer/issues/1591 | closed | [
"question",
"feature request",
"P2"
] | 2023-08-01T13:47:12Z | 2024-06-19T14:19:01Z | null | severo |
huggingface/optimum | 1,243 | transformers.convert_graph_to_onnx.quantize equivalent with optimum? | Historically, I've used the following to quantize a model after training:
```python
import sys
from pathlib import Path
from transformers.convert_graph_to_onnx import quantize
input_file = sys.argv[1]
print("Performing quantization of model '{}'".format(input_file))
quantized_model_path = quantize(Path(inp... | https://github.com/huggingface/optimum/issues/1243 | closed | [] | 2023-08-01T07:59:03Z | 2023-08-01T21:45:46Z | 2 | jobergum |
huggingface/sentence-transformers | 2,266 | How to measure the quality of embeddings? | I am using `sentence-transformers` to encode large texts into input embeddings for a text classification task. However, I'm unsure how to compare the quality of embeddings when evaluating multiple models' performance. Could you please provide some advice? | https://github.com/huggingface/sentence-transformers/issues/2266 | open | [] | 2023-08-01T06:59:41Z | 2023-09-01T06:12:39Z | null | sgwhat |
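There is no single metric for the question above; the usual proxies are downstream task performance (here, the classification accuracy itself) and agreement with known similarity judgments, as in the STS benchmarks. A minimal pure-Python sanity check, assuming you already have embeddings from each candidate model (the helper names are illustrative, not sentence-transformers API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def pair_accuracy(similar_pairs, dissimilar_pairs):
    """Fraction of (similar, dissimilar) combinations in which the
    known-similar pair receives the higher cosine score; 1.0 means the
    embeddings rank every similar pair above every dissimilar one."""
    wins = total = 0
    for sa, sb in similar_pairs:
        for da, db in dissimilar_pairs:
            total += 1
            if cosine(sa, sb) > cosine(da, db):
                wins += 1
    return wins / total

# Toy vectors standing in for one model's sentence embeddings.
similar = [([1.0, 0.9], [0.9, 1.0])]
dissimilar = [([1.0, 0.0], [0.0, 1.0])]
print(pair_accuracy(similar, dissimilar))  # 1.0
```

Running this with each candidate model's embeddings on the same labeled pairs gives a comparable number per model.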
huggingface/trl | 597 | How to run using multi-GPUs? | Hi, I'm not so familiar with the training method using multi-GPUs.
I have a machine with 8 A100s; what should I do to run full-parameter SFT on a llama2-7B model?
How to use the trl tool?
Thanks. | https://github.com/huggingface/trl/issues/597 | closed | [] | 2023-08-01T06:36:27Z | 2023-08-21T03:39:46Z | null | jyC23333 |
huggingface/diffusers | 4,407 | how to store hub_download on local directory? | ### Describe the bug
running:
from huggingface_hub import hf_hub_url, hf_hub_download
```
# Generate/show the URL
hf_hub_url(
repo_id="XpucT/Deliberate",
filename="Deliberate-inpainting.safetensors",
)
# Download the file
hf_hub_download(
repo_id="XpucT/Deliberate",
filename="Deliberate-inpain... | https://github.com/huggingface/diffusers/issues/4407 | closed | [
"bug"
] | 2023-08-01T05:21:39Z | 2023-08-01T05:55:46Z | null | andysingal |
huggingface/datasets | 6,108 | Loading local datasets got strangely stuck | ### Describe the bug
I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a json structure only containing one key `text` (yeah it is a dataset for NLP model). The code snippet is as:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=... | https://github.com/huggingface/datasets/issues/6108 | open | [] | 2023-08-01T02:28:06Z | 2024-12-31T16:01:00Z | 7 | LoveCatc |
huggingface/chat-ui | 379 | Issue with Chat UI when deploying Text Generation API on a remote server |
I am facing an issue with the Chat UI while using the Text Generation API. Everything works correctly when the Text Generation API is deployed on localhost, but the Chat UI doesn't work when the Text Generation API is deployed on a remote server.
Steps to reproduce the problem:
1. Deploy the Text Generation API o... | https://github.com/huggingface/chat-ui/issues/379 | open | [
"support"
] | 2023-07-31T17:22:49Z | 2023-09-18T12:55:45Z | 0 | bilal-rachik |
huggingface/chat-ui | 378 | Add support for endpoints requiring client authentication using PKI | Hi,
Are you open to adding support for endpoints that require client authentication using PKI? I have a requirement to use client authentication with our backend inference server.
Currently authentication config from each endpoint is passed to the headers arg of the fetch command: https://github.com/huggingface/... | https://github.com/huggingface/chat-ui/issues/378 | closed | [
"question",
"front"
] | 2023-07-31T17:13:53Z | 2023-08-15T18:51:29Z | null | cambriancoder |
huggingface/chat-ui | 377 | Provide a login button, for existing users? | I just changed to another laptop, and didn't find a login button to see and work with my account from Huggingface. After I used once the Chat, I got a message to Login. I would suggest making it more traditional to have a username and a login button on the left sidebar. | https://github.com/huggingface/chat-ui/issues/377 | closed | [
"enhancement",
"front"
] | 2023-07-31T12:08:52Z | 2023-08-02T12:19:30Z | 1 | tobiashochguertel |
huggingface/datasets | 6,104 | HF Datasets data access is extremely slow even when in memory | ### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fat... | https://github.com/huggingface/datasets/issues/6104 | open | [] | 2023-07-31T11:12:19Z | 2023-08-01T11:22:43Z | 1 | NightMachinery |
huggingface/diffusers | 4,382 | How to Overcome the Influence of the Seed and Enhance the Role of Text Prompts | I fine-tuned a text2img model using LoRA, based on the v1.5 version of Stable Diffusion. The generated results are very good.
But they can't be controlled. It seems that the generated results depend more on the seed. Changing the seed changes the image, and if I don't change the seed and only change the text prompt... | https://github.com/huggingface/diffusers/issues/4382 | closed | [] | 2023-07-31T07:41:03Z | 2023-08-02T09:23:50Z | null | XiaoyuZhuang |
huggingface/transformers.js | 230 | [Question] distiluse-base-multilingual-cased-v2 - wrong vector dimension (768 vs 512) in onnx version? | I was just playing around with the model [distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) and noticed that your onnx versions both (quantized and normal) produce embeddings with 768-dimensional vectors instead of 512.
Example:
index.html
... | https://github.com/huggingface/transformers.js/issues/230 | closed | [
"question"
] | 2023-07-30T16:49:36Z | 2024-10-18T13:30:12Z | null | do-me |
huggingface/trl | 592 | How to load a custom structure model? | hello! When I run the following code, I get a message that only `AutoModelForCausalLMWithValueHead` and `AutoModelForSeq2SeqLMWithValueHead` are supported. But these two structures seem to only be able to load the specified pre-trained models.
`ppo_trainer = PPOTrainer(config, gen_model, gen_ref_model, tokenizer)`
My model... | https://github.com/huggingface/trl/issues/592 | closed | [] | 2023-07-30T15:42:18Z | 2023-08-31T11:00:56Z | null | estuday |
huggingface/datasets | 6,099 | How do I get "amazon_us_reviews" | ### Feature request
I have been trying to load the 'amazon_us_reviews' dataset but have been unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1... | https://github.com/huggingface/datasets/issues/6099 | closed | [
"enhancement"
] | 2023-07-30T11:02:17Z | 2023-08-21T05:08:08Z | 10 | IqraBaluch |
huggingface/trl | 591 | how to use SFTTrainer for multi-turn dialogues? | I want to use SFTTrainer to train on multi-turn dialogues. Does it apply to llama-2-7b-chat-hf? Is it the same as llama-2-7b-hf for instruction tuning?
My dataset consists of multi-turn dialogues.
the prompt is:
```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>
{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ u... | https://github.com/huggingface/trl/issues/591 | closed | [] | 2023-07-30T05:47:40Z | 2023-08-01T06:21:04Z | null | moseshu |
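A minimal pure-Python sketch of assembling the Llama-2 template quoted in the record above, so that each completed turn is wrapped in `<s>[INST] ... [/INST] ... </s>` and the pending user turn stays open for generation. This is not an official TRL helper; the function name and the idea of feeding such strings to a trainer via a formatting function are assumptions of this sketch.

```python
def build_llama2_prompt(system_prompt, turns, next_user_msg):
    """Assemble a multi-turn Llama-2 chat prompt.

    `turns` is a list of completed (user_msg, model_answer) pairs; the
    system prompt is folded into the first user message, each finished
    turn is closed with </s>, and the pending user turn is left open.
    """
    sys_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    parts = []
    for i, (user_msg, answer) in enumerate(turns):
        prefix = sys_block if i == 0 else ""
        parts.append(f"<s>[INST] {prefix}{user_msg} [/INST] {answer} </s>")
    prefix = sys_block if not turns else ""
    parts.append(f"<s>[INST] {prefix}{next_user_msg} [/INST]")
    return "".join(parts)

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    [("Hi!", "Hello, how can I help?")],
    "Summarize our chat.",
)
```

For training (rather than generation), the same assembly can be applied to each full dialogue so the dataset text field already carries the special tokens.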
huggingface/transformers.js | 228 | [Question] Chaining automatic-speech recognition tasks sometimes produces weird output? | Hi! I'm using the automatic-speech recognition task with vanilla nodejs (20) for (almost) live transcription (after the person has stopped talking)
This is the setup I'm using as per the docs:
```
const multilingual = true;
const model = "base";
const modelName = `Xenova/whisper-${model}${multilingual ? "" : "... | https://github.com/huggingface/transformers.js/issues/228 | closed | [
"question"
] | 2023-07-30T01:32:26Z | 2024-12-07T14:45:02Z | null | funiel |
huggingface/diffusers | 4,363 | how to properly load sd_xl_base_1.0_0.9vae.safetensors | ### Describe the bug
hi, how should i load sd_xl_base_1.0_0.9vae.safetensors given the namespace is the same as 1.0 one?
### Reproduction
N/A
### Logs
_No response_
### System Info
ec2
### Who can help?
@sayakpaul @patrick | https://github.com/huggingface/diffusers/issues/4363 | closed | [
"bug",
"stale"
] | 2023-07-29T21:16:34Z | 2023-10-18T15:14:58Z | null | MaxTran96 |
huggingface/optimum-neuron | 151 | any example of how to use with Accelerate? | All the examples seem to replace `Trainer` but we are using `Accelerate`. Much appreciated! :) | https://github.com/huggingface/optimum-neuron/issues/151 | closed | [
"Stale"
] | 2023-07-29T05:51:20Z | 2024-12-02T08:05:47Z | null | jiangts |
huggingface/transformers.js | 226 | voice recognition | @xenova hello bro, I hope everything is going well for you. I just wanted to ask whether we can recognize an audio file from its buffer for formats other than wav only, for example an mp3 file buffer or a flac file?
```
// Load audio data
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
... | https://github.com/huggingface/transformers.js/issues/226 | closed | [
"question"
] | 2023-07-28T16:14:50Z | 2023-08-20T23:43:31Z | null | jedLahrim |
huggingface/chat-ui | 372 | Can I add i18n support? | Would be great to support the standard i18n in frontend, we can contribute with it, do you see that it would be an accepted contribution?
Maybe using this lib [kaisermann/svelte-i18n](https://github.com/kaisermann/svelte-i18n/blob/main/docs/Getting%20Started.md) | https://github.com/huggingface/chat-ui/issues/372 | closed | [
"enhancement",
"question",
"front"
] | 2023-07-28T11:56:55Z | 2024-06-17T18:07:41Z | null | juancgalvis |
huggingface/chat-ui | 371 | Improve the UI to be flexible width? | The left sidebar is growing here, and I wish I could make it wider. Same for the middle part, which is centered, and sometimes I have to scroll to the side to see the whole code block because the middle part has a left and right margin, which I can't control.
It would be great if we could set the percent value fo... | https://github.com/huggingface/chat-ui/issues/371 | open | [] | 2023-07-28T11:27:27Z | 2023-07-28T15:16:38Z | 2 | tobiashochguertel |
huggingface/accelerate | 1,786 | Problem with how to save memory on 2 GPUs on one machine | When I run my script on one GPU at batch_size 8, nothing bad happens, but when I use accelerate to launch my script on 2 GPUs at the same batch_size, both processes terminate because CUDA runs out of memory.
Here is my config :
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
dynamo_config:
dynamo_ba... | https://github.com/huggingface/accelerate/issues/1786 | closed | [] | 2023-07-28T09:42:43Z | 2023-09-15T15:06:17Z | null | Kangkang625 |
huggingface/text-generation-inference | 720 | How to make sure the local tgi server's performance is ok | ### Feature request
Hello, I just deployed the TGI server in a docker container on a single A100 as described in the docs and ran a load test with bloom-7b1, but the performance falls far behind other inference servers, like vllm and fastertransformer, in the same environment and conditions. So, if there is something like an offici... | https://github.com/huggingface/text-generation-inference/issues/720 | closed | [
"Stale"
] | 2023-07-28T07:57:18Z | 2024-04-25T01:58:42Z | null | lichangW |
huggingface/transformers.js | 224 | [Question] Merge whisper-base.en main and output_attentions? | I can see there is `output_attentions` branch on https://huggingface.co/Xenova/whisper-base.en/tree/main and the difference from `main` seems it can support `return_timestamps: 'word'`.
Is there a plan/schedule to merge these two?
Or these two branches are incompatible to be merged together? In such case, will bo... | https://github.com/huggingface/transformers.js/issues/224 | closed | [
"question"
] | 2023-07-28T07:44:52Z | 2023-09-04T20:59:21Z | null | jozefchutka |
huggingface/blog | 1,352 | How to train the autoformer? | Dear authors,
I have read your blog at https://huggingface.co/blog/autoformer, it is great to explain why transformer is better than Dlinear.
However, I am wondering how to train my own Autoformer instead of using a pretrained Autoformer.
Best regards | https://github.com/huggingface/blog/issues/1352 | open | [] | 2023-07-28T03:28:33Z | 2023-12-07T17:40:09Z | null | AppleMax1992 |
huggingface/text-generation-inference | 718 | How to make sure Flash and PagedAttention are running? | ### System Info
I am running the following for Llama v2 and was wondering how I can make sure PagedAttention and FlashAttention are running. Is there any flag to set, or are they enabled by default?
```
docker run --gpus all --shm-size 1g -p $PORT:80 \
-v $PWD/data:/data \
-e HUGGING_FACE_HUB_T... | https://github.com/huggingface/text-generation-inference/issues/718 | closed | [] | 2023-07-27T22:55:26Z | 2023-07-28T08:19:20Z | null | HamidShojanazeri |
huggingface/text-generation-inference | 716 | How to load a private model in TGI in docker, and the inference performance difference between loading from huggingface and loading from a local directory | Hi team,
Given the access restrictions, how do we load a private model into TGI in docker?
One solution I can think of is to pre-download the model, mount the model directory, and load it into TGI. However, I found a big inference performance gap between these two methods, and could the team provide some... | https://github.com/huggingface/text-generation-inference/issues/716 | closed | [] | 2023-07-27T21:12:38Z | 2023-07-28T07:12:53Z | null | zch-cc |
huggingface/text-generation-inference | 711 | How can I tell what is wrong when a connection is refused? | Hi
I tried launching the docker container with the command below.
```
docker run --rm --name tgi --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -p 8080:80 ghcr.io/huggingface/text-generation-inference:0.9.3 --model-id decapoda-research/llama-7b-hf
```
At this moment, with netstat, I could see in host, 8080 port is already li... | https://github.com/huggingface/text-generation-inference/issues/711 | closed | [] | 2023-07-27T13:59:48Z | 2023-07-27T14:10:46Z | null | leiwen83 |
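For connection-refused reports like the one above, a quick first step is to probe the TCP port directly, which separates "nothing is listening" from application-level failures. A stdlib sketch (not part of TGI or its launcher):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open("127.0.0.1", 8080). False usually means the
# container is still loading the model, has crashed, or the port
# mapping / bind address (e.g. 0.0.0.0 vs 127.0.0.1) is wrong.
```

If the port is open but requests still fail, the next place to look is the container logs rather than the network.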