| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 33,232 | How to use Hugging Face for training: google-t5/t5-base | ### Feature request
How to use Hugging Face for training:
https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation
# What is the format and how do I write it?
def batch_collator(data):
print(data) #?????????????????????????????????????????????... | https://github.com/huggingface/transformers/issues/33232 | open | [
"Usage",
"Feature request"
] | 2024-08-31T07:41:18Z | 2024-09-09T08:45:50Z | null | gg22mm |
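A minimal sketch for the collator question above (not the official example's code): the collator receives a list of tokenized examples, and the stock `DataCollatorForSeq2Seq` already pads them into tensors; the feature values below are made up for illustration.

```python
from transformers import AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
collator = DataCollatorForSeq2Seq(tokenizer, padding=True)

# Each element is one tokenized example: input_ids/attention_mask for the source
# sentence, labels for the target sentence (token ids here are hypothetical).
features = [
    {"input_ids": [100, 200, 1], "attention_mask": [1, 1, 1], "labels": [300, 1]},
    {"input_ids": [100, 1], "attention_mask": [1, 1], "labels": [400, 500, 1]},
]
batch = collator(features)  # pads to tensors; label padding uses -100 so it is ignored by the loss
print(batch["input_ids"].shape, batch["labels"].shape)
```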
huggingface/transformers | 33,228 | How to obtain batch index of validation dataset? | Hi,
I wanted to know how would we fetch the batch id/index of the eval dataset in ```preprocess_logits_for_metrics()``` ?
Thanks in advance! | https://github.com/huggingface/transformers/issues/33228 | closed | [
"Usage"
] | 2024-08-31T00:11:13Z | 2024-10-13T08:04:26Z | null | SoumiDas |
huggingface/transformers | 33,210 | The model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve the translation effect through the encoder and decoder? thank... | ### Feature request
hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can... | https://github.com/huggingface/transformers/issues/33210 | open | [
"Feature request"
] | 2024-08-30T09:33:01Z | 2024-10-22T07:18:15Z | null | pengpengtao |
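A rough greedy-decoding sketch with onnxruntime for the question above. The file names, input names (inspect `session.get_inputs()` on your export), and the NLLB decoder start token are assumptions to verify against the actual ONNX files.

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
enc = ort.InferenceSession("encoder_model.onnx")   # hypothetical file names
dec = ort.InferenceSession("decoder_model.onnx")

inputs = tok("Hello world", return_tensors="np")
enc_hidden = enc.run(None, {"input_ids": inputs["input_ids"],
                            "attention_mask": inputs["attention_mask"]})[0]

# Start decoding from the target-language token (French here, as an example).
ids = np.array([[tok.convert_tokens_to_ids("fra_Latn")]], dtype=np.int64)
for _ in range(64):
    logits = dec.run(None, {"input_ids": ids,
                            "encoder_hidden_states": enc_hidden,
                            "encoder_attention_mask": inputs["attention_mask"]})[0]
    next_id = int(logits[0, -1].argmax())  # greedy pick of the next token
    ids = np.concatenate([ids, [[next_id]]], axis=1)
    if next_id == tok.eos_token_id:
        break
print(tok.decode(ids[0], skip_special_tokens=True))
```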
huggingface/dataset-viewer | 3,054 | Image URL detection | [`is_image_url`](https://github.com/huggingface/dataset-viewer/blob/946b0788fa426007161f2077a70b5ae64b211cf8/libs/libcommon/src/libcommon/utils.py#L131-L134) relies on a filename and extension being present, however, in some cases an image URL does not contain a filename. Example [dataset](https://huggingface.co/datase... | https://github.com/huggingface/dataset-viewer/issues/3054 | open | [
"question",
"improvement / optimization",
"P2"
] | 2024-08-29T23:17:55Z | 2025-07-04T09:37:23Z | null | hlky |
huggingface/transformers.js | 911 | Next.js example breaks with v3 | ### Question
Are there steps documented anywhere for running V3 in your app? I'm trying to test it out via these steps:
1. Pointing to the alpha in my `package.json`: `"@huggingface/transformers": "^3.0.0-alpha.10",`
2. `npm i`
3. `cd node_modules/@hugginface/transformers && npm i`
4. copy the [webpack.config.js... | https://github.com/huggingface/transformers.js/issues/911 | closed | [
"question"
] | 2024-08-29T20:17:03Z | 2025-02-16T12:35:47Z | null | stinoga |
huggingface/diffusers | 9,317 | Finetuning on dataset | dear @thedarkzeno and @patil-suraj
Thank you so much for putting your work out there. I wanted to ask: how would training work on a dataset rather than on a single instance image, as mentioned in train_dreambooth_inpaint? And can I finetune models trained from the https://github.com/CompVis/latent-diffusion repo...
"stale"
] | 2024-08-29T12:20:51Z | 2024-10-23T16:10:47Z | 4 | ultiwinter |
huggingface/optimum-quanto | 300 | How to quantize, save and load Stable Diffusion 3 model. | import torch
from optimum.quanto import qint2, qint4, qint8, quantize, freeze
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.bfloat16)
quantize(pipe.text_encoder, weights=qint4)
freeze(pip... | https://github.com/huggingface/optimum-quanto/issues/300 | closed | [
"Stale"
] | 2024-08-29T06:24:02Z | 2024-10-06T02:06:30Z | null | jainrahul52 |
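A hedged sketch for the save/load question above, based on the `quantization_map`/`requantize` helpers described in optimum-quanto's README (verify they exist in your installed version). Only the text encoder is shown; the same pattern applies per pipeline component.

```python
import json
import torch
from diffusers import StableDiffusion3Pipeline
from optimum.quanto import qint4, quantize, freeze, quantization_map, requantize
from safetensors.torch import save_file, load_file

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.bfloat16
)
quantize(pipe.text_encoder, weights=qint4)
freeze(pipe.text_encoder)

# Save the quantized state dict plus the map describing how each module was quantized.
save_file(pipe.text_encoder.state_dict(), "text_encoder.safetensors")
with open("quantization_map.json", "w") as f:
    json.dump(quantization_map(pipe.text_encoder), f)

# Load: rebuild the unquantized model, then requantize it in place from the saved artifacts.
fresh = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.bfloat16
)
with open("quantization_map.json") as f:
    qmap = json.load(f)
requantize(fresh.text_encoder, load_file("text_encoder.safetensors"), qmap)
```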
huggingface/optimum | 2,002 | Is it possible to infer the model separately through encoder.onnx and decoder.onnx | ### Feature request
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
### Motivation
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
### Your contribution
Is it possible to infer the model separately through encoder.onnx and decoder.onnx | https://github.com/huggingface/optimum/issues/2002 | open | [
"onnx"
] | 2024-08-29T03:26:20Z | 2024-10-08T15:28:59Z | 0 | pengpengtao |
huggingface/diffusers | 9,303 | [Add] VEnhancer - the interpolation and upscaler for CogVideoX-5b | ### Model/Pipeline/Scheduler description
VEnhancer, a generative space-time enhancement framework that can improve the existing T2V results.
https://github.com/Vchitect/VEnhancer
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is... | https://github.com/huggingface/diffusers/issues/9303 | open | [
"stale"
] | 2024-08-28T14:43:32Z | 2024-12-11T15:04:32Z | 3 | tin2tin |
huggingface/text-generation-inference | 2,466 | Guide on how to use TensorRT-LLM Backend | ### Feature request
Does any documentation exist, or would it be possible to add documentation, on how to use the TensorRT-LLM backend? #2458 makes mention that the TRT-LLM backend exists, and I can see that there's a Dockerfile for TRT-LLM, but I don't see any guides on how to build/use it.
### Motivation
I would l... | https://github.com/huggingface/text-generation-inference/issues/2466 | open | [] | 2024-08-28T13:24:26Z | 2025-05-18T16:23:14Z | null | michaelthreet |
huggingface/lerobot | 390 | [Feature Request] Add end effector pos field in lerobot dataset? | Aloha style joint space dataset will limit data set to the specific robot. Can we change joint space data or add a field of end effector to cartesian space data base on the robot URDF file?
It may help robotics community build a more generalized policy. | https://github.com/huggingface/lerobot/issues/390 | closed | [
"question",
"dataset",
"robots"
] | 2024-08-28T13:19:15Z | 2024-08-29T09:55:27Z | null | hilookas |
huggingface/datasets | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
... | https://github.com/huggingface/datasets/issues/7129 | closed | [] | 2024-08-28T12:27:48Z | 2024-12-06T11:32:02Z | 0 | sergiopaniego |
huggingface/diffusers | 9,299 | CUDAGRAPHs for Flux position embeddings | @yiyixuxu
Is it possible to refactor the Flux positional embeddings so that we can fully make use of CUDAGRAPHs?
```bash
skipping cudagraphs due to skipping cudagraphs due to cpu device (device_put). Found from :
File "/home/sayak/diffusers/src/diffusers/models/transformers/transformer_flux.py", line 469,... | https://github.com/huggingface/diffusers/issues/9299 | closed | [] | 2024-08-28T11:33:16Z | 2024-08-29T19:37:17Z | 0 | sayakpaul |
huggingface/transformers.js | 906 | Unsupported model type: jais | ### Question
### System Info
macOS, node v20.10, @xenova/transformers 2.17.2
### Environment/Platform
- [ ] Website/web-app
- [ ] Browser extension
- [x] Server-side (e.g., Node.js, Deno, Bun)
- [ ] Desktop app (e.g., Electron)
- [ ] Other (e.g., VSCode extension)
### Description
```
Error: Unsuppor... | https://github.com/huggingface/transformers.js/issues/906 | closed | [
"question"
] | 2024-08-28T09:46:17Z | 2024-08-28T21:01:10Z | null | SherifElfadaly |
huggingface/trl | 1,986 | How to convert DPO data to KTO data | ### Feature request
How to convert DPO data to KTO data.
### Motivation
How to convert DPO data to KTO data.
### Your contribution
How to convert DPO data to KTO data. | https://github.com/huggingface/trl/issues/1986 | closed | [] | 2024-08-28T06:23:13Z | 2024-08-28T09:02:35Z | null | dotsonliu |
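One possible conversion, sketched under the assumption that the DPO set has prompt/chosen/rejected columns and that the KTO trainer expects the unpaired prompt/completion/label format: each DPO pair becomes two KTO rows, the chosen answer labeled True and the rejected one False.

```python
from datasets import Dataset

dpo_ds = Dataset.from_dict({  # hypothetical DPO-formatted pairs
    "prompt": ["What is 2+2?"],
    "chosen": ["4"],
    "rejected": ["5"],
})

def dpo_to_kto(dpo_dataset):
    rows = {"prompt": [], "completion": [], "label": []}
    for ex in dpo_dataset:
        rows["prompt"] += [ex["prompt"], ex["prompt"]]
        rows["completion"] += [ex["chosen"], ex["rejected"]]
        rows["label"] += [True, False]
    return Dataset.from_dict(rows)

kto_ds = dpo_to_kto(dpo_ds)
print(kto_ds[0], kto_ds[1])
```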
huggingface/datasets | 7,128 | Filter Large Dataset Entry by Entry | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset.... | https://github.com/huggingface/datasets/issues/7128 | open | [
"enhancement"
] | 2024-08-27T20:31:09Z | 2024-10-07T23:37:44Z | 4 | QiyaoWei |
huggingface/huggingface_hub | 2,491 | How to upload folders into a repo in the most effective way - continue on error, resume, max speed | Hello. I have the tasks below for uploading; however, I am not sure if they are the most effective way of doing this.
#### This cell is used to upload single file into a repo with certain name
```
from huggingface_hub import HfApi
api = HfApi()
api.upload_file(
path_or_fileobj=r"/home/Ubuntu/apps/stable-diffusion-w... | https://github.com/huggingface/huggingface_hub/issues/2491 | closed | [
"bug"
] | 2024-08-27T16:36:04Z | 2024-08-28T08:24:22Z | null | FurkanGozukara |
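For the upload question above, a sketch using `upload_folder`, which uploads a whole directory and skips files already present on the Hub, so re-running after an error effectively resumes. The folder path and repo id are placeholders.

```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional speed-up; needs `pip install hf_transfer`, must be set before the import below
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="/home/Ubuntu/apps/models",  # hypothetical local folder
    repo_id="user/my-models",                # hypothetical target repo
    repo_type="model",
)
```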
huggingface/Google-Cloud-Containers | 73 | Download model files from GCS (Instead of HF Hub) | When deploying an HF model to Vertex AI, I would like to download a fine-tuned model from GCS, instead of from HF Hub, like so:
```
model = aiplatform.Model.upload(
display_name="my-model",
serving_container_image_uri=os.getenv("CONTAINER_URI"),
serving_container_environment_variables={
"AIP... | https://github.com/huggingface/Google-Cloud-Containers/issues/73 | closed | [
"tei",
"question"
] | 2024-08-27T12:14:10Z | 2024-09-16T07:07:11Z | null | rm-jeremyduplessis |
huggingface/chat-ui | 1,436 | MODELS=`[ variable problem when I docker run | Hello,
I want to use Ollama to use Mistral model and I followed the documentation below : https://huggingface.co/docs/chat-ui/configuration/models/providers/ollama
`deploy.sh` :
```sh
#!/bin/bash
sudo docker compose down
sudo docker rm -f mongodb && sudo docker rm -f chat-ui
# nginx and ollama
sudo d... | https://github.com/huggingface/chat-ui/issues/1436 | closed | [
"support"
] | 2024-08-26T14:00:26Z | 2024-08-27T11:04:39Z | 5 | avirgos |
huggingface/diffusers | 9,276 | How can I manually update some of their checkpoints of UNet2/3DConditionModel objects? | ### Discussed in https://github.com/huggingface/diffusers/discussions/9273
<div type='discussions-op-text'>
<sup>Originally posted by **justin4ai** August 26, 2024</sup>
Hello, I'm quite new to diffusers package and trying to implement fine-tuning code that uses the saved checkpoints initialized with ```UNet2/3D... | https://github.com/huggingface/diffusers/issues/9276 | open | [
"stale"
] | 2024-08-26T07:49:23Z | 2024-09-25T15:03:01Z | 1 | justin4ai |
huggingface/transformers | 33,115 | How to get the score of each token when using pipeline | pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1,
output_scores=True
)
The model I use is Qwen2-7B-Instruct. When I try to output the score of each... | https://github.com/huggingface/transformers/issues/33115 | closed | [
"Usage"
] | 2024-08-26T07:00:54Z | 2025-03-06T08:23:58Z | null | xin0623 |
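A sketch for the question above: skip the pipeline and call `generate` directly with `output_scores=True`, which returns one logits tensor per generated step (the pipeline does not expose these in its return value).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.7,
    return_dict_in_generate=True,
    output_scores=True,
)
# out.scores holds one [batch, vocab] logits tensor per generated token.
new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
for token_id, step_scores in zip(new_tokens, out.scores):
    prob = torch.softmax(step_scores[0], dim=-1)[token_id]
    print(tokenizer.decode(token_id), float(prob))
```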
huggingface/diffusers | 9,271 | The different quality between ComfyUI and Diffusers ? | ### Discussed in https://github.com/huggingface/diffusers/discussions/9265
<div type='discussions-op-text'>
<sup>Originally posted by **vuongminh1907** August 25, 2024</sup>
I had a problem using InstantID (https://github.com/instantX-research/InstantID), which uses Diffusers as its base. Additionally, I tried C... | https://github.com/huggingface/diffusers/issues/9271 | closed | [
"stale"
] | 2024-08-26T02:53:23Z | 2024-10-15T18:10:42Z | 3 | vuongminh1907 |
huggingface/diffusers | 9,264 | Could you make an inpainting model for flux? | ### Model/Pipeline/Scheduler description
The [stable-diffusion-xl-1.0-inpainting-0.1](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1) model helps a lot. Could you make a similar inpainting model for flux?
https://huggingface.co/black-forest-labs/FLUX.1-dev
### Open source status
- [ ] The... | https://github.com/huggingface/diffusers/issues/9264 | closed | [] | 2024-08-24T17:32:32Z | 2024-08-24T17:37:59Z | 2 | snowbedding |
huggingface/transformers | 33,106 | How to fine-tune TrOCR on a specific language: guide | ### Model description
hello, I just looked through the issues and elsewhere, but none of them talked about how to fine-tune TrOCR on a specific language - e.g., how to pick the encoder, decoder, model, etc.
Can you, @NielsRogge, write simple instructions/a guide on this topic?
### Open source status
- [ ] The model imple... | https://github.com/huggingface/transformers/issues/33106 | closed | [] | 2024-08-24T14:33:02Z | 2025-06-15T08:07:10Z | null | MohamedLahmeri01 |
huggingface/datasets | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is th... | https://github.com/huggingface/datasets/issues/7123 | open | [
"enhancement"
] | 2024-08-23T22:56:01Z | 2024-10-17T09:13:47Z | 3 | egrace479 |
huggingface/diffusers | 9,258 | Kohya SS FLUX LoRA training is way faster on Linux than Windows any ideas to debug? Same settings, libraries and GPU | ### Describe the bug
I am using Kohya SS to train FLUX LoRA
On Linux RTX 3090 gets like 5.5 second / it - batch size 1 and 1024x1024 px resolution
On Windows RTX 3090 TI gets 7.7 second / it - has the most powerful CPU 13900 K
This speed discrepancy between Windows and Linux is huge for some reason
Torch... | https://github.com/huggingface/diffusers/issues/9258 | closed | [
"bug"
] | 2024-08-23T11:42:53Z | 2024-08-23T11:55:18Z | 1 | FurkanGozukara |
huggingface/datasets | 7,122 | [interleave_dataset] sample batches from a single source at a time | ### Feature request
interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar man... | https://github.com/huggingface/datasets/issues/7122 | open | [
"enhancement"
] | 2024-08-23T07:21:15Z | 2024-08-23T07:21:15Z | 0 | memray |
huggingface/text-generation-inference | 2,452 | How to get the token probability by curl request? | ### Feature request
curl -v -X POST http://.....srv/generate -H "Content-Type: application/json" -d '{"inputs": "xxxxx:","parameters": {"max_new_tokens": 256}}'
Using this curl request, I get output like
{"generated_text": xxxx}
How do I get the generated-text probability from the LLM in the TGI service?
### Motivation
no
... | https://github.com/huggingface/text-generation-inference/issues/2452 | closed | [] | 2024-08-23T03:01:17Z | 2024-08-27T01:32:44Z | null | TWSFar |
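For the TGI question above, a sketch: setting `"details": true` in the request parameters makes `/generate` return per-token logprobs alongside the text. The endpoint URL is a placeholder.

```python
import math
import requests

resp = requests.post(
    "http://localhost:8080/generate",  # hypothetical TGI endpoint
    json={"inputs": "xxxxx:", "parameters": {"max_new_tokens": 16, "details": True}},
).json()
# Each entry carries the token text and its log-probability.
for token in resp["details"]["tokens"]:
    print(token["text"], token["logprob"], math.exp(token["logprob"]))
```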
huggingface/speech-to-speech | 37 | [Feature request] How about adding an optional speech to viseme model at the end of our chain? | Hi there,
Thank you so much for your work on this project. It's truly amazing, and I’m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.
To support ... | https://github.com/huggingface/speech-to-speech/issues/37 | open | [] | 2024-08-22T21:32:47Z | 2024-09-09T17:16:45Z | null | fabiocat93 |
huggingface/huggingface_hub | 2,480 | How to use the HF Nvidia NIM API with the HF inference client? | ### Describe the bug
We recently introduced the [Nvidia NIM API](https://huggingface.co/blog/inference-dgx-cloud) for selected models. The recommended use is via the OAI client like this (with a specific fine-grained token for an enterprise org):
```py
from openai import OpenAI
client = OpenAI(
base_url... | https://github.com/huggingface/huggingface_hub/issues/2480 | closed | [
"bug"
] | 2024-08-22T12:32:16Z | 2024-08-26T12:45:55Z | null | MoritzLaurer |
huggingface/transformers.js | 896 | How to use this model: Xenova/bge-reranker-base | ### Question
I see that it supports transformers.js, but I can't find the instructions for use. Please help me with using it. | https://github.com/huggingface/transformers.js/issues/896 | closed | [
"question"
] | 2024-08-22T07:33:42Z | 2024-08-29T00:12:52Z | null | gy9527 |
huggingface/sentence-transformers | 2,900 | how to keep `encode_multi_process` output on the GPU | I saw this [example](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic-search/semantic_search.py) where we can do the following:
`query_embedding = embedder.encode(query, convert_to_tensor=True)`
`hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)`
I r... | https://github.com/huggingface/sentence-transformers/issues/2900 | open | [] | 2024-08-21T21:05:35Z | 2024-08-21T21:07:39Z | null | anshuchen |
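A sketch for the question above: `encode_multi_process` gathers worker results as a CPU numpy array, so convert it back to a CUDA tensor once before searching; the corpus and query here are made up.

```python
import torch
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = ["A man is eating food.", "A monkey is playing drums.", "Cheetahs run fast."]

pool = embedder.start_multi_process_pool()
corpus_embeddings = embedder.encode_multi_process(corpus, pool)  # CPU numpy array
embedder.stop_multi_process_pool(pool)

# Move the embeddings to the GPU once, then keep the search there.
corpus_embeddings = torch.from_numpy(corpus_embeddings).to("cuda")
query_embedding = embedder.encode("What do cheetahs do?", convert_to_tensor=True).to("cuda")
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)
print(hits)
```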
huggingface/parler-tts | 116 | How to use the Italian language? | Is it possible to use an Italian-style speaker? I've tried many prompts, but all of them come out in an English style | https://github.com/huggingface/parler-tts/issues/116 | open | [] | 2024-08-21T15:24:57Z | 2025-06-18T13:20:22Z | null | piperino11 |
huggingface/chat-ui | 1,423 | Generated answers with Llama 3 include <|start_header_id|>assistant<|end_header_id|> | ## Bug description
I have set up a local endpoint serving Llama 3. All the answers I get from it start with `<|start_header_id|>assistant<|end_header_id|>`.
## Steps to reproduce
Set up Llama 3 in a local endpoint. In my `.env.local`, it is defined as the following:
```
MODELS=`[
{
"name": "lla... | https://github.com/huggingface/chat-ui/issues/1423 | closed | [
"support"
] | 2024-08-21T11:56:47Z | 2024-08-26T14:31:53Z | 5 | erickrf |
huggingface/trl | 1,955 | How to fine-tune LLaVA using PPO | Does LLaVA support training with PPO?
If not, what modifications do I need to make to enable this support? | https://github.com/huggingface/trl/issues/1955 | open | [
"✨ enhancement",
"👁️ VLM"
] | 2024-08-21T07:34:30Z | 2024-08-26T11:13:46Z | null | Yufang-Liu |
huggingface/diffusers | 9,235 | Is there any way to get diffusers-v0.27.0.dev0? | Is there any way to get diffusers-v0.27.0.dev0? I want to compare the difference between diffusers-v0.27.0.dev0 and branches that develop on it in another project, but I didn't find it on the releases or tags page. | https://github.com/huggingface/diffusers/issues/9235 | closed | [] | 2024-08-21T03:42:11Z | 2024-08-21T05:10:26Z | 2 | D222097 |
huggingface/llm.nvim | 108 | How to use proxy env var | I am unable to communicate with any http endpoints because I am behind a corporate proxy that uses self-signed certificates. Typically we use the http_proxy and https_proxy environment variables for this purpose, but I can't see any obvious configurations that I can add to my lua config to make this work.
I have tri... | https://github.com/huggingface/llm.nvim/issues/108 | open | [] | 2024-08-20T18:52:54Z | 2024-08-20T18:53:36Z | null | SethARhodes |
huggingface/huggingface_hub | 2,468 | How can I modify this repo files downloader jupyter notebook script to improve downloading speed? Perhaps multiple downloads at the same time? | This below code works but it is just slow
How can i speed up? Machine has much bigger speed and i really need to download lots of AI models to test
Thank you
```
import os
import requests
import hashlib
from huggingface_hub import list_repo_files, hf_hub_url, hf_hub_download
from huggingface_hub.utils ... | https://github.com/huggingface/huggingface_hub/issues/2468 | closed | [] | 2024-08-20T15:13:13Z | 2024-08-27T16:22:14Z | null | FurkanGozukara |
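For the download question above, a sketch: `snapshot_download` already parallelizes per-file downloads via `max_workers` and resumes interrupted files, which usually beats a hand-rolled requests loop. The repo id is a placeholder.

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",  # hypothetical repo to fetch
    max_workers=8,                             # number of concurrent file downloads
)
print(local_dir)
```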
huggingface/datasets | 7,116 | datasets cannot handle nested json if features is given. | ### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value... | https://github.com/huggingface/datasets/issues/7116 | closed | [] | 2024-08-20T12:27:49Z | 2024-09-03T10:18:23Z | 3 | ljw20180420 |
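For the nested-JSON issue above, a sketch of declaring the nested column explicitly: in `datasets`, a list of dicts is written as a list-wrapped feature dict. Behavior on the old version the issue was filed against may differ.

```python
import datasets

features = datasets.Features({
    "ref1": datasets.Value("string"),
    "ref2": datasets.Value("string"),
    # "cuts" is a list of {"cut1": int, "cut2": int} records.
    "cuts": [{"cut1": datasets.Value("int64"), "cut2": datasets.Value("int64")}],
})
ds = datasets.load_dataset("json", data_files="./temp.json", features=features)
print(ds["train"][0])
```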
huggingface/datasets | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgr... | https://github.com/huggingface/datasets/issues/7113 | closed | [] | 2024-08-20T08:26:40Z | 2024-08-26T04:24:11Z | 1 | memray |
huggingface/diffusers | 9,216 | I made a pipeline that lets you use any number of models at once | ### Model/Pipeline/Scheduler description
Here's how to do it:
from rubberDiffusers import StableDiffusionRubberPipeline
pipe=StableDiffusionRubberPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32,local_files_only=True,safety_checker=None, requires_safety_checker=False,
)
... | https://github.com/huggingface/diffusers/issues/9216 | open | [
"stale"
] | 2024-08-19T11:46:08Z | 2024-09-21T15:03:31Z | 3 | alexblattner |
huggingface/transformers | 32,873 | How to use examples/pytorch/contrastive-image-text to run inference | ### Feature request
I have reviewed the training code for CLIP and successfully executed it. Now, I want to use the obtained model for inference testing.
### Motivation
I would like to test the performance of the model I have trained.
### Your contribution
I hope I can get an example script for inference testing... | https://github.com/huggingface/transformers/issues/32873 | open | [
"Feature request"
] | 2024-08-19T05:54:54Z | 2024-08-19T08:33:50Z | null | rendaoyuan |
huggingface/chat-ui | 1,415 | Bad request: Task not found for this model | Hi all,
I am facing the following issue when using HuggingFaceEndpoint for my custom finetuned model in my repository "Nithish-2001/RAG-29520hd0-1-chat-finetune" which is public with gradio.
llm_name: Nithish-2001/RAG-29520hd0-1-chat-finetune
Traceback (most recent call last):
File "/usr/local/lib/python3.10/... | https://github.com/huggingface/chat-ui/issues/1415 | open | [
"support"
] | 2024-08-18T09:33:10Z | 2024-08-25T22:38:00Z | 1 | NITHISH-Projects |
huggingface/sentence-transformers | 2,893 | How to finetune sentence-transformers with unsupervised methods? | How can I finetune sentence-transformers with unsupervised methods, for semantic search? | https://github.com/huggingface/sentence-transformers/issues/2893 | closed | [] | 2024-08-17T02:32:09Z | 2024-08-18T02:51:29Z | null | keyuchen21 |
huggingface/diffusers | 9,205 | Can we pass output_attentions=True to DiT model such as pixart to get attention output? | Can we pass output_attentions=True to DiT model such as pixart to get attention output? Like using output_attentions=True in transformer? | https://github.com/huggingface/diffusers/issues/9205 | open | [
"stale"
] | 2024-08-16T17:26:14Z | 2024-09-16T15:02:42Z | 1 | foreverpiano |
huggingface/datatrove | 266 | How to look into the processed data? | Hi,
After running `tokenize_from_hf_to_s3.py`, I would like to inspect the resulting data. But I find that the current data is in a binary file (`.ds`). Is there a way for me to look into the data?
Thanks! | https://github.com/huggingface/datatrove/issues/266 | open | [] | 2024-08-16T16:54:45Z | 2024-08-29T15:26:35Z | null | shizhediao |
huggingface/trl | 1,934 | How to Save the PPOTrainer? | The previous issue for this question https://github.com/huggingface/trl/issues/1643#issue-2294886330 is closed but remained unanswered. If I do `ppo_trainer.save_pretrained('path/to/a/folder')` and then `ppo_trainer.from_pretrained('path/to/that/folder')`, I get this error:
ValueError: tokenizer must be a PreTrained... | https://github.com/huggingface/trl/issues/1934 | closed | [] | 2024-08-16T09:41:39Z | 2024-10-07T14:57:51Z | null | ThisGuyIsNotAJumpingBear |
huggingface/parler-tts | 109 | How many epochs of training did you do? What is the accuracy? | How many epochs of training did you do? What is the accuracy? | https://github.com/huggingface/parler-tts/issues/109 | open | [] | 2024-08-16T09:35:31Z | 2024-08-16T09:35:31Z | null | xuezhongfei2008 |
huggingface/diffusers | 9,195 | Problem with Flux Schnell bfloat16 multiGPU | ### Describe the bug
Hello! I set device_map='balanced' and get images generated in 2.5 minutes (expected in 12-20 seconds), while in pipe.hf_device_map it shows that the devices are distributed like this:
```
{
"transformer": "cuda:0",
"text_encoder_2": "cuda:2",
"text_encoder": "cuda:0",
"vae": "cuda:1"
... | https://github.com/huggingface/diffusers/issues/9195 | closed | [
"bug"
] | 2024-08-16T06:30:54Z | 2025-12-05T06:38:14Z | 26 | OlegRuban-ai |
huggingface/diffusers | 9,184 | What is the correct way to apply the dictionary with the control strengths (called “scales”) but with blocks? | ### Describe the bug
I have managed to apply the basic dictionary. as the documentation mentions
```
adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} }
pipe.set_adapters("Lora1", adapter_weight_scales)
```
and it already works for N number of LORAS that I want to load, for example
```
ada... | https://github.com/huggingface/diffusers/issues/9184 | closed | [
"bug"
] | 2024-08-15T06:05:42Z | 2024-08-17T00:54:28Z | null | Eduardishion |
huggingface/diffusers | 9,180 | Pipeline has no attribute '_execution_device' | ### Describe the bug
Hello, I implemented my own custom pipeline referring StableDiffusionPipeline (RepDiffusionPipeline), but there are some issues
I called "accelerator.prepare" properly, and mapped the models on device (with "to.(accelerator.device)")
But when I call pipeline and the '__call__' function is call... | https://github.com/huggingface/diffusers/issues/9180 | open | [
"bug",
"stale"
] | 2024-08-14T14:43:15Z | 2025-11-18T13:22:52Z | 33 | choidaedae |
huggingface/diffusers | 9,174 | [Quantization] bring quantization to diffusers core | Now that we have a working PoC (#9165) of NF4 quantization through `bitsandbytes` and also [this](https://huggingface.co/blog/quanto-diffusers) through `optimum.quanto`, it's time to bring in quantization more formally in `diffusers` 🎸
In this issue, I want to devise a rough plan to attack the integration. We are g... | https://github.com/huggingface/diffusers/issues/9174 | closed | [
"quantization"
] | 2024-08-14T08:05:34Z | 2024-10-21T04:42:46Z | 15 | sayakpaul |
huggingface/diffusers | 9,172 | Why rebuild a VAE in the inference stage? | Thanks for your effort on the diffusion models.
I want to know why we need to rebuild a VAE in the inference stage. I think it will introduce extra GPU cost.
https://github.com/huggingface/diffusers/blob/a85b34e7fdc0a5fceb11aa0fa6199bd9afaca396/examples/text_to_image/train_text_to_image_sdxl.py#L1217C16-L1223C24
| https://github.com/huggingface/diffusers/issues/9172 | open | [
"stale"
] | 2024-08-14T05:52:38Z | 2024-11-14T15:03:55Z | 2 | WilliammmZ |
huggingface/candle | 2,413 | How to load multiple safetensors with json format | For such a task:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/transformer
how should safetensors be loaded?
| https://github.com/huggingface/candle/issues/2413 | open | [] | 2024-08-14T04:50:37Z | 2025-06-11T19:05:05Z | null | oovm |
huggingface/diffusers | 9,170 | Do SDXL and ControlNet need more than 36 GB of GPU memory? | ### Describe the bug
https://github.com/huggingface/diffusers/blob/15eb77bc4cf2ccb40781cb630b9a734b43cffcb8/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
line73---line113
I run the demo with a 24 GB GPU, then OOM every time.
So must I run SDXL with 48 GB?
@yiyixuxu @sayakpaul @DN6 tks
### Reprod... | https://github.com/huggingface/diffusers/issues/9170 | closed | [
"bug"
] | 2024-08-14T01:46:35Z | 2024-11-13T08:49:22Z | 3 | henbucuoshanghai |
huggingface/trl | 1,927 | how to use kto_pair loss in the latest version ? | I can see that kto_pair losstype is no longer available in the latest version of dpo trainer. You suggest to use ktotrainer instead.
But kto_pair loss worked much better than kto_trainer on my dataset, so how do I continue to use kto_pair if I'm using the latest version of the trl library?
thanks a lot! | https://github.com/huggingface/trl/issues/1927 | closed | [
"🏋 DPO",
"🏋 KTO"
] | 2024-08-13T15:59:25Z | 2024-10-20T16:56:21Z | null | vincezengqiang |
huggingface/autotrain-advanced | 728 | [BUG] Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead. How to mitigate this? | ### Prerequisites
- [X] I have read the [documentation](https://hf.co/docs/autotrain).
- [X] I have checked other issues for similar problems.
### Backend
Local
### Interface Used
CLI
### CLI Command
```
!autotrain --config path-to.yml
```
```
task: llm-sft
base_model: teknium/OpenHermes-2.... | https://github.com/huggingface/autotrain-advanced/issues/728 | closed | [
"bug"
] | 2024-08-13T05:00:10Z | 2024-08-13T12:31:19Z | null | jackswl |
huggingface/diffusers | 9,164 | The dog example of train_dreambooth_lora_flux.py does not converge | ### Describe the bug
```
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
-... | https://github.com/huggingface/diffusers/issues/9164 | closed | [
"bug"
] | 2024-08-13T03:08:10Z | 2024-08-13T10:23:23Z | 7 | chongxian |
huggingface/text-embeddings-inference | 380 | How do I deploy to Vertex? | How do I deploy to Vertex? I think I saw a feature=google setting in the code which supports compatibility with Vertex. Please guide. | https://github.com/huggingface/text-embeddings-inference/issues/380 | closed | [] | 2024-08-12T17:15:30Z | 2024-10-17T10:19:02Z | null | pulkitmehtaworkmetacube |
huggingface/trl | 1,916 | How to Add PEFT to PPO Trainer or PPO Config | I am trying to realize RLHF through PPO.
May I ask how I can use PEFT in RLHF/PPO? I can see this parameter in DPOTrainer. However, I cannot see it in PPOTrainer.
| https://github.com/huggingface/trl/issues/1916 | closed | [
"✨ enhancement",
"🧒 good second issue",
"🏋 PPO"
] | 2024-08-12T01:02:07Z | 2024-11-18T10:54:10Z | null | ZhichaoWang970201 |
huggingface/trl | 1,915 | How to DPO LLaVA? | Thank you for the great work!
I run DPO on LLaVA using the raw `/trl/examples/scripts/dpo_visual.py` code with the command
`CUDA_VISIBLE_DEVICES=0 accelerate launch examples/scripts/dpo_visual.py --dataset_name HuggingFaceH4/rlaif-v_formatted --model_name_or_path llava-hf/llava-1.5-7b-hf --per_device_train_batch_... | https://github.com/huggingface/trl/issues/1915 | closed | [] | 2024-08-11T00:57:38Z | 2024-08-11T01:23:16Z | null | ooooohira |
huggingface/transformers.js | 887 | VSCode Interpolation | ### Question
I'm finding that VSCode is extremely slow when reading type definitions from the `@xenova/transformers` path. Is there anything I might be doing wrong? I've noticed that it uses JS comments to define the types instead of a type definition file, is the issue I am having a known issue with using that type o... | https://github.com/huggingface/transformers.js/issues/887 | closed | [
"question"
] | 2024-08-11T00:08:30Z | 2024-08-25T01:55:36Z | null | lukemovement |
huggingface/diffusers | 9,140 | Diffusers model not working as good as repo ckpt model | Hi,
When I try to run the models stable diffusion v1-5 or Instructpix2pix through the diffusers pipeline and use .from_pretrained() it downloads the models from hugging face and I'm using the code to run inference given in hugging face, the results are not good at all in the sense that there is still noise in the gener... | https://github.com/huggingface/diffusers/issues/9140 | closed | [
"stale"
] | 2024-08-09T09:34:30Z | 2024-12-14T12:13:15Z | 6 | kunalkathare |
huggingface/diffusers | 9,136 | IP adapter output on some resolutions suffers in quality? | ### Describe the bug
I am running IP adapter for 768x1344 which is one of the sdxl listed resolutions. I find that the output quality is much less than say regular 768x768 generations. I've attached sample images and code below. In this experiment 1080x768 seemed to get best output, but its not one of the supported re... | https://github.com/huggingface/diffusers/issues/9136 | open | [
"bug",
"stale"
] | 2024-08-09T06:36:39Z | 2024-09-14T15:03:17Z | 2 | darshats |
huggingface/transformers.js | 885 | TimeSformer on the web | ### Question
Glad to see this repo! If I want to use TimeSformer on the web, do you have any suggestions or a guide for it? Can I learn from this repo, or is it a totally different thing? Thanks in advance!
"question"
] | 2024-08-08T17:59:13Z | 2024-08-11T09:02:47Z | null | tomhsiao1260 |
huggingface/cookbook | 163 | Incorrect markdown table rendering in Colab in "How to use Inference Endpoints to Embed Documents" | There is an issue with the rendering of the Inference Endpoints table in Colab in [How to use Inference Endpoints to Embed Documents](https://huggingface.co/learn/cookbook/automatic_embedding_tei_inference_endpoints). Although the table correctly renders on HF cookbook webpage:
<img width="610" alt="image" src="http... | https://github.com/huggingface/cookbook/issues/163 | closed | [] | 2024-08-08T11:16:40Z | 2024-08-08T16:22:48Z | null | sergiopaniego |
huggingface/alignment-handbook | 192 | Constant training loss in the model adapter card | Hello,
I could fine-tune a model using a small dataset and I see that the validation loss decreases, while the training loss remains the same in the model card.
I don't think this is normal, even though the new task I try to teach the model is similar to what it already does, I think it should be able to learn fr... | https://github.com/huggingface/alignment-handbook/issues/192 | closed | [] | 2024-08-08T09:35:40Z | 2024-08-08T13:29:00Z | 1 | Michelet-Gaetan |
huggingface/optimum | 1,985 | Correct example to use TensorRT? | ### System Info
```shell
optimum: 1.20.0
os: ubuntu 20.04 with RTX 2080TI
python: 3.10.14
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `exam... | https://github.com/huggingface/optimum/issues/1985 | open | [
"bug"
] | 2024-08-08T08:46:14Z | 2024-08-29T11:24:35Z | 2 | sherlcok314159 |
huggingface/diffusers | 9,127 | flux.1-dev device_map didn't work | I tried to use device_map to spread the model across multiple GPUs, but it didn't work. How can I use all my GPUs?
| https://github.com/huggingface/diffusers/issues/9127 | closed | [] | 2024-08-08T08:30:33Z | 2024-11-26T02:11:03Z | 33 | hznnnnnn |
huggingface/diffusers | 9,120 | [ar] Translating docs to Arabic (العربية) | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Arabic-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/m... | https://github.com/huggingface/diffusers/issues/9120 | closed | [] | 2024-08-07T21:04:54Z | 2024-10-29T08:14:24Z | 2 | AhmedAlmaghz |
huggingface/chat-ui | 1,394 | I need to reload to get the response | 
I am using Llama 3.1 70B to chat, but it is very slow to respond and I need to reload to get a response. Is it because the model is overloaded? | https://github.com/huggingface/chat-ui/issues/1394 | closed | [
"support"
] | 2024-08-07T09:31:03Z | 2024-08-15T06:56:59Z | 2 | renaldy-therry |
huggingface/chat-ui | 1,393 | Generation Error with Ollama - Inconsistent Output Generation | Hi,
I'm experiencing issues while running GEMMA2 on Ollama. Specifically, I'm encountering the following problems:
Error on Message Generation:
Whenever a new chat is created, every message results in the error:
"Error: Generation failed" in the back end.
No output is generated on the front end.
... | https://github.com/huggingface/chat-ui/issues/1393 | open | [
"support"
] | 2024-08-07T09:02:19Z | 2024-08-07T11:05:19Z | 1 | juanjuanignacio |
huggingface/chat-ui | 1,392 | Cannot send the message and get response in hugging chat | I cannot send message and get a response from llm, and i cannot click "activate" to change model in huggingchat (https://huggingface.co/chat/) | https://github.com/huggingface/chat-ui/issues/1392 | closed | [
"support",
"huggingchat"
] | 2024-08-07T08:37:01Z | 2024-08-07T09:06:59Z | 4 | renaldy-therry |
huggingface/text-embeddings-inference | 371 | how to support a SequenceClassification model | ### Feature request
I have a model that can be run with transformers.AutoModelForSequenceClassification.from_pretrained; how can I serve it in TEI?
### Motivation
to support more models
### Your contribution
YES | https://github.com/huggingface/text-embeddings-inference/issues/371 | closed | [] | 2024-08-06T10:45:00Z | 2024-10-17T10:24:09Z | null | homily707 |
huggingface/chat-ui | 1,387 | CopyToClipBoardBtn in ChatMessage.svelte has a bug? | https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/chat/ChatMessage.svelte#L378-L384
When compared to other components, classNames is the only difference here.
When rendered, the icon appears faint in the browser.
Is there a reason for this, or is it a bug?
h... | https://github.com/huggingface/chat-ui/issues/1387 | closed | [
"bug",
"good first issue",
"front"
] | 2024-08-06T04:59:45Z | 2024-08-12T09:35:21Z | 5 | calycekr |
huggingface/diffusers | 9,092 | FluxPipeline reports model_index.json not found | ### Describe the bug
I use the FluxPipeline and it reports that there is no file model_index.json.
I read another issue and set `revision="refs/pr/3"`, but it doesn't work. How can I solve this problem, and how do I use T5-XXL as the text encoder? Thanks for your help
### Reproduction
```
import torch
from diffusers impor... | https://github.com/huggingface/diffusers/issues/9092 | closed | [
"bug"
] | 2024-08-06T01:48:40Z | 2024-08-06T02:25:03Z | 3 | chongxian |
huggingface/trl | 1,900 | How to speed up PPOTrainer .generate()? | During PPO, I'm finding that `.generate()` is extremely slow. The following call takes ~3 and a half minutes for batch size of 64 with a 1.4B parameter policy LM:
```
ppo_trainer.generate(
input_token_ids_list,
pad_token_id=policy_model_tokenizer.eos_token_id,
retu... | https://github.com/huggingface/trl/issues/1900 | closed | [] | 2024-08-05T18:35:31Z | 2024-10-01T06:35:50Z | null | RylanSchaeffer |
huggingface/chat-ui | 1,386 | System role problem running Gemma 2 on vLLM | Hello,
Running chat-ui and trying some models, I had no problem with Phi-3 and Llama, but when I run Gemma 2 in vLLM I'm not able to make any good API request.
in env.local:
{
"name": "google/gemma-2-2b-it",
"id": "google/gemma-2-2b-it",
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>us... | https://github.com/huggingface/chat-ui/issues/1386 | closed | [
"support"
] | 2024-08-05T13:22:10Z | 2024-11-07T21:39:47Z | 5 | juanjuanignacio |
huggingface/optimum | 1,981 | [GPTQQuantizer] How to use multi-GPU for GPTQQuantizer? | ### System Info
```shell
hello:
I encountered an out-of-memory error while attempting to quantize a model using GPTQQuantizer. The error seems to be related to the large size of the model weights. Below is the quantization code I used:
from optimum.gptq import GPTQQuantizer
quantizer = GPTQQuantizer(
bi... | https://github.com/huggingface/optimum/issues/1981 | closed | [
"bug"
] | 2024-08-05T07:58:11Z | 2024-08-08T02:19:18Z | null | RunTian1 |
huggingface/datasets | 7,087 | Unable to create dataset card for Lushootseed language | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering la... | https://github.com/huggingface/datasets/issues/7087 | closed | [
"enhancement"
] | 2024-08-04T14:27:04Z | 2024-08-06T06:59:23Z | 2 | vaishnavsudarshan |
huggingface/diffusers | 9,076 | Add a better version of 'callback_on_step_end' for FluxPipeline | **Is your feature request related to a problem? Please describe.**
There is a huge delay before the inference starts and after the 4th step completes, and there is no callback for either, so it feels like it is stuck. I just want a more responsive version.
```
prompt = "A cat holding a sign that says hello world"
ima... | https://github.com/huggingface/diffusers/issues/9076 | closed | [
"stale"
] | 2024-08-04T10:34:04Z | 2024-11-23T00:24:14Z | 3 | nayan-dhabarde |
huggingface/diffusers | 9,069 | TypeError: expected np.ndarray (got numpy.ndarray) | ### Describe the bug
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly ... | https://github.com/huggingface/diffusers/issues/9069 | closed | [
"bug"
] | 2024-08-03T12:45:03Z | 2024-10-27T06:43:32Z | 11 | xiangyumou |
huggingface/evaluate | 611 | How to customize my own evaluator and metrics? | I'm facing a task on VQA, where I need to compute [VQA](https://visualqa.org/evaluation.html) accuracy](https://visualqa.org/evaluation.html) as follows:
```math
\text{Acc}(ans) = \min{ \left\{ \frac{\text{\# humans that said } ans }{3}, 1 \right\} }
```
I have following questions:
1. Do I need to customize my o... | https://github.com/huggingface/evaluate/issues/611 | closed | [] | 2024-08-02T08:37:47Z | 2024-08-15T02:26:30Z | null | Kamichanw |
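The quoted formula is straightforward to implement directly; a sketch as a plain function (rather than a full `evaluate.Metric` subclass, which the evaluate docs cover separately):

```python
def vqa_accuracy(answer: str, human_answers: list[str]) -> float:
    # min(#humans that said this answer / 3, 1), per the formula above
    matches = sum(1 for a in human_answers if a == answer)
    return min(matches / 3.0, 1.0)

print(vqa_accuracy("cat", ["cat", "cat", "dog", "cat"]))  # 1.0
print(vqa_accuracy("dog", ["cat", "cat", "dog", "cat"]))  # ~0.33
```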
huggingface/diffusers | 9,055 | ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders' | ### Describe the bug
I get this error in diffusers versions 0.25 through 0.29; how can I solve it?
### Reproduction
import ast
import gc
import inspect
import math
import warnings
from collections.abc import Iterable
from typing import Any, Callable, Dict, List, Optional, Union
import torch
import torch.nn.... | https://github.com/huggingface/diffusers/issues/9055 | closed | [
"bug"
] | 2024-08-02T07:58:16Z | 2024-08-02T09:32:12Z | 2 | MehmetcanTozlu |
huggingface/optimum | 1,980 | Issue converting moss-moon-003-sft-int4 model to ONNX format | ### System Info
```shell
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
optimum-cli export onnx --task text-generation -m"/HDD/cz/tools/moss/" --trust-remote-code "HDD/cz/moss_onnx/"
Unfortunately, I'm facing the follow... | https://github.com/huggingface/optimum/issues/1980 | open | [
"bug",
"onnx"
] | 2024-08-02T01:18:46Z | 2024-10-08T15:51:12Z | 0 | ZhiChengWHU |
huggingface/transformers | 32,376 | AutoModel how to modify config? | ```
config = AutoConfig.from_pretrained(
**self.params, trust_remote_code=True
)
config.vision_config.use_flash_attn = False
print(config.vision_config)
self.model = AutoModel.from_pretrained(
**self.params, t... | https://github.com/huggingface/transformers/issues/32376 | closed | [] | 2024-08-01T12:40:44Z | 2024-08-02T02:30:22Z | null | lucasjinreal |
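The missing step in the snippet above is passing the edited config back into `from_pretrained`; a sketch (the model id is a placeholder):

```python
from transformers import AutoConfig, AutoModel

model_id = "some-org/some-vision-model"  # hypothetical model id
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.vision_config.use_flash_attn = False  # override before loading weights
model = AutoModel.from_pretrained(model_id, config=config, trust_remote_code=True)
```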
huggingface/diffusers | 9,039 | how to load_lora_weights in FlaxStableDiffusionPipeline | ### Describe the bug
How do I load a LoRA in FlaxStableDiffusionPipeline? There is no load_lora_weights in FlaxStableDiffusionPipeline.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9039 | closed | [
"bug",
"stale"
] | 2024-08-01T11:23:52Z | 2024-10-15T03:23:54Z | null | ghost |
huggingface/diffusers | 9,038 | how to use prompt weight in FlaxStableDiffusionPipeline | ### Describe the bug
I can see there is prompt_embeds in StableDiffusionPipeline to support prompt weighting, but how do I do that in FlaxStableDiffusionPipeline? There is no prompt_embeds in FlaxStableDiffusionPipeline.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_... | https://github.com/huggingface/diffusers/issues/9038 | closed | [
"bug",
"stale"
] | 2024-08-01T10:44:37Z | 2024-10-14T18:25:55Z | null | ghost |
huggingface/diffusers | 9,032 | How to get a minimal working example of FlaxStableDiffusionPipeline in Google Colab with the TPU runtime | ### Describe the bug
I tried the code in Google Colab with the TPU runtime
```
! python3 -m pip install -U diffusers[flax]
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMi... | https://github.com/huggingface/diffusers/issues/9032 | open | [
"bug",
"stale"
] | 2024-08-01T03:58:34Z | 2024-11-04T15:04:13Z | null | ghost |
huggingface/diffusers | 9,031 | How to disable the safety_checker in FlaxStableDiffusionPipeline | ### Describe the bug
```
! python3 -m pip install -U tensorflow-cpu
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMix', safe_serialization=False)
pipeline, params = ... | https://github.com/huggingface/diffusers/issues/9031 | open | [
"bug",
"stale"
] | 2024-08-01T03:48:27Z | 2024-10-13T15:03:54Z | null | ghost |
huggingface/llm.nvim | 106 | How to use the OpenAI API? | I read the code, and it seems to support the real OpenAI API. But when I set it up, something goes wrong.
Can you confirm whether this supports the OpenAI API? I mean the real OpenAI API. | https://github.com/huggingface/llm.nvim/issues/106 | closed | [] | 2024-07-31T23:51:42Z | 2024-10-18T13:49:11Z | null | 4t8dd |
huggingface/diffusers | 9,025 | how to use FlaxStableDiffusionPipeline with from_single_file in kaggle tpu vm | ### Describe the bug
I have a single safetensors file that works with diffusers.StableDiffusionPipeline.from_single_file.
Now I want to use FlaxStableDiffusionPipeline, but there is no .from_single_file member function in FlaxStableDiffusionPipeline.
I need to
```
pipeline = diffusers.StableDiffusionPipeline.from_single_... | https://github.com/huggingface/diffusers/issues/9025 | closed | [
"bug"
] | 2024-07-31T10:44:48Z | 2024-08-01T03:59:51Z | null | ghost |
huggingface/transformers.js | 873 | Absolute speaker diarization? | ### Question
I've just managed to integrate the new speaker diarization feature into my project. Very cool stuff. My goal is to let people record meetings, summarize them, and then also list per-speaker tasks. This seems to be a popular feature.
One thing I'm running into is that I don't feed Whisper a single lon... | https://github.com/huggingface/transformers.js/issues/873 | closed | [
"question"
] | 2024-07-30T15:09:23Z | 2024-08-12T12:12:07Z | null | flatsiedatsie |
huggingface/transformers.js | 872 | Please provide extensive examples of how to use langchain... | Here's an example script I'm using, which I believes leverages the ```recursivecharactertextsplitter``` from Langchain. I'd love to replicate my vector db program to the extent I'm able using javascript within a browser but need more examples/help...
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset... | https://github.com/huggingface/transformers.js/issues/872 | closed | [] | 2024-07-30T02:39:43Z | 2024-08-26T00:47:12Z | null | BBC-Esq |
huggingface/diffusers | 9,009 | UNET slower by a factor of batch_size | ### Describe the bug
I was expecting to get faster inferences by batching images together. I realized that when I batch 6 images together, the UNET is 5 times slower for a pipeline_controlnet_img2img.py...
Is this normal? Am I missing anything? Thanks for your help
### Reproduction
Image dim 1024.... | https://github.com/huggingface/diffusers/issues/9009 | closed | [
"bug"
] | 2024-07-29T21:01:25Z | 2024-07-30T07:37:51Z | 2 | christopher5106 |
huggingface/transformers.js | 869 | PLEASE provide examples of how to use for vector/embeddings using non-"pipeline" syntax. | I'm accustomed (and most people use) non-"pipeline" syntax with ```transformers``` - e.g. ```AutoModelFromCausalLM``` and ```from_pretained``` and so on?
Also, is there a way to use the ```sentence-transformers``` library with ```transformers.js``` in a similar fashion. You'll notice at [this link](https://huggingf... | https://github.com/huggingface/transformers.js/issues/869 | closed | [] | 2024-07-29T11:55:51Z | 2024-07-30T02:37:40Z | null | BBC-Esq |
huggingface/chat-ui | 1,377 | Use refresh tokens for OAuth | Currently we use long-lived sessions that get extended when the user performs an action. In order to better manage sessions, we could switch to an OAuth flow where we have a short lived session with an access token cookie and a refresh token that we can use to refresh the sessions, since HuggingFace now supports refres... | https://github.com/huggingface/chat-ui/issues/1377 | open | [
"enhancement",
"back"
] | 2024-07-29T10:55:11Z | 2024-09-13T20:08:45Z | 4 | nsarrazin |
huggingface/datasets | 7,080 | Generating train split takes a long time | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebD... | https://github.com/huggingface/datasets/issues/7080 | open | [] | 2024-07-29T01:42:43Z | 2024-10-02T15:31:22Z | 2 | alexanderswerdlow |
huggingface/chat-ui | 1,375 | Chat-UI is not following the prompt - producing unknown, completely unrelated text? Hacked? | Oobabooga text-generation-webui engine used for inference (prompts input directly into the oobabooga UI produce normal results, but chat-ui is doing something weird, as below), MongoDB setup
_**Prompt:**_ bake a cake
_**Assistant:**_
```
I'm trying to install Ubuntu on my laptop, but it's not detecting the lang... | https://github.com/huggingface/chat-ui/issues/1375 | open | [
"support"
] | 2024-07-28T00:49:56Z | 2025-01-30T18:45:59Z | 10 | cody151 |
huggingface/chat-ui | 1,374 | Help with .env.local for AWS as an endpoint for Llama 3 on Hugging Face cloud | There seems to be no configuration for .env.local that I can get to work to connect to a Llama 3 inference endpoint hosted by Hugging Face cloud (and I can find no examples).
```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=hf_*******
MODELS=`[
{
"name": "AWS meta-llama-3-8b-pdf",
"chatPromptTem... | https://github.com/huggingface/chat-ui/issues/1374 | open | [
"support"
] | 2024-07-27T23:27:11Z | 2024-07-30T05:28:48Z | 1 | thams |