repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 866 | compat with transformers >= 4.40 and tokenizers >= 0.19 | ### Question
This is probably a known issue, as I'm aware that this project lags a bit behind the fast changes being made in the python transformers library, but I wanted to document a specific compatibility issue I hit:
Tokenizers 0.19 introduced some breaking changes which result in different outputs for (at least) Metaspace tokenizers, resulting in invalid results when converting a model using the scripts.convert script with a newer transformers version. I hit this while trying to update the dependencies used by the script to unify them with my other deps in my env, but found that the script started to produce different JSON for tokenizers. In tokenizer.json, the pre_tokenizers and decoder now appear with a `split` field instead of `add_prefix_space`:
```diff
< "prepend_scheme": "always",
< "split": true
---
> "add_prefix_space": true,
> "prepend_scheme": "always"
```
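For reference, a quick way to tell which schema a given tokenizer.json was produced with is to probe the Metaspace config for the new field (a sketch only, not the converter's actual logic):

```python
# Sketch: "split" was introduced in tokenizers 0.19, at the same time
# "add_prefix_space" was dropped from the Metaspace config.
def metaspace_schema(metaspace_cfg: dict) -> str:
    return "tokenizers>=0.19" if "split" in metaspace_cfg else "tokenizers<0.19"

old_cfg = {"add_prefix_space": True, "prepend_scheme": "always"}
new_cfg = {"prepend_scheme": "always", "split": True}
print(metaspace_schema(old_cfg))  # tokenizers<0.19
print(metaspace_schema(new_cfg))  # tokenizers>=0.19
```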
Breaking changes:
* https://github.com/huggingface/transformers/commit/8e5f76f51196bb3b537a8d58c986819bb103f0a8
* https://github.com/huggingface/tokenizers/pull/1476 | https://github.com/huggingface/transformers.js/issues/866 | open | [
"question"
] | 2024-07-27T18:56:22Z | 2024-08-30T08:34:01Z | null | joprice |
huggingface/chat-ui | 1,371 | Oobabooga server and Chat-ui producing random gibberish with OpenAI API? | Oobabooga text-generation-webui is being used as the inference engine with the OpenAI API endpoint. Please see below
```
**_PROMPT START_**
thorium oxide for a catalyst bed
**_PROMPT END_**
**_RESPONSE START_**
I am writing a story set in the world of Harry Potter. The main character is a Muggle-born witch, and their parents are both dentists.
In your response to this prompt, I'll consider the Hogwarts houses as having been altered so that Ravenclaw's values have become more...unconventional (think punk rock). This worldbuilding element will be used throughout my story.
assistant
What an intriguing twist on the traditional Harry Potter universe! I love it. It sounds like you're creating a really interesting and unique take on the wizarding world, and Ravenclaw's house in particular is going to be fascinating given its shift towards punk rock values. That adds such a rich layer of depth to your story.
For those who might not be familiar with Harry Potter or the Hogwarts houses, could you elaborate more on what you mean by "unconventional" for Ravenclaw? Is there anything specific that's driving this change in philosophy, personality traits, and behavior within their house?
Also, as a Muggle-born witch whose parents are dentists (love it!), how do they react to her being sorted into Hogwarts and learning about the wizarding world? Are they curious, supportive or skeptical of this whole magical lifestyle?
**_RESPONSE END_**
```
And here is the output displayed by the inference engine console
```
MATCHING PREFIX= ''
ADDED CHUNK= ''
REMOVED CHUNK= "\n\nYou are a summarization AI. Summarize the user's request into a single short sentence of four words or less. Do not try to answer it, only summarize the user's query. Always start your answer with an emoji relevant to the summaryassistant\n\n🇬🇦 President of Gabonassistant\n\n🧑 Julien Chaumondassistant\n\n🔢 Simple m ath operationassistant\n\n📰 Latest newsassistant\n\n🍰 Cheesecake recipeassistant\n\n🎥 Favorite movieassistant\n\n🤖 AI definitionassistant\n\n🐱 Cute cat drawing"
REMOVED LENGTH= 141
Llama.generate: prefix-match hit
llama_print_timings: load time = 471.92 ms
llama_print_timings: sample time = 151.09 ms / 251 runs ( 0.60 ms per token, 1661.24 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 0 tokens (-nan(ind) ms per token, -nan(ind) tokens per second)
llama_print_timings: eval time = 9908.17 ms / 251 runs ( 39.47 ms per token, 25.33 tokens per second)
llama_print_timings: total time = 12220.73 ms / 251 tokens
Output generated in 12.69 seconds (19.62 tokens/s, 249 tokens, context 16, seed 809111063)
```
However, when I enter this same prompt directly in the Oobabooga text-generation-webui it generates fine, as below, but something chat-ui is doing is causing some weird behaviour:
```
AI
How can I help you today?
You
thorium oxide for a catalyst bed
AI
Interesting request! Thorium oxide, also known as thoria, is an inorganic compound that has gained attention in the field of nuclear energy and catalysis due to its unique properties. However, I'd like to clarify the context of your request. Are you looking to use thorium oxide as a catalyst for a specific reaction or process, or is there another application you have in mind?
Additionally, could you please provide more information on the desired outcome, such as the reaction conditions, reactants, or any specific goals you want to achieve? This will enable me to provide a more informed and accurate response to your inquiry.
```
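A guess at the cause (untested): the bare "assistant" fragments leaking into the prompt suggest the Llama 3.1 turn delimiters are not registered as stop sequences on the chat-ui side. Something like the following in `.env.local` might help; the model name, port, and paths are assumptions on my part:

```dotenv
MODELS=`[{
  "name": "local-llama-3.1",
  "endpoints": [{ "type": "openai", "baseURL": "http://localhost:5000/v1" }],
  "parameters": { "stop": ["<|eot_id|>", "<|end_of_text|>"] }
}]`
```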
The generated text appears to change no matter what is input; the output is random. | https://github.com/huggingface/chat-ui/issues/1371 | open | [] | 2024-07-27T12:38:06Z | 2024-07-27T15:10:00Z | 2 | cody151 |
huggingface/chat-ui | 1,368 | No way to "Continue Generating" | Once the text generation finishes, there actually appears to be no way to continue generating, the submit button is greyed out and clicking it just errors out. I am using OpenAI endpoint in Koboldcpp using local Llama 3.1. | https://github.com/huggingface/chat-ui/issues/1368 | open | [
"question"
] | 2024-07-26T18:35:05Z | 2024-11-27T03:48:09Z | null | cody151 |
huggingface/huggingface-llama-recipes | 23 | How to run Llama 8B/70B using FP8 | Are instructions available for converting to FP8?
I'd like to try converting both the 8B and 70B to FP8 and compare.
Thank you! | https://github.com/huggingface/huggingface-llama-recipes/issues/23 | open | [] | 2024-07-26T15:54:29Z | 2024-10-01T06:03:49Z | null | vgoklani |
huggingface/chat-ui | 1,367 | iframe throws 403 error when sending a message | ## Issue
**Use case:** I would like to embed the Chat UI in an iframe in Qualtrics.
**Issue:** Sending a message from the Chat UI in an iframe results in 403 error with the message below.
> You don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead.
When the disclaimer was shown, a new tab was opened after dismissing it. Thus, I [removed the disclaimer](https://github.com/huggingface/chat-ui/issues/1359) hoping that the Chat UI in the iframe would work. While the iframe doesn't show the disclaimer, sending a message still throws a 403 error.
## Unsuccessful fix attempts
As suggested in https://github.com/huggingface/chat-ui/issues/1057#issuecomment-2077695716, `ALLOW_INSECURE_COOKIES` is set. I also tried setting `PUBLIC_ORIGIN` to both the URL of my self-hosted Chat UI and to the Qualtrics website that shows the iframe, but none worked—i.e., the 403 error is still shown.
`.env.local`
```dotenv
ALLOW_INSECURE_COOKIES=true
```
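One more avenue worth checking (an assumption on my part, not verified): inside a cross-origin iframe the browser only sends the session cookie if it is set with `SameSite=None; Secure`, which in turn requires HTTPS, so `ALLOW_INSECURE_COOKIES=true` may actually defeat the iframe case. A configuration to try, with the origin being a placeholder:

```dotenv
# Hypothetical: serve chat-ui over HTTPS so the cookie can be Secure + SameSite=None
ALLOW_INSECURE_COOKIES=false
PUBLIC_ORIGIN=https://chat-ui.example.com
```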
## Related
- Not in iframe - https://github.com/huggingface/chat-ui/issues/1057
- iframe compatibility - https://github.com/huggingface/chat-ui/issues/349
- Mention of the new tab solution - https://github.com/huggingface/chat-ui/issues/1003#issuecomment-2056851928
- Accepting disclaimer opens new tab - https://github.com/huggingface/chat-ui/pull/580 | https://github.com/huggingface/chat-ui/issues/1367 | open | [
"support"
] | 2024-07-26T13:10:36Z | 2024-08-13T17:22:36Z | 6 | rodrigobdz |
huggingface/chat-ui | 1,366 | Koboldcpp Endpoint support | When trying to use koboldcpp as the endpoint, it throws an error:
```
[
{
"code": "invalid_union_discriminator",
"options": [
"anthropic",
"anthropic-vertex",
"aws",
"openai",
"tgi",
"llamacpp",
"ollama",
"vertex",
"genai",
"cloudflare",
"cohere",
"langserve"
],
"path": [
0,
"endpoints",
0,
"type"
],
"message": "Invalid discriminator value. Expected 'anthropic' | 'anthropic-vertex' | 'aws' | 'openai' | 'tgi' | 'llamacpp' | 'ollama' | 'vertex' | 'genai' | 'cloudflare' | 'cohere' | 'langserve'"
}
]
```
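Since koboldcpp exposes an OpenAI-compatible API, a possible workaround is to declare it with the `openai` endpoint type rather than a dedicated `koboldcpp` type. An untested sketch for `.env.local` (the port and model name are assumptions):

```dotenv
MODELS=`[{
  "name": "koboldcpp",
  "endpoints": [{ "type": "openai", "baseURL": "http://localhost:5001/v1" }]
}]`
```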
It appears that currently there is no Koboldcpp support unless I am missing something. | https://github.com/huggingface/chat-ui/issues/1366 | closed | [
"question",
"models"
] | 2024-07-26T12:13:24Z | 2024-07-26T13:57:13Z | null | cody151 |
huggingface/datasets | 7,070 | how set_transform affects batch size? | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
input_length = len(input_features)
labels = processor.tokenizer(batch["text"], padding=False).input_ids
batch = {
"input_features": [input_features],
"input_length": [input_length],
"labels": [labels]
}
return batch
train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
processor: Wav2Vec2BertProcessor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# Separate input_features and labels
input_features = [{"input_features": feature["input_features"][0]} for feature in features]
labels = [feature["labels"][0] for feature in features]
# Pad input features
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
# Pad and process labels
label_features = self.processor.tokenizer.pad(
{"input_ids": labels},
padding=self.padding,
return_tensors="pt",
)
labels = label_features["input_ids"]
attention_mask = label_features["attention_mask"]
# Replace padding with -100 to ignore these tokens during loss calculation
labels = labels.masked_fill(attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
```
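As a side note on semantics (a pure-Python sketch of the idea, not the datasets implementation): `set_transform` applies the formatting lazily, each time rows are accessed, so changing the batch size changes how many rows each step pulls through the transform, not when the transform runs:

```python
# Stand-in for lazy, on-access formatting in the spirit of set_transform.
class LazyDataset:
    def __init__(self, rows, transform):
        self.rows, self.transform = rows, transform
        self.calls = 0          # counts how often the transform actually ran
    def __len__(self):
        return len(self.rows)
    def __getitem__(self, i):
        self.calls += 1         # transform runs at access time, not up front
        return self.transform(self.rows[i])

ds = LazyDataset(list(range(10)), lambda x: {"input_features": [x * 2]})
batch = [ds[i] for i in (0, 1, 2)]  # a collator would pad these together
print(ds.calls)  # 3: only the accessed rows were processed
```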
But now a strange thing is happening: no matter how much I increase the batch size, GPU VRAM usage does not change, while the total number of steps in the progress bar (logging) changes. Is this normal, or have I made a mistake?
### Steps to reproduce the bug
I can share my code if needed.
### Expected behavior
Equal to the batch size value, the set_transform function is applied to the dataset and given to the model as a batch.
### Environment info
all updated versions | https://github.com/huggingface/datasets/issues/7070 | open | [] | 2024-07-25T15:19:34Z | 2024-07-25T15:19:34Z | 0 | VafaKnm |
huggingface/chat-ui | 1,361 | Unhandled error event upon start with Koboldcpp | I have MongoDB set up, as well as koboldcpp running Llama 3.1 8B on Windows for inference, but chat-ui will not start.
```
yas@zen:~/chat-ui$ npm run dev -- --open
> chat-ui@0.9.1 dev
> vite dev --open
VITE v4.5.3 ready in 2735 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h to show help
node:events:497
throw er; // Unhandled 'error' event
^
Error: spawn xdg-open ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:292:12)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn xdg-open',
path: 'xdg-open',
spawnargs: [ 'http://localhost:5173/' ]
}
Node.js v21.4.0
```
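The first error is unrelated to koboldcpp: `--open` makes Vite spawn `xdg-open` to launch a browser, and that binary is missing (common under WSL or headless Linux). Two untested workarounds:

```shell
# Either start the dev server without auto-opening a browser...
npm run dev
# ...or install the utility that provides xdg-open (Debian/Ubuntu):
sudo apt-get install -y xdg-utils
```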
Furthermore, sometimes at random times this error log also appears:
```
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan receive]:
github.com/evanw/esbuild/internal/helpers.(*ThreadSafeWaitGroup).Wait(...)
github.com/evanw/esbuild/internal/helpers/waitgroup.go:36
main.runService.func2()
github.com/evanw/esbuild/cmd/esbuild/service.go:114 +0x59
main.runService(0x1)
github.com/evanw/esbuild/cmd/esbuild/service.go:160 +0x4ed
main.main()
github.com/evanw/esbuild/cmd/esbuild/main.go:240 +0xa29
goroutine 20 [chan receive]:
main.runService.func1()
github.com/evanw/esbuild/cmd/esbuild/service.go:98 +0x4a
created by main.runService
github.com/evanw/esbuild/cmd/esbuild/service.go:97 +0x1e5
goroutine 21 [chan receive]:
main.(*serviceType).sendRequest(0xc0000a7ec0, {0x915100, 0xc0004f1380})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0xfa
main.runService.func3()
github.com/evanw/esbuild/cmd/esbuild/service.go:125 +0x39
created by main.runService
github.com/evanw/esbuild/cmd/esbuild/service.go:122 +0x31c
goroutine 52 [chan receive]:
github.com/evanw/esbuild/internal/bundler.(*scanner).scanAllDependencies(0xc0002e9200)
github.com/evanw/esbuild/internal/bundler/bundler.go:1857 +0x232
github.com/evanw/esbuild/internal/bundler.ScanBundle(_, {_, _, _, _, _, _}, {_, _}, 0xc0001a6540, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:1262 +0xb36
github.com/evanw/esbuild/pkg/api.rebuildImpl({0xc0001a6540, {0xc000191b78, 0x1, 0x1}, {0x0, 0x0, 0x0}, {0x0, 0x1, 0x2, ...}, ...}, ...)
github.com/evanw/esbuild/pkg/api/api_impl.go:1501 +0x2e5
github.com/evanw/esbuild/pkg/api.(*internalContext).rebuild(_)
github.com/evanw/esbuild/pkg/api/api_impl.go:1031 +0x2a5
github.com/evanw/esbuild/pkg/api.(*internalContext).Rebuild(0xc0004a4f00?)
github.com/evanw/esbuild/pkg/api/api_impl.go:1092 +0x58
main.(*serviceType).handleIncomingPacket.func5()
github.com/evanw/esbuild/cmd/esbuild/service.go:293 +0xd5
created by main.(*serviceType).handleIncomingPacket
github.com/evanw/esbuild/cmd/esbuild/service.go:290 +0x118d
goroutine 43 [chan receive]:
main.(*serviceType).sendRequest(0xc0000a7ec0, {0x915100, 0xc0004b1710})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0xfa
main.(*serviceType).convertPlugins.func2.3({{0xc0000de480, 0x48}, {0x985a9c, 0x7}, {0x0, 0x0}, {0x0, 0x0}, 0x2, {0x0, ...}})
github.com/evanw/esbuild/cmd/esbuild/service.go:973 +0x768
github.com/evanw/esbuild/pkg/api.(*pluginImpl).onResolve.func1({{0xc0000de480, 0x48}, {0x0, 0x0}, {0x0, 0x0}, {{0x985a9c, 0x7}, {0x0, 0x0}, ...}, ...})
github.com/evanw/esbuild/pkg/api/api_impl.go:1936 +0x1f5
github.com/evanw/esbuild/internal/bundler.RunOnResolvePlugins({_, _, _}, _, {0xc0003dc070, 0xc0003ce8a0, 0xc0003ce8b8, 0xc000396ac0, 0x6, 0xc000195b60}, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:831 +0x8d5
github.com/evanw/esbuild/internal/bundler.parseFile({{0xa6f198, 0xc000396ae0}, {0xc0003dc070, 0xc0003ce8a0, 0xc0003ce8b8, 0xc000396ac0, 0x6, 0xc000195b60}, 0xc0002e8d80, 0xc0001a6540, ...})
github.com/evanw/esbuild/internal/bundler/bundler.go:397 +0x3187
created by github.com/evanw/esbuild/internal/bundler.(*scanner).maybeParseFile
github.com/evanw/esbuild/internal/bundler/bundler.go:1385 +0xab6
``` | https://github.com/huggingface/chat-ui/issues/1361 | closed | [
"support"
] | 2024-07-25T14:32:44Z | 2024-07-26T12:11:50Z | 1 | cody151 |
huggingface/lighteval | 238 | What is `qem` for gsm8k evaluation? | As titled.
Thank you! | https://github.com/huggingface/lighteval/issues/238 | closed | [] | 2024-07-25T14:30:44Z | 2024-09-15T02:19:57Z | null | shizhediao |
huggingface/optimum | 1,972 | Whisper-large-v3 transcript is trimmed | ### System Info
```shell
optimum 1.21.2
Ubuntu 22.04.4 LTS
CUDA 12.3
cuda-toolkit 11.7
onnxruntime 1.18.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
```
import os
from transformers import WhisperForConditionalGeneration, WhisperProcessor, PretrainedConfig
import torch
import torchaudio
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
model_name = 'openai/whisper-large-v3'
model_path = 'whisper-large-v3'
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_config = PretrainedConfig.from_pretrained(model_name)
sessions = ORTModelForSpeechSeq2Seq.load_model(
os.path.join(model_path, 'encoder_model.onnx'),
os.path.join(model_path, 'decoder_model.onnx'),
)
model = ORTModelForSpeechSeq2Seq(
sessions[0],
sessions[1],
model_config,
model_path,
use_cache=False,
).to(device)
audio, sr = torchaudio.load("example.ogg")
audio = torchaudio.functional.resample(audio[0], sr, 16000)
input_features = processor(audio.cpu(), return_tensors="pt", sampling_rate=16000, max_new_tokens=1000).input_features.to(device)
predicted_ids = model.generate(input_features)[0]
transcription = processor.decode(predicted_ids)
print(transcription)
```
### Expected behavior
For some reason the final transcript is incomplete and is cut off in the middle of the speech.
I've tried changing the max_tokens and max_new_tokens parameters, but nothing has changed.
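One thing to double-check (an assumption on my part, untested): in the snippet above, `max_new_tokens` is passed to the processor, where it has no effect; generation length is governed by `model.generate`, and Whisper's decoder is capped at 448 tokens per 30-second window. A fragment continuing the snippet:

```python
# Hypothetical fix: pass the length limit to generate(), not to the processor.
predicted_ids = model.generate(input_features, max_new_tokens=440)
```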
Also I didn't understand how to pass compute type and batch size as parameters.
PretrainedConfig and GenerationConfig don't have such parameters. Could anyone help me? | https://github.com/huggingface/optimum/issues/1972 | open | [
"bug"
] | 2024-07-25T12:04:18Z | 2024-07-31T08:05:02Z | 4 | yv0vaa |
huggingface/lerobot | 341 | question: expected performance of vq-bet? | Hi,
Thank you to the LeRobot community for maintaining such a fantastic codebase. My research group and I have greatly benefited from your efforts. In my current project, I am using the repository primarily for analyzing algorithms across different environments. I wanted to raise an issue I am encountering with VQ-BeT. I have been using the model on PushT and I want to ensure that the results I am obtaining align with community expectations. If not, I might be using the VQ-BeT repository incorrectly and would appreciate any guidance.
I used the following command: `python lerobot/scripts/train.py vqbet pusht`
For VQ-BeT, it seems like the maximum success rate is exactly 60%, whereas for Diffusion Policy the maximum success rate is 74%. Below, I have attached the wandb figures for the success rate vs training steps (left is for VQ-BeT and right is for Diffusion Policy):
<img width="350" alt="Screenshot 2024-07-24 at 9 33 00 PM" src="https://github.com/user-attachments/assets/e280066d-b24d-4e4b-a980-374edf485763">
<img width="350" alt="Screenshot 2024-07-24 at 9 33 14 PM" src="https://github.com/user-attachments/assets/bada456f-9be2-450e-9990-802ff117205c">
Are these results expected for the algorithm? If not, am I running the wrong commands to reproduce the SOTA results?
Thank you for your assistance. | https://github.com/huggingface/lerobot/issues/341 | closed | [
"question",
"policies",
"stale"
] | 2024-07-25T04:35:06Z | 2025-10-07T02:27:24Z | null | Jubayer-Hamid |
huggingface/text-generation-inference | 2,302 | how to use the model's checkpoint in a local folder? | ### System Info
ghcr.io/huggingface/text-generation-inference 2.0.4
platform windows10
Docker version 27.0.3
llm model:lllyasviel/omost-llama-3-8b-4bits
cuda 12.3
gpu nvidia rtx A6000
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
C:\Users\Administrator>docker run --gpus all -p 8080:80 -v ./data:/data ghcr.io/huggingface/text-generation-inference:2.0.4 --model-id "F:\Omost-main\checkpoints\models--lllyasviel--omost-llama-3-8b-4bits" --max-total-tokens 9216 --cuda-memory-fraction 0.8
### Expected behavior
Even though I set `--model-id` to my local path, Docker raises an error.
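The likely cause (my assumption): `--model-id` receives a Windows host path, which does not exist inside the container; TGI only sees paths that are mounted into it. An untested sketch of a corrected invocation, assuming the mounted folder contains `config.json` directly:

```shell
docker run --gpus all -p 8080:80 ^
  -v F:\Omost-main\checkpoints:/data ^
  ghcr.io/huggingface/text-generation-inference:2.0.4 ^
  --model-id /data/models--lllyasviel--omost-llama-3-8b-4bits ^
  --max-total-tokens 9216 --cuda-memory-fraction 0.8
```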

| https://github.com/huggingface/text-generation-inference/issues/2302 | open | [
"Stale"
] | 2024-07-25T04:26:44Z | 2024-08-25T01:57:54Z | null | zk19971101 |
huggingface/diffusers | 8,957 | StableDiffusionSafetyChecker ignores `attn_implementation` load kwarg | ### Describe the bug
`transformers` added `sdpa` and FA2 for CLIP model in https://github.com/huggingface/transformers/pull/31940. It now initializes the vision model like https://github.com/huggingface/transformers/blob/85a1269e19af022e04bc2aad82572cd5a9e8cdd9/src/transformers/models/clip/modeling_clip.py#L1143.
However, `StableDiffusionSafetyChecker` uses https://github.com/huggingface/diffusers/blob/2c25b98c8ea74cfb5ec56ba49cc6edafef0b26af/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L41 so it always gets initialized with sdpa attention.
### Reproduction
```python
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
model = StableDiffusionSafetyChecker.from_pretrained(
"runwayml/stable-diffusion-v1-5",
subfolder="safety_checker",
attn_implementation="eager"
)
print(type(model.vision_model.vision_model.encoder.layers[0].self_attn))
```
Expected `transformers.models.clip.modeling_clip.CLIPAttention` but got `transformers.models.clip.modeling_clip.CLIPSdpaAttention`.
### Logs
_No response_
### System Info
diffusers 0.29.0
transformers 4.43.1
### Who can help?
@sayakpaul @dn | https://github.com/huggingface/diffusers/issues/8957 | closed | [
"bug",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-24T19:38:23Z | 2024-11-19T21:06:53Z | 8 | jambayk |
huggingface/transformers.js | 862 | how to retain spiece token markers | ### Question
When evaluating a model that uses SentencePiece with transformers.js, I do not get the `▁` marker included in the output as I do when running from Python. I'm using the qanastek/pos-french-camembert model to do POS tagging and have situations where a single word, such as a verb with a tense suffix, is returned as two or more tokens. I'd like to process the group of tokens and decide how to handle the different labels. I see the `pre_tokenizer` and `decoder` fields of the model's `tokenizer.json` include references to the `Metaspace` parameter, but I'm unsure if it's possible to configure it to retain the space placeholder token. | https://github.com/huggingface/transformers.js/issues/862 | open | [
"question"
] | 2024-07-24T16:01:44Z | 2024-07-24T17:14:58Z | null | joprice |
huggingface/transformers | 32,186 | callback to implement how the predictions should be stored | https://github.com/huggingface/transformers/issues/32186 | closed | [] | 2024-07-24T11:36:26Z | 2024-07-24T11:39:13Z | null | Imran-imtiaz48 | |
huggingface/optimum | 1,969 | Latest Optimum library is not compatible with latest Transformers | ### System Info
```shell
Any system that can install those libraries
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Try to install transformers along with optimum
### Expected behavior
Fix here? https://github.com/huggingface/optimum/blob/main/setup.py#L18 | https://github.com/huggingface/optimum/issues/1969 | closed | [
"bug"
] | 2024-07-24T06:49:07Z | 2024-08-20T09:06:19Z | 1 | lanking520 |
huggingface/diffusers | 8,953 | Why is loading LoRA weights so slow? | I used diffusers to load LoRA weights, but it is very slow to finish.
diffusers version: 0.29.2
I tested another version of diffusers (0.23.0) without peft installed, and the time is decent.
```
t1 = time.time()
pipe.load_lora_weights("/data/**/lora_weights/lcm-lora-sdxl/", weight_name="pytorch_lora_weights.safetensors")
print(f"load lcm lora weights cost: {time.time()- t1}")
```


And if I use an older version of diffusers, much of the code needs to be modified, which is a lot of work.
Any help would be appreciated.
| https://github.com/huggingface/diffusers/issues/8953 | closed | [
"peft"
] | 2024-07-24T06:16:42Z | 2024-10-15T15:23:34Z | 18 | zengjie617789 |
huggingface/accelerate | 2,956 | How to run a vision model (like LLaVA) based on pippy? | Currently I am trying to apply model parallelism based on pippy, referring to the given example:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import PartialState, prepare_pippy
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-chat-hf", low_cpu_mem_usage=True, attn_implementation="sdpa"
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
prompts = ("I would like to", "I really like to", "The weather is pretty") # bs = 3
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
model = prepare_pippy(model, split_points="auto", example_kwargs=inputs)
inputs = inputs.to(0)
with torch.no_grad():
output = model(**inputs)
if PartialState().is_last_process:
next_token_logits = output[0][:, -1, :]
next_token = torch.argmax(next_token_logits, dim=-1)
print(tokenizer.batch_decode(next_token))
```
But I don't know how to convert it to the vision-model sample. Currently, my code is:
```
import requests
from PIL import Image
from accelerate import Accelerator, load_checkpoint_and_dispatch, init_empty_weights
import torch
import torch.distributed as dist
from transformers import AutoProcessor, LlavaForConditionalGeneration, LlavaNextForConditionalGeneration, LlavaNextProcessor
from accelerate import PartialState, prepare_pippy
from transformers.models.auto.tokenization_auto import AutoTokenizer
if __name__ == "__main__":
model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
model = LlavaNextForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True
)
model.eval()
processor = LlavaNextProcessor.from_pretrained(model_id)
inputs = processor(prompt, raw_image, return_tensors='pt')
model = prepare_pippy(model, split_points="auto", example_kwargs=inputs)
inputs = inputs.to(0)
with torch.no_grad():
output = model(**inputs)
if PartialState().is_last_process:
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
and I get the error below:
`accelerate launch --num_processes 2 llava/accelerate/pipeline_inference.py`
```
The following values were not passed to `accelerate launch` and had defaults used instead:
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in `--num_processes=1`.
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 9.15it/s]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.01it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1764, in run_node
[rank0]: return node.target(*args, **kwargs)
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/__init__.py", line 470, in sym_int
[rank0]: return math.floor(a) if a >= 0 else math.ceil(a) # type: ignore[arg-type, call-overload]
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/__init__.py", line 376, in __bool__
[rank0]: return self.node.bool_()
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 432, in bool_
[rank0]: return self.guard_bool("", 0)
[rank0]: File "/home/zhenghao-lu/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/fx/experimental/sym_node.py", line 374, in guard_bool
[rank0]: r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
[rank0]: File | https://github.com/huggingface/accelerate/issues/2956 | closed | [] | 2024-07-24T03:13:21Z | 2024-09-13T15:06:32Z | null | JerryLu991223 |
huggingface/transformers.js | 859 | JavaScript code completion model | ### Question
Currently we have two Python code completion models:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/examples/code-completion/src/App.jsx#L9-L13
And since we are doing JavaScript here, I would like a model optimized on JavaScript. Does anyone have a JavaScript code completion model? | https://github.com/huggingface/transformers.js/issues/859 | open | [
"question"
] | 2024-07-23T13:51:58Z | 2024-07-23T13:51:58Z | null | kungfooman |
huggingface/dataset-viewer | 2,994 | Compute leaks between splits? | See https://huggingface.co/blog/lbourdois/lle
Also: should we find the duplicate rows? | https://github.com/huggingface/dataset-viewer/issues/2994 | open | [
"question",
"feature request",
"P2"
] | 2024-07-23T13:00:39Z | 2025-06-24T11:39:37Z | null | severo |
huggingface/datasets | 7,066 | One subset per file in repo ? | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```
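One way to tell these two layouts apart is to group files whose names differ only in digits; a sketch of that idea (illustration only, not loader code):

```python
import re
from collections import defaultdict

def group_subsets(filenames):
    groups = defaultdict(list)
    for name in filenames:
        key = re.sub(r"\d+", "#", name)  # train0.jsonl, train1.jsonl -> train#.jsonl
        groups[key].append(name)
    return dict(groups)

files = ["train0.jsonl", "train1.jsonl", "train2.jsonl", "animals.jsonl", "trees.jsonl"]
print(group_subsets(files))
# {'train#.jsonl': ['train0.jsonl', 'train1.jsonl', 'train2.jsonl'],
#  'animals.jsonl': ['animals.jsonl'], 'trees.jsonl': ['trees.jsonl']}
```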
It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits? | https://github.com/huggingface/datasets/issues/7066 | open | [] | 2024-07-23T12:43:59Z | 2025-06-26T08:24:50Z | 1 | lhoestq |
huggingface/transformers | 32,145 | callback to implement how the predictions should be stored. | I am exploring distributed inference capabilities with the Hugging Face Trainer for transformers. I need to do distributed inference across multiple devices or nodes and save the predictions to a file. However, after reviewing the available callbacks, I did not find any that facilitate this specific task. Furthermore, when using the trainer.predict method, I noticed that it returns only the labels and predictions, without including the original input batches used for inference.
PyTorch Lightning offers a flexible mechanism for handling prediction outputs using custom callbacks. For example, the following PyTorch Lightning code snippet demonstrates how a custom **BasePredictionWriter** callback can be implemented to save predictions to files:
```python
import torch
import os
from lightning.pytorch.callbacks import BasePredictionWriter
class CustomWriter(BasePredictionWriter):
def __init__(self, output_dir, write_interval):
super().__init__(write_interval)
self.output_dir = output_dir
def write_on_batch_end(
self, trainer, pl_module, prediction, batch_indices, batch, batch_idx, dataloader_idx
):
torch.save(prediction, os.path.join(self.output_dir, str(dataloader_idx), f"{batch_idx}.pt"))
def write_on_epoch_end(self, trainer, pl_module, predictions, batch_indices):
torch.save(predictions, os.path.join(self.output_dir, "predictions.pt"))
pred_writer = CustomWriter(output_dir="pred_path", write_interval="epoch")
trainer = Trainer(callbacks=[pred_writer])
model = BoringModel()
trainer.predict(model, return_predictions=False)
``` | https://github.com/huggingface/transformers/issues/32145 | open | [
"Feature request"
] | 2024-07-22T21:32:22Z | 2024-07-24T09:23:07Z | null | sachinya00 |
huggingface/diffusers | 8,930 | StableDiffusionXLControlNetImg2ImgPipeline often fails to respect "pose" control images | ### Describe the bug
Hello,
When using [StableDiffusionXLControlNetImg2ImgPipeline](https://huggingface.co/docs/diffusers/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetImg2ImgPipeline) and passing a "pose" control image, the pipeline often fails to produce an output image that maintains the pose.
I couldn't find much info about this pipeline used with a pose image; I'd like to know whether the problem comes from the underlying pipe not being able to run an inference with this conditioning or if I'm doing something wrong, eg haven't found the right params.
Note that on the link above the example snippet uses a canny image; and that the [controlnet model I'm using](https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0) uses a pose image but the `StableDiffusionXLControlNetPipeline` pipeline instead of `StableDiffusionXLControlNetImg2ImgPipeline`.
In the snippet below, [the control image used](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png) works; however, most control images will fail, in my experience.
How can I get this pipeline to consistently respect the pose from `control_image`?
Thanks,
### Reproduction
Using this image as reference image for the img2img part:
<img src="https://github.com/user-attachments/assets/7ed0d32c-78ab-40d8-b448-03755bb6095" width="200" height="200">
```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel
from diffusers import StableDiffusionXLControlNetImg2ImgPipeline
from diffusers.utils import load_image
from PIL import Image
controlnet = ControlNetModel.from_pretrained(
"thibaud/controlnet-openpose-sdxl-1.0",
torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose_image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png",
)
control_image = openpose(pose_image).resize((1024, 1024))
control_image.save('control.png')
prompt = "daiton style, daiton, A brave sheriff with a star badge, wearing a cowboy hat and spurs, standing confidently, illustration style, minimalist, illustration style, minimalist, solid color background"
negative_prompt = "blurry, ugly, messy weird"
image = Image.open(
<above image>,
).resize((1024, 1024))
controlnet_conditioning_scale = 1.0
images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
image=image,
control_image=control_image,
strength=1.0,
num_inference_steps=30,
controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save("from_diffusers.png")
```
Things I have tried:
- various params (eg guidance scale, more steps)
- other pose image
- thicker edges in the pose image
- other image sizes
### Logs
```shell
/home/ubuntu/anaconda3/envs/inference_v2/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Loading pipeline components...: 100%|█| 7/7 [00:00<
100%|██████████████| 30/30 [00:14<00:00, 2.12it/s]
```
### System Info
- `diffusers` version: 0.27.2
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyTorch version (GPU?): 2.1.0 (True)
- Huggingface_hub version: 0.23.1
- Transformers version: 4.39.3
- Accelerate version: 0.25.0
- xFormers version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@yiyixuxu @sayakpaul @DN6 | https://github.com/huggingface/diffusers/issues/8930 | open | [
"bug",
"stale"
] | 2024-07-22T13:48:48Z | 2024-09-21T07:48:04Z | 14 | Clement-Lelievre |
huggingface/diffusers | 8,924 | Adding Differential Diffusion to Kolors, Auraflow, HunyuanDiT | Diffusers recently added support for the following models:
- [x] [Kolors](https://github.com/huggingface/diffusers/pull/8812) (@tuanh123789)
- [x] [AuraFlow](https://github.com/huggingface/diffusers/pull/8796)
- [x] [HunyuanDiT](https://github.com/huggingface/diffusers/pull/8240) (@MnCSSJ4x)
A few weeks ago, we also added community pipelines for [Differential Diffusion](https://arxiv.org/abs/2306.00950) utilizing [SDXL](https://github.com/huggingface/diffusers/pull/7550) and [SD3](https://github.com/huggingface/diffusers/pull/8679). You can search for "diff diff" comments in the PR files to find the required changes.
We would like to extend DiffDiff support for Kolors, AuraFlow and Hunyuan. Feel free to pick and iterate on one of the models that has not been assigned to someone else already 🤗
- You will have to create a community pipeline in [`examples/community`](https://github.com/huggingface/diffusers/tree/main/examples/community) folder.
- The pipeline may need to be an Image-to-Image variant of the original Text-to-Image implementation to start off, since it would then be easier to add the required changes for DiffDiff. It should, hopefully, not be too difficult and can be created by following the changes in our Img2Img pipelines. For example, [Kolors](https://github.com/huggingface/diffusers/blob/1a8b3c2ee86c09d0d3e066f7e9ea2ab69e8e78fa/src/diffusers/pipelines/kolors/pipeline_kolors.py) and [KolorsImg2Img](https://github.com/huggingface/diffusers/blob/1a8b3c2ee86c09d0d3e066f7e9ea2ab69e8e78fa/src/diffusers/pipelines/kolors/pipeline_kolors_img2img.py). Feel free to ping me for help regarding anything and mentioning what you tried
- Add the pipeline name, description, reference link, colab (if any), and your name to the [Community README](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md). Also, add a section with the necessary inference code and your cool image showcases 😎
- Create a PR posting a reproducible code example or link to a colab notebook. Also post a few generation results with all required input images for the code to be runnable.
When opening a PR, you can tag me and @DN6. For a more critical review of your generations, you can also tag @asomoza.
Make sure to read the Diffusers [contributing guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) if you're a first-time contributor.
You can find some more informational content here:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/custom_pipeline_overview
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/contribute_pipeline
Keep diffusing 🧨
**Edit: If you're working on this, it is better to follow the implementation of [Stable Diffusion 3](https://github.com/huggingface/diffusers/pull/8679). Make sure to not add any additional pre-processing code to the pipelines using external libraries or torchvision. You can follow the changes in [this commit](https://github.com/huggingface/diffusers/pull/8679/commits/c947fb6f91be954b8ba0daf2a3d5d806feb81571). Please make sure to go through all the links shared here**
Thanks to @MnCSSJ4x for adding support to HunyuanDiT! | https://github.com/huggingface/diffusers/issues/8924 | closed | [
"good first issue",
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2024-07-22T07:17:58Z | 2024-10-31T19:18:32Z | 28 | a-r-r-o-w |
huggingface/candle | 2,349 | What is the equivalent of interpolate from torch.nn | Hi,
I need some help with translating things written in Python:
e.g. I have a statement like this:
```
import torch.nn.functional as F
result[mask] = result[mask] + F.interpolate(cur_result.permute(3,0,1,2).unsqueeze(0).contiguous(), (H, W, D), mode='trilinear', align_corners=False).squeeze(0).permute(1,2,3,0).contiguous()[mask]
```
What is the equivalent of `interpolate`? I've seen that `Tensor` has methods like `interpolate1d` and `interpolate2d`, but they only take the target dimension sizes of the tensor.
Also, it would be great to know how to reassign values at indices, and how to tell which `dim` to use, because in torch most of these functions have no `dim` argument.
Btw., I didn't ask this previously, but is this (Python/torch):
`D = indices.shape[-1]`
equivalent to(Rust/candle):
`let d = indices.dim(D::Minus1).unwrap();`
| https://github.com/huggingface/candle/issues/2349 | open | [] | 2024-07-21T22:14:33Z | 2024-07-21T22:14:33Z | null | wiktorkujawa |
huggingface/candle | 2,347 | how to specify generator for randn function | pytorch
```python
noise = torch.randn(x_start.size(), dtype=x_start.dtype, layout=x_start.layout, generator=torch.manual_seed(seed)).to(x_start.device)
```
how to specify seed in candle? | https://github.com/huggingface/candle/issues/2347 | closed | [] | 2024-07-21T10:30:35Z | 2024-07-21T12:33:23Z | null | jk2K |
huggingface/chat-ui | 1,354 | How do I use chat-ui with RAG (Retrieval-Augmented Generation)? | I have applied the RAG technique to the "HuggingFaceH4/zephyr-7b-beta" model, using MongoDB Atlas as the knowledge base, but I didn't find anything about how to connect chat-ui so that it passes the top-k documents to the model and the model can use that context to answer questions | https://github.com/huggingface/chat-ui/issues/1354 | open | [] | 2024-07-21T01:19:37Z | 2024-08-22T11:25:50Z | 1 | pedro21900 |
huggingface/chat-ui | 1,353 | Llama-3-70b - Together.ai failure | 
This config used to work on the older hugging chat 0.8.2
All my other models (OpenAI, Anthropic) work fine; it's just the Llama-3-70b from Together that fails.
```
{
"name" : "meta-llama/Meta-Llama-3-70B-Instruct-Lite",
"displayName": "Meta-Llama-3-70B-Instruct",
"description": "Generation over generation, Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning.",
"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/meta-logo.png",
"modelUrl": "https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct",
"websiteUrl": "https://llama.meta.com/llama3/",
"tokenizer" : "philschmid/meta-llama-3-tokenizer",
"promptExamples" : [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"stop": ["<|eot_id|>","<|end_of_text|>"],
"truncate": 6144,
"max_new_tokens": 2047
},
"endpoints" : [{
"type": "openai",
"baseURL": "https://api.together.xyz/v1",
"apiKey": 'TOGETHER_API_KEY_HERE'
}],
},
``` | https://github.com/huggingface/chat-ui/issues/1353 | open | [
"support",
"models"
] | 2024-07-20T19:30:16Z | 2024-07-25T13:45:54Z | 4 | gururise |
huggingface/diffusers | 8,907 | [Tests] Improve transformers model test suite coverage | Currently, we have different variants of transformers: https://github.com/huggingface/diffusers/tree/main/src/diffusers/models/transformers/. However, we don't have test suites for each of them: https://github.com/huggingface/diffusers/tree/main/tests/models/transformers/.
We are seeking contributions from the community to improve this situation. Below is a list of the model for which we would really appreciate test suites for:
- [x] [Hunyuan DiT](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py)
- [x] [Latte](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/latte_transformer_3d.py)
- [x] [Lumina](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/lumina_nextdit2d.py)
- [x] [Temporal Transformer](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_temporal.py)
## How to approach the process?
* Take the configuration object for each of these models from their respective pipeline tests suites. For example, for the Latte Transformer, it would be here: https://github.com/huggingface/diffusers/blob/3f1411767bc0f1837adb6f289713807f18599db3/tests/pipelines/latte/test_latte.py#L57
* Derive the expected inputs. You can do so by adding print statements to the corresponding pipeline file. For example, for the Latte Transformer, you could add print statements right here: https://github.com/huggingface/diffusers/blob/3f1411767bc0f1837adb6f289713807f18599db3/src/diffusers/pipelines/latte/pipeline_latte.py#L801 to investigate the shapes of the outputs and then use that information accordingly.
* Then it should be just about defining the test suite like so: https://github.com/huggingface/diffusers/blob/main/tests/models/transformers/test_models_transformer_sd3.py.
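For orientation, the pattern those suites follow (tiny config, dummy inputs, forward pass, shape check) looks roughly like the skeleton below. This is a framework-agnostic sketch using nested lists in place of torch tensors; the real suites build on the shared tester mixin in `tests/models`:

```python
import unittest

def shape(nested):
    """Stand-in for tensor.shape on nested lists (illustration only)."""
    dims = []
    while isinstance(nested, list) and nested:
        dims.append(len(nested))
        nested = nested[0]
    return tuple(dims)

class TinyTransformerTest(unittest.TestCase):
    """Skeleton of the pattern: tiny config -> dummy inputs -> forward pass -> shape check."""

    def get_dummy_inputs(self):
        # A real suite would build torch tensors sized from the tiny pipeline config.
        batch, channels, height, width = 2, 4, 8, 8
        return [[[[0.0] * width for _ in range(height)] for _ in range(channels)] for _ in range(batch)]

    def test_forward_output_shape(self):
        sample = self.get_dummy_inputs()
        output = sample  # placeholder for `model(sample).sample`
        self.assertEqual(shape(output), (2, 4, 8, 8))
```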
## Points to keep in mind when opening PRs
* Mention this issue and tag @DN6 and myself.
* Target only one modeling test at a time.
| https://github.com/huggingface/diffusers/issues/8907 | closed | [
"Good second issue",
"contributions-welcome"
] | 2024-07-19T10:14:34Z | 2024-08-19T03:00:12Z | 6 | sayakpaul |
huggingface/diffusers | 8,906 | there is no qk_norm in SD3Transformer2DModel. Is that right? | ### Describe the bug
there is no qk_norm in SD3Transformer2DModel. Is that right?
```python
self.attn = Attention(
    query_dim=dim,
    cross_attention_dim=None,
    added_kv_proj_dim=dim,
    dim_head=attention_head_dim // num_attention_heads,
    heads=num_attention_heads,
    out_dim=attention_head_dim,
    context_pre_only=context_pre_only,
    bias=True,
    processor=processor,
)
```
### Reproduction
1.
### Logs
_No response_
### System Info
29.2
### Who can help?
dukunpeng | https://github.com/huggingface/diffusers/issues/8906 | closed | [
"bug"
] | 2024-07-19T09:18:05Z | 2024-10-31T19:19:24Z | 3 | heart-du |
huggingface/lerobot | 334 | where to set the initial joint (position + angle) information when controlling real aloha robot? | ### System Info
```Shell
ubuntu 20
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
Hi guys, I am using PR #316, written by Cadene, to control the real aloha robot. When running the command `python control_robot.py teleoperate --robot aloha`, I found the follower moves quickly to HORIZONTAL_POSITION, with all arms aligned in HORIZONTAL_POSITION like a straight line. When I control the follower arm with the master arm, I find that the movement direction of the follower arm is exactly opposite to that of the master arm.
I think there may be some bug in the code, or it may be my own problem. I tried to figure out the following:
1. where to set the initial pose info of follower and leader in code
2. how to solve the opposite-movement problem; have you guys met the same problem?
Thx
### Expected behavior
^^ | https://github.com/huggingface/lerobot/issues/334 | closed | [
"question",
"stale"
] | 2024-07-19T08:53:39Z | 2025-10-23T02:29:22Z | null | cong1024 |
huggingface/distil-whisper | 145 | How to load a fine-tuned model for inference? | @sanchit-gandhi
I used the script from https://github.com/huggingface/distil-whisper/tree/main/training/flax/finetuning_scripts to fine-tune a model and obtained a model named flax_model.msgpack. How can I load this model for inference? Additionally, why did the size of the fine-tuned model increase? | https://github.com/huggingface/distil-whisper/issues/145 | open | [] | 2024-07-19T02:21:10Z | 2024-10-21T17:13:45Z | null | xinliu9451 |
huggingface/diffusers | 8,900 | How to load sd_xl_refiner_1.0.safetensors using from_single_file | ### Describe the bug
```
Traceback (most recent call last):
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 300, in <module>
A1111(local_dir, 'sd_xl_base_1.0.safetensors', steps=50, cfs_scale=8)
File "/workspace/work/private/TensorRT/demo/Diffusion/st_base.py", line 235, in A1111
refiner = StableDiffusionXLPipeline.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/loaders/single_file.py", line 503, in from_single_file
loaded_sub_model = load_single_file_sub_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/loaders/single_file.py", line 113, in load_single_file_sub_model
loaded_sub_model = create_diffusers_clip_model_from_ldm(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/loaders/single_file_utils.py", line 1411, in create_diffusers_clip_model_from_ldm
unexpected_keys = load_model_dict_into_meta(model, diffusers_format_checkpoint, dtype=torch_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/tensorrt/lib/python3.11/site-packages/diffusers/models/model_loading_utils.py", line 154, in load_model_dict_into_meta
raise ValueError(
ValueError: Cannot load because text_model.embeddings.position_embedding.weight expected shape tensor(..., device='meta', size=(77, 768)), but got torch.Size([77, 1280]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
```
### Reproduction
I have added `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True` but it does not work
```
def download_config(local_dir):
# check whether the specified sub-directory exists
sub_dir = '.huggingface'
path = os.path.join(local_dir, sub_dir)
# check whether the directory exists
if not os.path.isdir(path):
if 'base' in path:
local_config_path = snapshot_download(
repo_id="stabilityai/stable-diffusion-xl-base-1.0",
allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
local_dir=local_dir,
)
elif 'refiner' in path:
local_config_path = snapshot_download(
repo_id="stabilityai/stable-diffusion-xl-refiner-1.0",
allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
local_dir=local_dir,
)
return local_dir
def A1111(local_dir, model_name, steps, cfs_scale, dir=''):
pipe = StableDiffusionXLPipeline.from_single_file(
f'{local_dir}/{model_name}',
config=download_config(local_dir),
local_files_only=True,
torch_dtype=torch.float16,
).to("cuda")
# refiner model
refiner_path = '/workspace/work/private/hf_models/stable-diffusion-xl-refiner-1.0'
refiner = StableDiffusionXLPipeline.from_single_file(
f'{refiner_path}/sd_xl_refiner_1.0.safetensors',
text_encoder_2=pipe.text_encoder_2,
vae=pipe.vae,
config=download_config(local_dir),
local_files_only=True,
torch_dtype=torch.float16,
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True,
).to("cuda")
# lora
lora_dir = '/data/modeldata/aigc-fg-gen/v1.9/Lora'
# adapter name cannot contain dots
pipe.load_lora_weights(lora_dir, weight_name="fix_hands.pt", adapter_name="fix_hands")
pipe.load_lora_weights(lora_dir, weight_name="sdxl_lora_fg_v1.2_colorv2_shirt_mp.safetensors",
adapter_name="sdxl_lora_fg_v1_2_colorv2_shirt_mp")
pipe.set_adapters(["fix_hands", "sdxl_lora_fg_v1_2_colorv2_shirt_mp"], adapter_weights=[1.5, 0.8])
# sample name https://huggingface.co/docs/diffusers/v0.26.2/en/api/schedulers/overview#schedulers
scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
pipe.scheduler = scheduler
# prompt parameters
prompt = "xxxxx"
negative_prompt = "xxxxx"
generator = torch.Generator(device="cuda").manual_seed(1227346489)
num_images = 4
width, height = 1024, 1024
steps = steps
cfg_scale = cfs_scale
# if 'step' in model_name:
normal_optimization(pipe, infer=False)
params = {
'prompt': prompt,
'height': height,
'width': width,
'num_inference_steps': steps,
'guidance_scale': cfg_scale,
'negative_prom | https://github.com/huggingface/diffusers/issues/8900 | closed | [
"bug"
] | 2024-07-19T01:58:05Z | 2024-07-26T10:39:07Z | null | 631068264 |
huggingface/transformers.js | 854 | How do you delete a downloaded model? | ### Question
How do you delete a downloaded model that was downloaded to the IndexDB?
Thanks,
Ash | https://github.com/huggingface/transformers.js/issues/854 | closed | [
"question"
] | 2024-07-18T22:10:51Z | 2024-07-19T16:23:21Z | null | AshD |
huggingface/candle | 2,341 | how to use system prompt with the llama example? | Hi, I'm trying to pass a chat dialog in the [LLama3 format](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L222) to the [llama example](https://github.com/huggingface/candle/tree/main/candle-examples/examples/llama) via -prompt, the string is as follows:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Why is the sky blue?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
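For reference, that template can be assembled from a message list like this (my own sketch of the Llama 3 format, not code from the candle example):

```python
def build_llama3_prompt(messages):
    """Assemble the Llama 3 chat format shown above from (role, content) pairs."""
    parts = ["<|begin_of_text|>"]
    for role, content in messages:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>")
    # Trailing assistant header so the model continues as the assistant.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    ("system", "You are a helpful AI assistant."),
    ("user", "Why is the sky blue?"),
])
```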
This seems to confuse the model and, depending on the user prompt, can cause the model to generate gibberish characters (see also https://github.com/evilsocket/cake/issues/9):
(I've made a small change to load the prompt from a file if it is passed with @)
```sh
/path/to/compiled/llama3/example --model-id "meta-llama/Meta-Llama-3-8B" --prompt @hf-llama-test/prompt.txt
loading the model weights from meta-llama/Meta-Llama-3-8B
loading prompt from @hf-llama-test/prompt.txt ...
starting the inference loop
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Why is the sky blue?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
By: David Cope (2022, October 23)
14 tokens generated (16.831015425660595 token/s)
```
| https://github.com/huggingface/candle/issues/2341 | open | [] | 2024-07-18T10:44:54Z | 2024-07-18T14:35:09Z | null | evilsocket |
huggingface/text-generation-inference | 2,246 | can't start server with small --max-total-tokens, but works fine with a big setting | When I try to run CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --port 6634 --model-id /models/ --max-concurrent-requests 128 --max-input-length 64 --max-total-tokens 128 --max-batch-prefill-tokens 128 --cuda-memory-fraction 0.95, it says
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB. GPU has a total capacity of 44.53 GiB of which 1.94 MiB is free. Process 123210 has 44.52 GiB memory in use. Of the allocated memory 40.92 GiB is allocated by PyTorch, and 754.08 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management
But when setting big max tokens, CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --port 6634 --model-id /models/ --max-concurrent-requests 128 --max-input-length 1024 --max-total-tokens 2048 --max-batch-prefill-tokens 2048 --cuda-memory-fraction 0.95 works fine.
I don't get why small max tokens cause CUDA out of memory while large max tokens work fine. Can someone answer my question?
| https://github.com/huggingface/text-generation-inference/issues/2246 | closed | [
"question",
"Stale"
] | 2024-07-18T07:03:31Z | 2024-08-24T01:52:30Z | null | rooooc |
huggingface/diffusers | 8,881 | How to Generate Multiple Image Inference in Instruct Pix2Pix | Hello, I am currently working on how to utilize Instruct Pix2Pix for augmentation.
For this purpose, I want to generate images by feeding a Tensor of shape [64,3,84,84] (batch, channel, width, height) into the Instruct Pix2Pix pipeline, but the Instruct Pix2Pix pipeline provided by diffusers can only edit one image at a time.
Is it possible to edit multiple images at the same time? It seems that it works only with 1 batch size.
Is there a way to generate images with a batch size greater than one? | https://github.com/huggingface/diffusers/issues/8881 | closed | [] | 2024-07-17T07:47:09Z | 2024-09-02T00:45:15Z | null | E-SJ |
huggingface/transformers.js | 849 | AutoModel.from_pretrained - Which model is loaded | ### Question
I am using AutoModel.from_pretrained("Xenova/yolos-tiny") to load the Yolos model for object detection. Does transformers.js load the model_quantized.onnx by default? Would I be able to load model.onnx?
A related question: is there a way to check which model file was actually loaded, once loading completes? | https://github.com/huggingface/transformers.js/issues/849 | open | [
"question"
] | 2024-07-16T22:45:15Z | 2024-08-09T09:45:37Z | null | mram0509 |
huggingface/text-generation-inference | 2,239 | Can I somehow change attention type from 'FlashAttention' in the text-server-launcher? | https://github.com/huggingface/text-generation-inference/issues/2239 | closed | [
"question",
"Stale"
] | 2024-07-16T18:37:45Z | 2024-08-24T01:52:31Z | null | wasifmasood | |
huggingface/diarizers | 13 | How to solve `CUDA error: out of memory while doing inference for my diarization model` | ERROR - An error occurred: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
I'm using a `12GB NVIDIA GeForce RTX 2050` with Cuda compilation tools, release 11.8
How can I solve this, or how can I use batching/`batch_size` while doing inference? | https://github.com/huggingface/diarizers/issues/13 | open | [] | 2024-07-16T06:23:28Z | 2024-08-18T04:20:16Z | null | Ataullha |
huggingface/datasets | 7,051 | How to set_epoch with interleave_datasets? | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch)
Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start.
How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset...
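For what it's worth, the per-epoch reshuffling part in isolation is just a seeded shuffle per pass; a plain-Python sketch with no `datasets` involved:

```python
import itertools
import random

def epoch_reshuffled(items):
    """Yield items forever, reshuffling with a fresh epoch-derived seed on each pass."""
    for epoch in itertools.count():
        shuffled = list(items)
        random.Random(epoch).shuffle(shuffled)  # analogous to ds.set_epoch(epoch)
        yield from shuffled

stream = epoch_reshuffled(range(10))
first_epoch = [next(stream) for _ in range(10)]
second_epoch = [next(stream) for _ in range(10)]
# Each pass is a permutation of the same 10 examples, in a (likely) different order.
```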
Something like
```
import itertools
from datasets import IterableDataset, interleave_datasets, load_dataset

dataset_a = load_dataset(...)
dataset_b = load_dataset(...)
def epoch_shuffled_dataset(ds):
# How to make this maintain the number of shards in ds??
for epoch in itertools.count():
ds.set_epoch(epoch)
yield from iter(ds)
shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a})
interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted')
``` | https://github.com/huggingface/datasets/issues/7051 | closed | [] | 2024-07-15T18:24:52Z | 2024-08-05T20:58:04Z | null | jonathanasdf |
huggingface/accelerate | 2,933 | How to apply model parallelism on multiple machines? | Currently, I want to do LLM inference on multiple machines. Due to limited memory, I hope to use all the machines together to load the model, and I'm blocked at this point. I have only found that, based on device_map, I can do model parallelism on a single machine with multiple cards.
Could I get some ideas about how to realize this with Accelerate, or some other useful suggestions?
Thanks so much. | https://github.com/huggingface/accelerate/issues/2933 | closed | [] | 2024-07-15T14:09:10Z | 2025-03-08T06:48:09Z | null | JerryLu991223 |
huggingface/chat-ui | 1,344 | Ollama chatPromptTemplate and parameters | Hi,
I have tried adding phi3-3.8b as an Ollama model, hosted on my own on-prem Ollama server.
I basically copied the prompt template and parameters from microsoft/Phi-3-mini-4k-instruct as used on Hugging Face, but it does not seem to work; I always get "no output was generated".
Sending a generate/chat HTTP request directly to the Ollama server using phi3-3.8b works.
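For what it's worth, a Phi-3-style template expressed in chat-ui's templating syntax might look roughly like the line below. This is an untested guess: it assumes Phi-3's `<|user|>` / `<|assistant|>` / `<|end|>` markers and chat-ui's `{{#ifUser}}` / `{{#ifAssistant}}` helpers:

```
"chatPromptTemplate": "<|system|>\n{{preprompt}}<|end|>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}<|end|>\n{{/ifUser}}{{#ifAssistant}}<|assistant|>\n{{content}}<|end|>\n{{/ifAssistant}}{{/each}}<|assistant|>\n"
```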
In general, how can I generate the prompt template and parameters for models hosted on Ollama?
For instance llama3, or any other; I did not find any instructions for that. | https://github.com/huggingface/chat-ui/issues/1344 | open | [
"support"
] | 2024-07-15T12:38:12Z | 2024-09-18T17:57:30Z | 7 | ran-haim |
huggingface/transformers | 31,963 | How to manually stop the LLM output? | I'm using `TextIteratorStreamer` for streaming output.
Since the LLM may repeat its output indefinitely, I would like to be able to make it stop generating when it receives a cancel request.
Is there any way to accomplish this?
model: glm-4-9b-chat
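As far as I know, the usual suggestion is a custom `StoppingCriteria` passed to `model.generate(..., stopping_criteria=...)` that checks a cancellation flag on every step. Stripped of the transformers specifics, the cooperative-cancellation idea can be sketched in plain Python:

```python
import threading

def cancellable(token_iter, cancel_event):
    """Yield tokens until the source is exhausted or cancel_event is set."""
    for token in token_iter:
        if cancel_event.is_set():
            return  # analogous to a StoppingCriteria returning True
        yield token

cancel = threading.Event()
received = []
for i, token in enumerate(cancellable(iter(range(100)), cancel)):
    received.append(token)
    if i == 4:
        cancel.set()  # e.g. set when `await raw_request.is_disconnected()` is true
# Generation stops early: only the first 5 of 100 tokens are consumed.
```

In the real code, setting such a flag from the disconnect branch would make generation stop at the next step.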
```python
async def predict(messages, model_id: str, raw_request: Request, gen_kwargs: Dict):
global model, tokenizer
choice_data = ChatCompletionResponseStreamChoice(index=0, delta=DeltaMessage(role='assistant'), finish_reason=None)
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object='chat.completion.chunk')
yield '{}'.format(_dump_json(chunk, exclude_unset=True))
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt')
inputs = inputs.to(model.device)
streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
generation_kwargs = dict(input_ids=inputs, streamer=streamer)
generation_kwargs.update(gen_kwargs)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
for new_text in streamer:
print(new_text)
if raw_request is not None and await raw_request.is_disconnected():
print("disconnected")
# todo stop generate
choice_data = ChatCompletionResponseStreamChoice(index=0, delta=DeltaMessage(content=new_text), finish_reason=None)
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object='chat.completion.chunk')
yield '{}'.format(_dump_json(chunk, exclude_unset=True))
choice_data = ChatCompletionResponseStreamChoice(index=0, delta=DeltaMessage(content=''), finish_reason='stop')
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object='chat.completion.chunk')
yield '{}'.format(_dump_json(chunk, exclude_unset=True))
yield '[DONE]'
``` | https://github.com/huggingface/transformers/issues/31963 | closed | [] | 2024-07-15T07:09:43Z | 2024-07-16T00:34:41Z | null | invokerbyxv |
huggingface/chat-ui | 1,343 | vllm 400 status code (no body) error | Hello everyone, I am using vLLM's OpenAI-compatible API server, but I encountered a 400 status code (no body) error. How can I fix it? Thanks
vllm:
```
python -m vllm.entrypoints.openai.api_server --model /home/rickychen/桌面/llm/models/Infinirc-Llama3-8B-5G-v1.0 --dtype auto --worker-use-ray --tensor-parallel-size 2 --port 8001 --served-model-name Infinirc-Llama3-8B-5G-v1.0
```
huggingface chatui:
```
MODELS=`[
{
"name": "Infinirc-Llama3-8B-5G-v1.0",
"id": "Infinirc-Llama3-8B-5G-v1.0",
"endpoints": [{
"type": "openai",
"baseURL": "http://10.0.9.5:8001/v1",
"apiKey": "a"
}],
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}Human: {{content}}\n{{/ifUser}}{{#ifAssistant}}Assistant: {{content}}\n{{/ifAssistant}}{{/each}}Human: ",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
},
{
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
},
{
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"max_new_tokens": 1024
}
}
]`
```
error:
```
BadRequestError: 400 status code (no body)
at APIError.generate (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/error.mjs:41:20)
at OpenAI.makeStatusError (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:256:25)
at OpenAI.makeRequest (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:299:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generate (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/generate.ts:8:30)
at async textGenerationWithoutTitle (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/index.ts:56:3)
at async Module.mergeAsyncGenerators (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/utils/mergeAsyncGenerators.ts:13:34)
at async Module.textGeneration (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/index.ts:24:3)
at async Object.start (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/routes/conversation/[id]/+server.ts:325:26) {
status: 400,
headers: {
'content-length': '297',
'content-type': 'application/json',
date: 'Sun, 14 Jul 2024 12:47:33 GMT',
server: 'uvicorn'
},
request_id: undefined,
error: undefined,
code: undefined,
param: undefined,
type: undefined
}
[20:47:33.972] ERROR (31253): 400 status code (no body)
err: {
"type": "BadRequestError",
"message": "400 status code (no body)",
"stack":
Error: 400 status code (no body)
at APIError.generate (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/error.mjs:41:20)
at OpenAI.makeStatusError (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:256:25)
at OpenAI.makeRequest (file:///Volumes/MacPro/LLM/ChatUI/chat-ui-main/node_modules/openai/core.mjs:299:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/endpoints/openai/endpointOai.ts:111:36)
at async Module.generateFromDefaultEndpoint (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/generateFromDefaultEndpoint.ts:11:23)
at async generateTitle (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/title.ts:54:10)
at async Module.generateTitleForConversation (/Volumes/MacPro/LLM/ChatUI/chat-ui-main/src/lib/server/textGeneration/title.ts:17:19)
"status": 400,
"headers": {
"content-length": "1748",
"content-type": "application/json",
"date": "Sun, 14 Jul 2024 12:47:33 GMT",
"server": "uvicorn"
}
}
``` | https://github.com/huggingface/chat-ui/issues/1343 | open | [
"support"
] | 2024-07-14T12:49:59Z | 2024-09-19T12:26:36Z | 3 | rickychen-infinirc |
huggingface/chat-ui | 1,342 | undeclared Node version dependency | Using the current chat-ui Docker Hub image I am unable to connect to localhost:3000 to run a simple instance of chat-ui. The web service returns 'Not Found' for all routes. Included below is my docker-compose file. If I change the chat-ui image to build with Node 22, everything works as expected. Does chat-ui have an undocumented dependency on a particular version of Node? There is no 'engines' field in package.json. Should there be one? Should we be using Node >= 22? Is there a way to debug this or identify which package is causing the issue?
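For reference, if a Node version constraint turns out to be the fix, declaring it is a one-line addition to package.json (the exact range here is an assumption based on the Node 22 observation above):

```json
{
  "engines": {
    "node": ">=22"
  }
}
```

Note that npm only warns on an `engines` mismatch by default; setting `engine-strict=true` in `.npmrc` turns the warning into an install error.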
```dockercompose
version: '3.1'
services:
mongo:
image: docker.io/library/mongo
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
ports:
- 27017:27017
mongo-express:
image: docker.io/library/mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: root
ME_CONFIG_MONGODB_ADMINPASSWORD: example
ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/
ME_CONFIG_BASICAUTH: "false"
depends_on:
- mongo
chat-ui:
image: chat-ui:20
restart: always
ports:
- 3000:3000
- 5173:5173
volumes:
- type: bind
source: .env.local
target: /app/.env.local
depends_on:
- mongo
```
```docker
# syntax=docker/dockerfile:1
# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
# you will also find guides on how best to write your Dockerfile
ARG INCLUDE_DB=false
# stage that install the dependencies
FROM node:22 as builder-production
WORKDIR /app
COPY --link --chown=1000 package-lock.json package.json ./
RUN --mount=type=cache,target=/app/.npm \
npm set cache /app/.npm && \
npm ci --omit=dev
FROM builder-production as builder
ARG APP_BASE=
ARG PUBLIC_APP_COLOR=blue
ENV BODY_SIZE_LIMIT=15728640
RUN --mount=type=cache,target=/app/.npm \
npm set cache /app/.npm && \
npm ci
COPY --link --chown=1000 . .
RUN npm run build
# mongo image
FROM mongo:latest as mongo
# image to be used if INCLUDE_DB is false
FROM node:22-slim as local_db_false
# image to be used if INCLUDE_DB is true
FROM node:22-slim as local_db_true
RUN apt-get update
RUN apt-get install gnupg curl -y
# copy mongo from the other stage
COPY --from=mongo /usr/bin/mongo* /usr/bin/
ENV MONGODB_URL=mongodb://localhost:27017
RUN mkdir -p /data/db
RUN chown -R 1000:1000 /data/db
# final image
FROM local_db_${INCLUDE_DB} as final
# build arg to determine if the database should be included
ARG INCLUDE_DB=false
ENV INCLUDE_DB=${INCLUDE_DB}
# svelte requires APP_BASE at build time so it must be passed as a build arg
ARG APP_BASE=
# tailwind requires the primary theme to be known at build time so it must be passed as a build arg
ARG PUBLIC_APP_COLOR=blue
ENV BODY_SIZE_LIMIT=15728640
# install dotenv-cli
RUN npm install -g dotenv-cli
# switch to a user that works for spaces
RUN userdel -r node
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
PATH=/home/user/.local/bin:$PATH
WORKDIR /app
# add a .env.local if the user doesn't bind a volume to it
RUN touch /app/.env.local
# get the default config, the entrypoint script and the server script
COPY --chown=1000 package.json /app/package.json
COPY --chown=1000 .env /app/.env
COPY --chown=1000 entrypoint.sh /app/entrypoint.sh
COPY --chown=1000 gcp-*.json /app/
#import the build & dependencies
COPY --from=builder --chown=1000 /app/build /app/build
COPY --from=builder --chown=1000 /app/node_modules /app/node_modules
RUN npx playwright install
USER root
RUN npx playwright install-deps
USER user
RUN chmod +x /app/entrypoint.sh
CMD ["/bin/bash", "-c", "/app/entrypoint.sh"]
``` | https://github.com/huggingface/chat-ui/issues/1342 | closed | [
"support"
] | 2024-07-13T21:06:53Z | 2024-07-16T14:53:34Z | 2 | slmagus |
huggingface/diffusers | 8,858 | how to know variants=fp16 beforehand | ### Describe the bug
Among diffusion checkpoints, some are fp16 and some are not.
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_1',
torch_dtype=torch.float16,
variant='fp16'
)
```
```
pipe = DiffusionPipeline.from_pretrained(
'model_id_2',
torch_dtype=torch.float16,
)
```
How can I know beforehand whether the model supports a variant='fp16' version? Is it possible to tell from the checkpoint, maybe via an associated config file? This is needed in order to load models consistently across various model ids.
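One workaround (a sketch, not an official API): variant checkpoints carry the variant name as an extra infix in the weight filename, e.g. `diffusion_pytorch_model.fp16.safetensors`, so you can scan the repo's file list. `has_fp16_variant` below is a hypothetical helper; in practice you would pass it the result of `huggingface_hub.list_repo_files(model_id)`:

```python
def has_fp16_variant(repo_files):
    """Return True if any weight file in the repo is an fp16 variant.

    Variant files use the naming pattern
    `diffusion_pytorch_model.fp16.safetensors`, i.e. the variant name
    appears as an extra suffix right before the file extension.
    """
    weight_exts = (".safetensors", ".bin")
    return any(
        name.endswith(weight_exts) and ".fp16." in name.rsplit("/", 1)[-1]
        for name in repo_files
    )
```

You could then pass `variant='fp16'` to `from_pretrained` only when the helper returns True, and fall back to the plain weights otherwise.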
### Reproduction
Given above.
### Logs
_No response_
### System Info
Diffusers
### Who can help?
maybe @DN6 | https://github.com/huggingface/diffusers/issues/8858 | closed | [
"bug",
"stale"
] | 2024-07-13T08:52:13Z | 2025-01-27T01:45:50Z | null | pure-rgb |
huggingface/dataset-viewer | 2,986 | Include code snippets for other libraries? | For example, in https://github.com/huggingface/huggingface.js/pull/797, we add `distilabel`, `fiftyone` and `argilla` to the list of libraries the Hub knows. However, the aim is only to handle the user-defined tags better, not to show code snippets.
In this issue, I propose to discuss if we should expand the list of dataset libraries for which we show code snippets. For now, we support pandas, HF datasets, webdatasets, mlcroissant and dask.
We already mentioned polars as a potential new lib, I think. Maybe duckdb too? | https://github.com/huggingface/dataset-viewer/issues/2986 | open | [
"question",
"P2"
] | 2024-07-12T11:57:43Z | 2024-07-12T14:39:59Z | null | severo |
huggingface/trl | 1,830 | How to use `predict` function in `DPOTrainer` | I want to get the log-probabilities and rewards of my data through `predict`, but the prediction output seems to only include a single sample.
What is the correct usage of `predict`?

| https://github.com/huggingface/trl/issues/1830 | closed | [
"❓ question"
] | 2024-07-12T06:30:20Z | 2024-10-07T12:13:22Z | null | AIR-hl |
huggingface/datatrove | 248 | solved: how to launch a slurm executor from an interactive slurm job | I forget where in the docs/code I saw it, but it said not to launch a slurm executor from an `srun` interactive session - which is not always avoidable.
There is a simple workaround - unset `SLURM_*` env vars and then launch and it works just fine.
```
unset $(printenv | grep SLURM | sed -E 's/(.*)=.*/\1/' | xargs)
./my_datatrove_slurm.py
```
Of course, your `srun` session will now be w/o its env vars - which you may or may not care for.
To help others to find the solution, the error is likely to be:
```
srun: error: CPU binding outside of job step allocation, allocated CPUs are: 0x0000000000000FFF80000000000000000000000FFF8000000000.
srun: error: Task launch for StepId=120986.0 failed on node xxx-yyy-11: Unable to satisfy cpu bind request
srun: error: Application launch failed: Unable to satisfy cpu bind request
srun: Job step aborted
```
There is also [this discussion](https://groups.google.com/g/slurm-users/c/mp_JRutKmCc) that proposes to unset just `SLURM_CPU_BIND_*` env vars, so you'd then:
```
unset $(printenv | grep SLURM_CPU_BIND | sed -E 's/(.*)=.*/\1/' | xargs)
./my_datatrove_slurm.py
```
If you want to unset them just for the datatrove launcher use this one-liner syntax
```
SLURM_CPU_BIND= SLURM_CPU_BIND_VERBOSE= SLURM_CPU_BIND_LIST= SLURM_CPU_BIND_TYPE= ./my_datatrove_slurm.py
```
or you could of course unset them inside your script as well, which would make the launching even simpler.
That way all `SLURM_*` env vars will remain intact in your shell environment if you need them for something else.
edit:
I added:
```
import os
# datatrove fails to start slurm jobs from an interactive slurm job,
# so hack to pretend we aren't inside an interactive slurm job by removing SLURM env vars
for key in list(os.environ):  # iterate over a copy: popping while iterating the live view raises RuntimeError
    if key.startswith("SLURM_"):
        os.environ.pop(key)
```
on top of my script to make it always work.
| https://github.com/huggingface/datatrove/issues/248 | open | [] | 2024-07-12T04:08:02Z | 2024-07-13T01:15:56Z | null | stas00 |
huggingface/diffusers | 8,843 | variable (per frame) IP Adapter weights in video | is there a (planned or existing) way to have variable IP Adapter weights for videos (e.g. with AnimateDiff)?
That means setting different values for different frames, as both scaling and masking currently seem to apply to the whole generation at once (be it video or still image).
"stale",
"low-priority",
"consider-for-modular-diffusers"
] | 2024-07-11T16:49:43Z | 2024-12-13T15:05:24Z | 6 | eps696 |
huggingface/transformers.js | 846 | range error: array buffer allocation failed <- how to catch this error? | ### Question
While Transformers.js rocks on desktop, my Pixel with 6 GB of RAM almost always crashes the webpage when trying to run things like Whisper or TTS.
<img width="531" alt="Screenshot 2024-07-11 at 14 27 08" src="https://github.com/xenova/transformers.js/assets/805405/f8862561-7618-4c80-87e2-06c86f262698">
Is there a way to more gracefully anticipate/handle this?
| https://github.com/huggingface/transformers.js/issues/846 | open | [
"question"
] | 2024-07-11T12:32:46Z | 2024-07-11T12:32:46Z | null | flatsiedatsie |
huggingface/diffusers | 8,834 | Will the training code of SD3 Controlnet be released? | **Is your feature request related to a problem? Please describe.**
Training code of SD3 ControlNet
**Describe the solution you'd like.**
Could you please release the training code for SD3 ControlNet? I tried to train it but failed, so I want to check what the reason is.
| https://github.com/huggingface/diffusers/issues/8834 | closed | [] | 2024-07-11T03:32:55Z | 2024-09-11T01:34:38Z | 3 | ChenhLiwnl |
huggingface/optimum | 1,953 | Export AWQ models to ONNX | ### System Info
```shell
python==3.10
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
None
### Expected behavior
Hello, I am new to this and want to try converting models to ONNX format, and I have the following issue. I have a model that has been quantized to 4-bit, and then I converted this model to ONNX. My quantized model has a weight size of 7 GB, but when I run the conversion to ONNX, the resulting model.onnx_data has a size of 34 GB. Is there anything wrong here?
Below is my code:
```
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("SorawitChok/SeaLLM-7B-v2.5-AWQ")
ort_model = ORTModelForCausalLM.from_pretrained(
"SorawitChok/SeaLLM-7B-v2.5-AWQ",
# "/content/SeaLLM-7B-v2.5_4b",
use_io_binding=True,
export=True,
use_cache=True,
from_transformers=True,
# provider="CUDAExecutionProvider", # Change this to "CPUExecutionProvider" using CPU for inference
provider="CPUExecutionProvider", # Change this to "CPUExecutionProvider" using CPU for inference
)
print('=====Save Model====')
ort_model.save_pretrained("./SeaLLM-7B-v2.5-AWQ_onnx")
tokenizer.save_pretrained("./SeaLLM-7B-v2.5-AWQ_onnx")
```
Thanks for any help | https://github.com/huggingface/optimum/issues/1953 | closed | [
"feature-request",
"onnx"
] | 2024-07-11T02:18:56Z | 2024-07-25T12:42:38Z | 1 | Toan-it-mta |
huggingface/optimum | 1,951 | how can I get a onnx format int4 model? | ### System Info
```shell
Could you please tell me how I can obtain an int4 model in ONNX format?
I’ve used the following code to quantize an ONNX model into QUINT8, but when I tried to quantize it into INT4, I found there were no relevant parameters to choose. As far as I know, GPTQ allows selecting n-bit quantization. Could you advise me on what steps I should take?
Thanks for your help!
first step:
optimum-cli export onnx --model /dataset/zhangy34/ss_qwen2/ onnx_model/ --trust-remote-code --task text-generation
second step:
optimum-cli onnxruntime quantize \
--avx512 \
--onnx_model ./qwen2_ori_model \
-o ./onnx_model/qwen2_rtn_model
```
### Who can help?
@mi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
none
### Expected behavior
get a int4 onnx model | https://github.com/huggingface/optimum/issues/1951 | open | [
"bug"
] | 2024-07-10T14:00:19Z | 2024-07-10T14:00:19Z | 0 | zhangyu68 |
huggingface/diffusers | 8,824 | [Solved] How to make custom datasets for instruct-pix2pix? | ### Describe the bug
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/builder.py", line 1750, in _prepare_split_single
[rank0]: for key, record in generator:
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 315, in _generate_examples
[rank0]: raise ValueError(
[rank0]: ValueError: image at [image name].jpg doesn't have metadata in [my_metadata_path]metadata.jsonl.
```
### Reproduction
I want to make custom datasets for local.
### dataset
- datasets
- input_images
- image.jpg
- image.jpg
- ...
- edited_images
- edited_image.jpg
- edited_image.jpg
- ...
- metadata.jsonl
### metadata.jsonl
```
{"file_name": "input_images/image.jpg", "edited_images/edited_image": "edited_image.jpg", "edit_prompt": "sample"}
```
### train script
```
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export TRAIN_DIR="datasets/"
export OUTPUT_DIR="weights/"
accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--train_data_dir=$TRAIN_DIR \
--use_ema \
--resolution=512 --random_flip \
--train_batch_size=2 --gradient_accumulation_steps=4 --gradient_checkpointing \
--max_train_steps=15000 \
--checkpointing_steps=5000 --checkpoints_total_limit=1 \
--learning_rate=5e-05 --lr_warmup_steps=0 \
--conditioning_dropout_prob=0.05 \
--mixed_precision=fp16 \
--seed=42 \
--output_dir=${OUTPUT_DIR}
```
### error log
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/builder.py", line 1750, in _prepare_split_single
[rank0]: for key, record in generator:
[rank0]: File "/opt/venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 315, in _generate_examples
[rank0]: raise ValueError(
[rank0]: ValueError: image at [image name].jpg doesn't have metadata in [my_metadata_path]metadata.jsonl.
```
How do I make a custom dataset for local training?
I haven't found any solution.
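As a debugging aid (a sketch, not part of the training script), the check that `datasets` performs can be reproduced locally: when a `metadata.jsonl` is present, every image under the dataset root must appear as a `file_name` entry - in the layout above the `edited_images/*.jpg` files have no such entry, which matches the error:

```python
import json
from pathlib import Path

def find_images_without_metadata(dataset_dir):
    """Return image paths (relative to dataset_dir) missing from metadata.jsonl."""
    root = Path(dataset_dir)
    with open(root / "metadata.jsonl", encoding="utf-8") as f:
        listed = {json.loads(line)["file_name"] for line in f if line.strip()}
    found = {
        str(p.relative_to(root)).replace("\\", "/")  # normalize Windows separators
        for p in root.rglob("*.jpg")
    }
    return sorted(found - listed)
```

Running this over the dataset directory lists exactly the files the builder will complain about, which makes it easy to fix the jsonl before launching a long training run.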
### Logs
_No response_
### System Info
diffusers 0.30.0.dev0
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/8824 | closed | [
"bug"
] | 2024-07-10T05:35:38Z | 2024-07-11T02:18:40Z | null | jeonga0303 |
huggingface/optimum | 1,949 | ValueError: Trying to export a florence2 model | Hello,
I am attempting to export and quantize the Florence-2 model for CPU usage but encountered the following error:
```
ValueError: Trying to export a florence2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type florence2 to be supported natively in the ONNX export.
```
Based on the error message, it seems that the Florence-2 model is not natively supported for ONNX export, and a custom configuration is required.
Could you please provide guidance or support for exporting and quantizing the Florence-2 model using a custom ONNX configuration? It would be highly beneficial for my project to have native support for this model type, or at least detailed instructions on how to proceed with the custom export. | https://github.com/huggingface/optimum/issues/1949 | open | [
"feature-request",
"onnx"
] | 2024-07-10T04:59:06Z | 2024-10-23T10:07:05Z | 1 | ghost |
huggingface/transformers.js | 842 | Trying to run the Modnet example with Node.js on macOS results in Unknown model class "modnet", attempting to construct from base class. Model type for 'modnet' not found, assuming encoder-only architecture. | ### Question
Hello,
How can one run the modnet example?
```
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
const processor = await AutoProcessor.from_pretrained('Xenova/modnet');
// Load image from URL
const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024';
const image = await RawImage.fromURL(url);
// Pre-process image
const { pixel_values } = await processor(image);
// Predict alpha matte
const { output } = await model({ input: pixel_values });
// Save output mask
const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8')).resize(image.width, image.height);
mask.save('mask.png');
```
Thanks for the amazing work ! | https://github.com/huggingface/transformers.js/issues/842 | closed | [
"question"
] | 2024-07-09T16:19:22Z | 2025-03-27T18:58:03Z | null | gabrielstuff |
huggingface/chat-ui | 1,335 | [v0.9.1] Switch the LLM model mid-conversation? | ## Description
Currently, **chat-ui** does not support changing the language model once a conversation has started. For example, if I begin a chat with _Llama 3_, I cannot switch to _Gemini 1.5_ mid-conversation, even if I change the setting in the UI.
## Steps to Reproduce
* Start a conversation with one language model (e.g., _Llama 3_).
* Go to settings and change the model to a different one (e.g., _Gemini 1.5_).
* Observe that the model used in the conversation does not change.
## Expected Behavior
The language model should switch to the newly selected model, even mid-conversation.
## Additional Questions
* Is this a known limitation or a potential bug?
* If this is intended behavior, are there any plans to implement model-switching in the future?
* If not, could you provide guidance or resources on how to achieve this functionality? I can start implementing and raise a PR!
## Environment
* **OS**: macOS Sonoma
* **Browser**: Chrome, Safari, Arc
* **chat-ui** version: v0.9.1
cc: @nsarrazin | https://github.com/huggingface/chat-ui/issues/1335 | open | [] | 2024-07-09T13:43:16Z | 2024-09-13T16:45:23Z | 3 | adhishthite |
huggingface/transformers.js | 841 | Support opus-mt-mul-en translation in WebGPU | ### Question
I've been having some trouble where translation sometimes wasn't working. For example, I just tried translating Polish into English using `opus-mt-mul-en`. But if outputs empty strings.
So I started looking for what could be wrong, and in the Transformers.js source code I found this `marian.py` file:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/scripts/extra/marian.py#L18
It lists the supported Opus MT models, and while the model is available on Huggingface (https://huggingface.co/Xenova/opus-mt-mul-en), I'm guessing it isn't actually supported (yet)?
Do I understand correctly?
Related: is there a setting with the `mul` models that I need to set to select which language is translated into?
For completeness, here's some of my code:
Constructing the model:
```
const hf_model_url = 'Xenova/opus-mt-mul-en';
pipeline('translation', hf_model_url, {
progress_callback: progressCallback,
dtype: dtype_settings,
device: self.device
},
)
.then((pipe) => {
etc
```
And getting a translation out:
```
.pipe(sentence)
.then((translation) => {
etc
```
.. which already begs the question: as `opus-mt-en-mul` _is_ supported according to that file, how would that multilingual model know which language to output?
I'll continue searching to see if I can answer my own question :-)
| https://github.com/huggingface/transformers.js/issues/841 | closed | [
"question"
] | 2024-07-09T11:52:12Z | 2024-10-07T15:34:54Z | null | flatsiedatsie |
huggingface/parler-tts | 83 | How big a dataset is needed to train the model? | I used 560+ hours of libritts_R data to train the model (187M) from scratch, but the audio synthesized by the model is not correct.
Is this because the dataset is not large enough? | https://github.com/huggingface/parler-tts/issues/83 | open | [] | 2024-07-09T03:56:42Z | 2024-09-21T10:46:39Z | null | zyy-fc |
huggingface/datatrove | 242 | how to postpone filter init till it's running | So it appears that currently I can't instantiate a model on a gpu, because the filter object is created by the launcher, which either doesn't have a gpu or, even if it does, most likely has the wrong one, since we would need dedicated gpu(s) for each task.
Is it possible to add a 2nd init which would be the user init that will run on the actual job?
The filter task is simple - instantiate a model on a gpu and then run filter using it - of course we don't want model to be re-instantiated on every filter call.
Needing to `import torch` inside the `filter` is super-weird as well, but I get that it's due to pickle - but perhaps we can have two inits - one of the framework - and then another of the user.
So when a job is launched the first thing the framework runs is user defined `init` if any, and then proceeds normally.
I guess I will try to overcome this meanwhile using `@functools.cache` or something similar.
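A minimal, runnable sketch of that workaround (the model and the scoring here are stand-ins, not real datatrove or torch APIs): `functools.cache` defers the heavyweight initialization to the first `filter` call, which runs inside the worker process with the right GPU assigned, and the filter object itself stays picklable:

```python
import functools

calls = {"init": 0}

@functools.cache
def get_model():
    # Expensive one-time setup; in the real filter this is where you would
    # `import torch` lazily and move the model to the worker's assigned GPU.
    calls["init"] += 1
    return object()  # stand-in for the real model

def my_filter(doc):
    model = get_model()  # built on the first call, cached afterwards
    return len(doc) > 0  # stand-in for real model-based filtering

for doc in ["a", "b", "c"]:
    my_filter(doc)
print(calls["init"])  # -> 1: the model was initialized exactly once
```

The cached initializer runs per process, so each worker still builds its own model.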
Thank you!
tag: @guipenedo | https://github.com/huggingface/datatrove/issues/242 | open | [] | 2024-07-09T01:11:13Z | 2024-07-10T01:36:02Z | null | stas00 |
huggingface/hub-docs | 1,328 | Document how to filter and save searches on the hub (e.g. by model format, only LoRAs, by date range etc...) | **Doc request**
I'd really like to see documentation that clarifies how users can filter searches and when browsing models on the Hub.
Things I can't seem to find that I would expect / would make our lives better:
- A selection list or drop down to filter by popular model formats (GGUF, EXL2 etc...)
- A filter or 'explore by category' for original models, fine-tunes, quantisations, adapters etc...
- Filter by date created within (e.g. the last 2 months)
- How to save the filter/search so you can bookmark, share and come back to it later
**Additional context**
- Discussion about this on r/LocalLLaMA recently - https://www.reddit.com/r/LocalLLaMA/comments/1dyjh6m/comment/lc9dhjp/
If there actually isn't a way to do this on the hub at present, I would really love it if something like my shitty mock here could be considered:

| https://github.com/huggingface/hub-docs/issues/1328 | open | [] | 2024-07-08T22:51:55Z | 2024-07-10T19:17:42Z | null | sammcj |
huggingface/candle | 2,323 | How to freeze VarMap Vars? | Hello everybody,
Is there a way to freeze all `Var` tensors in the `VarMap`, like in the snippet below?
I mean something like implementing the `Iterator` trait, detaching the contained tensors from the graph, and adding a `Var` which can be trained!
```
# Freeze all the pre-trained layers
for param in model.parameters():
param.requires_grad = False
```
_Originally posted by @mohamed-180 in https://github.com/huggingface/candle/issues/891#issuecomment-2214407719_
| https://github.com/huggingface/candle/issues/2323 | open | [] | 2024-07-08T15:14:54Z | 2024-07-08T15:14:54Z | null | mohamed-180 |
huggingface/trl | 1,815 | How to use DoRA with ORPO | Hi! I'm running experiments where I'm comparing SFT to ORPO.
For SFT I currently initialize a `trl.SFTTrainer`, and pass `args=transformers.TrainingArguments(..., use_dora=True, ...)`.
For ORPO I'm supposed to pass `args=trl.ORPOConfig`, but according to the documentation this doesn't seem to support passing `use_dora` as an argument.
What's the best way to combine DoRA with ORPO? In theory this should of course be possible to combine. Can I just pass `transformers.TrainingArguments` to `trl.ORPOTrainer` or would this (silently) break things? | https://github.com/huggingface/trl/issues/1815 | closed | [] | 2024-07-08T11:12:48Z | 2024-07-08T15:39:42Z | null | julianstastny |
huggingface/text-generation-inference | 2,200 | How to clean the TGI guidance cache? | I use TGI guidance to enforce LLM choose a tool.
However, when I change the description of the tool, I find that TGI does not recompile the grammar.
Therefore, I want to know how to clear the cached compiled grammar. | https://github.com/huggingface/text-generation-inference/issues/2200 | closed | [] | 2024-07-08T05:37:55Z | 2024-07-18T15:01:07Z | null | EdisonE3 |
huggingface/transformers.js | 837 | Model downloads or running on server? | ### Question
Hey there,
I am using simple hosting with cPanel view as the admin. If I upload the ONNX model files to the file manager as well as the JS script to run the model, will it still need to download the model or will it not, since the file is uploaded there, along with the script. Provided of course that I disable automatic huggingface loading and add the directory to the models in the file manager through .env.
Your help will be highly appreciated.
Cheers. | https://github.com/huggingface/transformers.js/issues/837 | closed | [
"question"
] | 2024-07-06T23:07:15Z | 2025-01-20T19:50:12Z | null | moses-mbaga |
huggingface/lerobot | 305 | how to eval the policy trained by lerobot in real env? | ### System Info
```Shell
how to eval the policy trained by lerobot in real env?
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
In the code, I have not found any way to run a policy rollout in a real environment. Please help me figure it out.
### Expected behavior
how to infer the policy trained by lerobot in real env? | https://github.com/huggingface/lerobot/issues/305 | closed | [] | 2024-07-05T03:23:01Z | 2024-07-23T09:08:27Z | null | cong1024 |
huggingface/transformers.js | 836 | How do I free up memory after transliteration | ### Question
After I executed the translation in the worker, it seems that the memory could not be reclaimed when I called `pipeline.dispose()`; the memory was reclaimed only when the worker was closed. Can you help me with this? | https://github.com/huggingface/transformers.js/issues/836 | closed | [
"question"
] | 2024-07-04T15:16:33Z | 2024-07-05T07:19:31Z | null | raodaqi |
huggingface/transformers | 31,790 | How to implement bind_tools to custom LLM from huggingface pipeline(Llama-3) for a custom agent |
Example Code
```
name = "meta-llama/Meta-Llama-3-8B-Instruct"
auth_token = ""
tokenizer = AutoTokenizer.from_pretrained(name,use_auth_token=auth_token)
bnb_config = BitsAndBytesConfig(
load_in_8bit=True,
)
model_config = AutoConfig.from_pretrained(
name,
use_auth_token=auth_token,
    temperature=0.1,
)
model = AutoModelForCausalLM.from_pretrained(
name,
trust_remote_code=True,
config=model_config,
quantization_config=bnb_config,
device_map='auto',
use_auth_token=auth_token,
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4096, device_map="auto", streamer = streamer)
llm = HuggingFacePipeline(pipeline=pipe)
@tool
def some_custom_tool(input_string: str) -> str:
"""Executes some work and returns a success message if successfull else it return the error message"""
return "SUCCESS"
tools = [some_custom_tool]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
f"""
You are an Assistant......
""",
),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind_tools(tools)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
}
| prompt
| llm
| JsonOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, return_intermediate_steps= True)
```
Description
I am trying to bind a custom tool to the LLM just like with ChatOpenAI, but I am getting the following error. It looks like `bind_tools` does not exist on HuggingFacePipeline. Is there a way to bind a custom tool to an LLM from HuggingFacePipeline?
AttributeError: 'HuggingFacePipeline' object has no attribute 'bind_tools'
System Info:
```
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.11
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
Python 3.10.13
```
I am doing this on Kaggle GPU t4x2
| https://github.com/huggingface/transformers/issues/31790 | closed | [] | 2024-07-04T08:59:38Z | 2024-08-13T08:04:24Z | null | talhaty |
huggingface/diffusers | 8,788 | VAE Tiling not supported with SD3 for non power of 2 images? | ### Describe the bug
VAE tiling works for SD3 with power-of-2 image sizes, but not with any other alignment.
The mentioned issues with VAE tiling are due to: [vae/config.json](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/blob/main/vae/config.json)
Having:
```
"use_post_quant_conv": false,
"use_quant_conv": false
```
This causes the conv layers used here:
https://github.com/huggingface/diffusers/blob/589931ca791deb8f896ee291ee481070755faa26/src/diffusers/models/autoencoders/autoencoder_kl.py#L363
and here:
https://github.com/huggingface/diffusers/blob/589931ca791deb8f896ee291ee481070755faa26/src/diffusers/models/autoencoders/autoencoder_kl.py#L412
to be `None`.
Perhaps, at the moment, the model is simply not entirely compatible with the tiling in `AutoencoderKL`, as the state dict does not contain the keys `post_quant_conv.bias`, `quant_conv.weight`, `post_quant_conv.weight`, `quant_conv.bias`.
Is this intended?
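For reference, the non-tiled encode/decode paths appear to handle this case by skipping the conv when the config disables it; a minimal sketch (hypothetical helper name) of the same guard the tiled paths would need:

```python
def apply_optional_conv(conv, tile):
    """Apply quant_conv / post_quant_conv only if the VAE was built with one
    (SD3's VAE sets use_quant_conv / use_post_quant_conv to false, so the
    attributes are None)."""
    return tile if conv is None else conv(tile)

assert apply_optional_conv(None, [1, 2, 3]) == [1, 2, 3]                    # SD3-style VAE: pass-through
assert apply_optional_conv(lambda t: [x * 2 for x in t], [1, 2, 3]) == [2, 4, 6]
```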
### Reproduction
```python
import diffusers
import PIL.Image
import os
os.environ['HF_TOKEN'] = 'your token'
cn = diffusers.SD3ControlNetModel.from_pretrained('InstantX/SD3-Controlnet-Canny')
pipe = diffusers.StableDiffusion3ControlNetPipeline.from_pretrained(
'stabilityai/stable-diffusion-3-medium-diffusers',
controlnet=cn)
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
width = 1376
height = 920
# aligned by 16, but alignment by 64 also fails
output_size = (width-(width % 16), height-(height % 16))
not_pow_2 = PIL.Image.new('RGB', output_size)
args = {
'guidance_scale': 8.0,
'num_inference_steps': 30,
'width': output_size[0],
'height': output_size[1],
'control_image': not_pow_2,
'prompt': 'test prompt'
}
pipe(**args)
```
### Logs
```shell
REDACT\venv\Lib\site-packages\diffusers\models\attention_processor.py:1584: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
hidden_states = F.scaled_dot_product_attention(
Traceback (most recent call last):
File "REDACT\test.py", line 35, in <module>
pipe(**args)
File "REDACT\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\pipelines\controlnet_sd3\pipeline_stable_diffusion_3_controlnet.py", line 912, in __call__
control_image = self.vae.encode(control_image).latent_dist.sample()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 258, in encode
return self.tiled_encode(x, return_dict=return_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACT\venv\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 363, in tiled_encode
tile = self.quant_conv(tile)
^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
```
### System Info
Windows
diffusers 0.29.2
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza | https://github.com/huggingface/diffusers/issues/8788 | closed | [
"bug"
] | 2024-07-04T03:52:54Z | 2024-07-11T20:41:37Z | 2 | Teriks |
huggingface/diffusers | 8,785 | adding PAG Support for Hunyuan-DIT and Pixart-Sigma | We recently added PAG support for SDXL. Is anyone interested in extending PAG support to Hunyuan-DIT and Pixart-Sigma?
There is no implementation available, so it is a bit of a research-oriented project (= fun!!), and you can get direct feedback from the authors @sunovivid @HyoungwonCho
to add PAG support to new models:
* I think you should be able to use `PAGMixin` as it is (or with some modification)(https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pag_utils.py#L27)
* you will need to make PAG attention processors for the new model https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2564 based on the attention processor that the model uses, e.g. for Hunyuan-DIT, you need to make a `HunyuanPAGIdentitySelfAttnProcessor2_0` and `HunyuanPAGCFGIdentitySelfAttnProcessor2_0` based on `HunyuanAttnProcessor2_0` https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1499
* you will need to make a `HunyuanPAGPipeline` /`PixartSigmaPAGPipeline` under the `pag` folder (for now!) | https://github.com/huggingface/diffusers/issues/8785 | closed | [
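To make the "identity self-attention" perturbation concrete, here is a toy, framework-free sketch of the core PAG idea: the perturbed branch swaps the softmaxed attention map for the identity, so every token attends only to itself (numbers are made up):

```python
def attend(weights, values):
    """Toy attention: weighted sum of value vectors (scalars here)."""
    return [sum(w * v for w, v in zip(row, values)) for row in weights]

values = [1.0, 2.0, 3.0]
normal_map = [[0.2, 0.5, 0.3]] * 3                  # ordinary self-attention map
identity_map = [[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]]                    # PAG's perturbed map

assert attend(identity_map, values) == values       # each token keeps only its own value
```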
"help wanted",
"contributions-welcome",
"advanced"
] | 2024-07-03T18:17:32Z | 2024-08-30T11:09:04Z | 4 | yiyixuxu |
huggingface/diffusers | 8,780 | Model and input data type is not same | **Is your feature request related to a problem? Please describe.**
Hi, when I trained the SD v1.5 model in fp16 mode using the `examples/text_to_image/train_text_to_image.py` file, I found a mismatch between the UNet model and the input data. Specifically, in this [line](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py#L993), the `unet` model has float32 dtype, but `noisy_latents` has float16 dtype. Although this does not raise an error on CUDA, it does raise an error on my custom device. I wonder how I can change this code to use float16.
**Describe the solution you'd like.**
To avoid getting a wrong model, I would like guidance on the right code to match the model and input dtypes.
| https://github.com/huggingface/diffusers/issues/8780 | open | [
"stale"
] | 2024-07-03T06:57:44Z | 2024-09-14T15:07:36Z | 1 | andyjiang1116 |
huggingface/peft | 1,903 | How to use multiple GPUs | ### System Info
peft=0.11.1
python=3.10
### Who can help?
When I run this script, there is no problem with a single GPU. When I try to run on 2 GPUs, system monitoring shows that the utilization of each GPU is only half. When I try to increase `per_device_train_batch_size` and `gradient_accumulation_steps`, I run out of memory. What should I do?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from datasets import load_dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
logging,
)
from peft import LoraConfig, peft_model, TaskType
from trl import SFTTrainer, SFTConfig
# fix random sequence
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
model_id,
# use_fast=False,
add_eos_token=True,
#trust_remote_code=True,
)
#tokenizer.pad_token = tokenizer.unk_token
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = "right"
# Generate Llama 3 instruction
def generate_supervised_chat(row):
chat = [
{ 'role': 'system',
'content': '你是一位优秀的翻译专家。请把给定的中文文本翻译为日语,只回复翻译后的文本。'},
{ 'role': 'user',
'content': f'''请把下面的中文文本翻译为日语文本。
中文文本: {row["Ch"]}''' },
{ 'role': 'assistant',
'content': f'''此文本翻译后的结果如下。
日语翻译文本: {row["Ja"]}
以上。'''},
]
instruction = tokenizer.apply_chat_template(chat, tokenize=False)
# instruction = instruction + "<|end_of_text|>"
return instruction
def add_text(row):
row['text'] = generate_supervised_chat(row)
return row
# load dataset
jjs_dataset_dir = "wccjc-dataset"
dataset = load_dataset(
jjs_dataset_dir,
data_files={'train': 'train.tsv', 'test': 'test.tsv', 'valid': 'valid.tsv'},
sep='\t',
names=['Ch', 'Ja']
)
dataset = dataset["train"]
dataset = dataset.map(add_text)
print(dataset)
print(dataset[0]["text"])
# Quantization Config
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16, # or float16
bnb_4bit_use_double_quant=True,
)
import datetime
# Load pretrained model
now = datetime.datetime.now()
print('Loading base model:', model_id, now)
print('Train epochs:', n_epochs)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=bnb_config,
device_map="auto", #{"": 0},
)
now = datetime.datetime.now()
print('Loading ended', now)
model.config.use_cache = False
model.config.pretraining_tp = 1
# LoRA Config
lora_config = LoraConfig(
r=8,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type=TaskType.CAUSAL_LM, # "CAUSUAL_LM",
target_modules=["q_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "k_proj", "v_proj"],
)
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
print("per_device_train_batch_size:", per_device_train_batch_size)
print("gradient_accumulation_steps:", gradient_accumulation_steps)
# Training arguments
sft_config = SFTConfig(
output_dir="./train_logs",
fp16=True,
seed=42,
# max_steps=13200, # 300,
num_train_epochs=n_epochs,
per_device_train_batch_size=per_device_train_batch_size, #4,
gradient_accumulation_steps=gradient_accumulation_steps, # 1,
optim="paged_adamw_32bit",
learning_rate=2e-4,
lr_scheduler_type="cosine",
max_grad_norm=0.3,
warmup_ratio=0.03,
weight_decay=0.001,
save_steps=1000, #25,
logging_steps=25,
group_by_length=True,
report_to="tensorboard",
max_seq_length=512, #None
dataset_text_field="text",
)
# SFT arguments
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=dataset,
peft_config=lora_config,
# args=training_arguments,
args=sft_config,
packing=False,
)
```
### Expected behavior
run 2 GPUs | https://github.com/huggingface/peft/issues/1903 | closed | [] | 2024-07-03T02:25:36Z | 2024-08-11T15:03:29Z | null | Lihwnlp |
huggingface/text-embeddings-inference | 320 | how to deploy bge-reranker-v2-m3 on Text-embeddings-inference | https://github.com/huggingface/text-embeddings-inference/issues/320 | closed | [] | 2024-07-02T15:18:48Z | 2024-07-08T10:20:05Z | null | kennard520 | |
huggingface/text-embeddings-inference | 318 | How to deploy bge-reranker-v2-m3 for multiple threads? | https://github.com/huggingface/text-embeddings-inference/issues/318 | closed | [] | 2024-07-02T14:56:33Z | 2024-07-08T10:20:01Z | null | kennard520 | |
huggingface/diffusers | 8,771 | Removing LoRAAttnProcessor causes many dependencies to fail | ### Describe the bug
https://github.com/huggingface/diffusers/pull/8623 removed the obsolete `LoRAAttnProcessor`, which in principle is a good thing, but it was done without considering where that feature is currently in use, so it breaks many (and I mean many) community pipelines.
It also breaks some core libraries such as Hugging Face's own <https://github.com/huggingface/optimum> library, which is used to export models to ONNX and to enable the Olive backend.
The suggestion is to add a dummy `LoRAAttnProcessor` class so the import becomes a no-op for packages that still reference it.
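A minimal sketch of such a shim (hypothetical, not the actual diffusers code) — the import keeps working, and instantiation warns and does nothing:

```python
import warnings

class LoRAAttnProcessor:
    """No-op placeholder kept for backward compatibility; LoRA handling has
    moved elsewhere (the PEFT integration), so this class intentionally does
    nothing."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "LoRAAttnProcessor is deprecated and is now a no-op.",
            FutureWarning,
        )

proc = LoRAAttnProcessor(hidden_size=320)   # old call sites still import and construct it
assert isinstance(proc, LoRAAttnProcessor)
```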
### Reproduction
N/A
### Logs
```shell
> Failed to import optimum.onnxruntime.modeling_diffusion because of the following error (look up to see its traceback):
> Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
> cannot import name 'LoRAAttnProcessor' from 'diffusers.models.attention_processor' (/home/vlado/dev/sdnext/venv/lib/python3.12/site-packages/diffusers/models/attention_processor.py)
```
### System Info
diffusers==0.30.0.dev0
### Who can help?
@yiyixuxu @sayakpaul @DN6 | https://github.com/huggingface/diffusers/issues/8771 | closed | [
"bug"
] | 2024-07-02T13:11:33Z | 2024-07-03T16:37:08Z | 1 | vladmandic |
huggingface/candle | 2,307 | How to get all layers attentions? | I only see that candle returns last_hidden_state, but not all_hidden_states and attentions. I want to get attentions. Can I submit a PR to do this? I originally wanted to define the Model myself, but I found that all its methods are private | https://github.com/huggingface/candle/issues/2307 | open | [] | 2024-07-02T02:16:52Z | 2024-07-02T02:16:52Z | null | kitty-eu-org |
huggingface/diffusers | 8,760 | Clarification Needed on Hardcoded Value in Conditional Statement in LeditPP | Hello @manuelbrack,
I was reviewing the source code and came across a line that seems to have a hardcoded value in a conditional statement. The line in question is:
https://github.com/huggingface/diffusers/blob/0bae6e447cba0459456c4f7e7e87d7db141d3235/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L1053
I understand that this condition decides whether cross_attention_mask, intersect_mask, or noise_mask is going to be used in the diffusion step, but any clarification on this condition regarding the following questions would be appreciated:
- What is the significance of the value 800?
- Is this value based on empirical data, theoretical calculations, or an arbitrary choice?
- Are there specific scenarios or conditions under which this threshold was determined?
- Would it be possible to include a comment or documentation explaining this choice for future reference?
Thank you for your help! | https://github.com/huggingface/diffusers/issues/8760 | open | [
"stale"
] | 2024-07-01T20:12:20Z | 2024-12-13T15:05:35Z | 3 | ardofski |
huggingface/diffusers | 8,748 | SD3 cannot finetune a better model (hand and face deformation)? | ### Describe the bug
I want to finetune SD3 to improve its human generation quality with a high-quality 3-million-image human dataset (which has proven useful on SDXL and other models). But hand and face deformation doesn't improve much after two days of training.
I am using [train](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_sd3.py) script
What I have been done so far:
1. regular training on the 3 million images with batch size 2×24 (V100) for 2 epochs, lr 5e-6, AdamW optimizer
2. Prodigy optimizer training with the same settings
3. Add q,k RMS norm to each attention layer
4. only train several blocks
All of my training runs give nearly the same deformation results, where the hands never look like normal human hands.
Could you provide more experiments or guidance about SD3 training? There seems to be no easy way to adapt SD3 for human generation.
### Reproduction
Has described in bug part
### Logs
_No response_
### System Info
24 V100 GPUs, batch size 2 per card, 3 million human images with aesthetic score > 4.5
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/8748 | closed | [
"bug"
] | 2024-07-01T07:21:19Z | 2024-07-17T06:01:31Z | 4 | KaiWU5 |
huggingface/transformers.js | 833 | convert.py has errors when i use yolov9 | ### Question
your repo
https://huggingface.co/Xenova/gelan-c
is really good and helpful for me,
but I need to use the gelan-t / gelan-s variants because of mobile phone deployment.
When I use convert.py to convert them to ONNX, errors happen:
The checkpoint you are trying to load has model type `yolov9` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date | https://github.com/huggingface/transformers.js/issues/833 | open | [
"question"
] | 2024-07-01T03:51:53Z | 2024-07-18T07:04:10Z | null | jifeng632 |
huggingface/transformers | 31,722 | how to generate router_logits in moe models using model.generate()? | ### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <yes>
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "/localssd/swlu/Qwen1.5-MoE-A2.7B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("/localssd/swlu/Qwen1.5-MoE-A2.7B-Chat")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    return_dict_in_generate=True,
    output_router_logits=True
)
print("outputs:", generated_ids.router_logits)
```
### Expected behavior
I want to get router_logits of moe models using model.generate() with the code above.
But got:
AttributeError: 'GenerateDecoderOnlyOutput' object has no attribute 'router_logits' | https://github.com/huggingface/transformers/issues/31722 | closed | [
"Generation"
] | 2024-07-01T03:48:09Z | 2024-09-13T08:07:40Z | null | Jimmy-Lu |
huggingface/transformers.js | 832 | How to load version 3 from CDN? | ### Question
The [README.md file on the v3 branch](https://github.com/xenova/transformers.js/tree/v3?tab=readme-ov-file#installation) has an HTML snippet to import Transformers.js version 3 from a CDN.
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0';
</script>
```
That URL is unresolved by the CDN.
Is version 3 available on any CDN? If so what is the URL? If not is there an alternative to import from browser?
| https://github.com/huggingface/transformers.js/issues/832 | closed | [
"question"
] | 2024-06-30T23:39:08Z | 2024-10-10T12:23:41Z | null | geoffroy-noel-ddh |
huggingface/transformers | 31,717 | how to remove kv cache? | ### Feature request
When I use the generate() function of a language model for inference, the kv-cache is also stored in the GPU memory. Is there any way to clear this kv-cache before continuing to call generate()?
### Motivation
I have a lot of text to process, so I use a for loop to call generate(). To avoid OOM, I need to clear the kv-cache before the end of each loop iteration.
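One common pattern (hedged — whether it solves a given OOM depends on what is actually holding the memory) is to drop every reference to the previous iteration's outputs, including any returned `past_key_values`, before the next `generate()` call. A framework-free sketch of the idea; with PyTorch you would additionally call `torch.cuda.empty_cache()` after `gc.collect()` to return the freed blocks to the driver:

```python
import gc
import weakref

class FakeOutput:                      # stand-in for a generate() result holding the kv cache
    pass

for _ in range(3):                     # stand-in for the loop over texts
    out = FakeOutput()
    ref = weakref.ref(out)             # lets us observe when the object is gone
    # ... decode / use `out` here ...
    del out                            # drop the reference before the next iteration
    gc.collect()                       # collect cycles so the cache tensors become freeable
    assert ref() is None               # the memory is reclaimable again
```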
### Your contribution
none | https://github.com/huggingface/transformers/issues/31717 | closed | [
"Feature request",
"Generation",
"Cache"
] | 2024-06-30T12:09:48Z | 2024-11-05T01:34:42Z | null | TuuSiwei |
huggingface/accelerate | 2,904 | How to merge Qlora FSDP weights with an LLM and save model. | https://github.com/huggingface/accelerate/issues/2904 | closed | [] | 2024-06-30T07:00:50Z | 2024-07-01T14:20:53Z | null | Minami-su | |
huggingface/transformers.js | 830 | Error while using the library in nextjs (app based route) | ### Question
Hello
I was going through the issues section to find a solution for the issue I am facing. I tried some of the solutions provided by xenova, but it seems I am getting some wasm fallback error and I have no idea what's happening. I suspect it's webpack-related, but I wanted clarity.
The error I see is like this while running `npm run dev`
```
✓ Compiled /api/openai in 1500ms (3656 modules)
TypeError: Cannot read properties of undefined (reading 'create')
at constructSession (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:436:39)
at async Promise.all (index 1)
at async BertModel.from_pretrained (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:1007:20)
at async AutoModel.from_pretrained (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:5026:20)
at async Promise.all (index 1)
at async loadItems (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/pipelines.js:2838:5)
at async pipeline (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/pipelines.js:2790:21)
at async HuggingFaceEmbedding.getExtractor (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/HuggingFaceEmbedding.js:37:30)
at async HuggingFaceEmbedding.getTextEmbedding (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/HuggingFaceEmbedding.js:44:27)
at async HuggingFaceEmbedding.getTextEmbeddings (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:30:31)
at async batchEmbeddings (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:61:32)
at async HuggingFaceEmbedding.getTextEmbeddingsBatch (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:40:16)
at async HuggingFaceEmbedding.transform (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:44:28)
at async VectorStoreIndex.getNodeEmbeddingResults (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:474:17)
at async VectorStoreIndex.insertNodes (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:571:17)
at async VectorStoreIndex.buildIndexFromNodes (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:486:9)
at async VectorStoreIndex.init (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:436:13)
at async VectorStoreIndex.fromDocuments (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:514:16)
at async getOpenAIModelRequest (webpack-internal:///(rsc)/./src/actions/openai.ts:62:23)
at async POST (webpack-internal:///(rsc)/./src/app/api/openai/route.ts:11:21)
at async /Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:63809
at async eU.execute (/Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:53964)
at async eU.handle (/Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:65062)
at async doRender (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1333:42)
at async cacheEntry.responseCache.get.routeKind (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1555:28)
at async DevServer.renderToResponseWithComponentsImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1463:28)
at async DevServer.renderPageComponent (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1856:24)
at async DevServer.renderToResponseImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1894:32)
at async DevServer.pipeImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:911:25)
at async NextNodeServer.handleCatchallRenderRequest (/opt/homebrew/lib/node_modules/next/dist/server/next-server.js:271:17)
at async DevServer.handleRequestImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:807:17)
at async /opt/homebrew/lib/node_modules/next/dist/server/dev/next-dev-server.js:331:20
at async Span.traceAsyncFn (/opt/homebrew/lib/node_modules/next/dist/trace/trace.js:151:20)
at async DevServer.handleRequest (/opt/homebrew/lib/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
at async invokeRender (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:163:21)
at async handleRequest (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:342:24)
at async requestHandlerImpl (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:366:13)
at async Server.requestListener (/opt/homebrew/lib/node_modules/next/dist/server/lib/start | https://github.com/huggingface/transformers.js/issues/830 | closed | [
"question"
] | 2024-06-29T15:00:09Z | 2025-02-10T02:00:25Z | null | rr-jino-jose |
huggingface/candle | 2,294 | How to get raw tensor data? | I am trying to implement an adaptive avg pool in candle. However, I guess my implementation will require an API to get the raw data/storage (stored in a plain/flattened array format).
Wondering if there is such an API for that?
Thanks! | https://github.com/huggingface/candle/issues/2294 | open | [] | 2024-06-28T19:19:45Z | 2024-06-28T21:51:57Z | null | WenheLI |
huggingface/diffusers | 8,730 | Implementation of DDIM, why taking Xt and (t-1) as input? | ### Describe the bug
I have tried to run inference on a diffusion model with DDIM, with num_inference_steps = 10 and a maximum of 1000 training timesteps.
I have printed the t in the for-loop, and the result is 901, 801, 801, 701, 601, 501, 401, 301, 201, 101, 1. It's really weird to me why 801 appears two times, and why we start from t=901 instead of t=1000. If we use t=901, we are trying to input x_1000 (the pure noise) and t_901 to the noise predictor, right? It seems weird because when we train the diffusion model, we feed (x_t, t). I mean, the timestep t should correspond to the version of images x_t.
I think the implementation may be right and some of my thoughts are wrong. Please kindly tell me the reason. Thank you!!!
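For what it's worth, the printed values (apart from the duplicated 801, which presumably comes from the surrounding loop rather than the scheduler itself) match `DDIMScheduler`'s "leading" timestep spacing with `steps_offset=1`, which is why the sequence starts at 901 rather than 1000. A pure-Python sketch of that spacing (my reading of the scheduler, not its literal code):

```python
num_train_timesteps = 1000
num_inference_steps = 10
steps_offset = 1

step_ratio = num_train_timesteps // num_inference_steps          # 100
timesteps = [i * step_ratio + steps_offset                       # 0, 100, ..., 900 (+1)
             for i in range(num_inference_steps - 1, -1, -1)]    # reversed for denoising

assert timesteps == [901, 801, 701, 601, 501, 401, 301, 201, 101, 1]
```

So feeding (x_1000, t=901) is a consequence of how the discrete training grid [0, 999] is subsampled, not a mismatch between the image version and its timestep.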
### Reproduction
Just add a print in the forward for loop in DDIMPipeline.
### Logs
_No response_
### System Info
I believe this problem is not relevant to the system info.
### Who can help?
@yiyixuxu | https://github.com/huggingface/diffusers/issues/8730 | closed | [
"bug"
] | 2024-06-28T18:45:55Z | 2024-07-01T17:24:49Z | 1 | EPIC-Lab-sjtu |
huggingface/safetensors | 490 | How to save model checkpoint from a distributed training from multiple nodes? | Hello,
When I use Accelerate and DeepSpeed ZeRO-3 to train the model on one node with 8 GPUs, the following code smoothly saves the model checkpoint:
```
ds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded
if self.accelerator.is_main_process:
save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")
```
However, when I move the code to two nodes with 8 GPUs each, this code does not work.
The error is like:
```Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.```
Then I thought that maybe I should not save only on the main process, since there are two nodes, so I called local rank 0 to save instead:
```
ds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded
if self.accelerator.local_process_index == 0:
save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")
```
And the error becomes:
```
save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")
File "/opt/conda/lib/python3.10/site-packages/safetensors/torch.py", line 284, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "/opt/conda/lib/python3.10/site-packages/safetensors/torch.py", line 457, in _flatten
raise ValueError(f"Expected a dict of [str, torch.Tensor] but received {type(tensors)}")
ValueError: Expected a dict of [str, torch.Tensor] but received <class 'NoneType'>
```
In this case, what is the right way to save with safetensors?
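One likely explanation: DeepSpeed's `_zero3_consolidated_16bit_state_dict()` is a collective call that must run on every rank but materializes the full dict only on global rank 0 — all other ranks (including local rank 0 on the second node) get `None`, which matches the `NoneType` error above. A framework-free sketch of the guard (with real code, keep the collective call on all ranks and save only where the dict is not `None`):

```python
def save_if_owner(state_dict, save_fn):
    """Save only on the rank that actually received the consolidated dict."""
    if state_dict is None:           # every rank except global rank 0
        return False
    save_fn(state_dict)              # e.g. safetensors.torch.save_file(...)
    return True

assert save_if_owner(None, lambda d: None) is False         # non-zero ranks: no-op
assert save_if_owner({"w": [1.0]}, lambda d: None) is True  # global rank 0 saves
```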
"Stale"
] | 2024-06-28T04:59:45Z | 2024-07-31T11:46:06Z | null | Emerald01 |
huggingface/diffusers | 8,728 | Using `torchsde.BrownianInterval` instead of `torchsde.BrownianTree` in class `BatchedBrownianTree` | **Is your feature request related to a problem? Please describe.**
When I was doing some optimization for my pipeline, I found that the `BrownianTree` somehow took a bit more time.
**Describe the solution you'd like.**
I dug further into the torchsde documentation and found that it encourages using `BrownianInterval` to get the best benefit from the underlying structure. `BrownianTree` is actually just an abstraction layer over `BrownianInterval`, and as we all know, Python function calls take time!
Code:
```
#diffusers/src/diffusers/schedulers/scheduling_dpmsolver_sde.py:41
self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
# Modified
self.trees = [torchsde.BrownianInterval(t0, t1, size=w0.shape, dtype=w0.dtype, device=w0.device, cache_size=None, entropy=s, **kwargs) for s in seed]
```
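The "Python function calls take time" point is easy to check in isolation with a toy indirection layer (this sketch is illustrative only and unrelated to torchsde's actual internals):

```python
import timeit

def direct(x):
    return x + 1

def wrapper(x):          # one extra frame of indirection, like BrownianTree over BrownianInterval
    return direct(x)

t_direct = timeit.timeit(lambda: direct(1), number=100_000)
t_wrapped = timeit.timeit(lambda: wrapper(1), number=100_000)
assert t_direct > 0 and t_wrapped > 0   # the wrapped call pays for an extra call frame per step
```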
**Additional context.**
[torchsde doc link](https://github.com/google-research/torchsde/blob/master/DOCUMENTATION.md) | https://github.com/huggingface/diffusers/issues/8728 | closed | [] | 2024-06-28T04:33:55Z | 2024-09-12T08:46:54Z | 5 | dianyo |
huggingface/transformers.js | 826 | Support for GLiNER models? | ### Question
Is there a reason why models from the GLiNER family can't be supported?
I see they use a specialized library; does it take a lot of code to make them work?
"question"
] | 2024-06-28T01:54:37Z | 2024-10-04T07:59:16Z | null | Madd0g |
huggingface/diffusers | 8,721 | how to unload a pipeline | how to unload a pipeline and release the gpu memory | https://github.com/huggingface/diffusers/issues/8721 | closed | [] | 2024-06-27T10:04:39Z | 2024-07-02T14:40:39Z | null | nono909090 |
huggingface/transformers.js | 825 | Are there any examples on how to use paligemma model with transformer.js | ### Question
First of all, thanks for this amazing library!
So my question is: I happened to see this model available on transformers.js:
https://huggingface.co/Xenova/paligemma-3b-mix-224
But unfortunately I can't find any example of how to run the `image-text-to-text` pipeline. Are there any resources you could kindly point me to? Thanks in advance! 🙏🏻
"question"
] | 2024-06-27T09:49:22Z | 2024-06-29T02:39:27Z | null | alextanhongpin |
huggingface/lerobot | 294 | After training using the LeRobot framework, how to infer the trained policy directly in a real environment (e.g. ALOHA code)? I have not found a solution yet | ### System Info
```Shell
os ubuntu20.04,
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
not yet
### Expected behavior
How can I directly evaluate the policy trained by LeRobot on ALOHA?
"question",
"policies",
"robots",
"stale"
] | 2024-06-27T03:16:19Z | 2025-10-23T02:29:25Z | null | cong1024 |
huggingface/chat-ui | 1,312 | [v0.9.1] Error: "Cannot resolve directory $env" | ## Issue
For all client-side components, I get this:
```
"Cannot resolve directory $env"
```
<img width="589" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/26fa2eef-dbff-44f6-bb86-7700387abdf2">
<img width="837" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/e3668b40-396b-4244-9c78-4aaf805220ae">
This issue prevents a Docker run, because PUBLIC_ASSETS is not found.
@nsarrazin Please help.
| https://github.com/huggingface/chat-ui/issues/1312 | open | [
"support"
] | 2024-06-26T13:24:42Z | 2024-06-26T15:14:48Z | 2 | adhishthite |
huggingface/chat-ui | 1,311 | 400 (no body) trying to reach openai compatible server | Hi everyone,
I have the following setup (containers are on the same device):
- Container 1: Nvidia NIM (openai-compatible) with Llama3 8B Instruct, port 8000;
- Container 2: chat-ui, port 3000.
This is the content of the `.env` file:
```
MONGODB_URL=mongodb://localhost:27017
MONGODB_DB_NAME=chat-ui
MODELS=`[{"name":"Llama3-8B-Instruct","id":"Llama3-8B-Instruct","endpoints":[{"type":"openai","baseURL":"http://192.168.120.240:8000/v1","extraBody":{"repetition_penalty":1.1}}]}]`
LOG_LEVEL=debug
ALLOW_INSECURE_COOKIES=true
```
And this is the error I get when I try to run inference from browser:
```
{"level":50,"time":1719403859826,"pid":31,"hostname":"592d634d7447","err":{"type":"BadRequestError","message":"400 status code (no body)","stack":"Error: 400 status code (no body)\n at APIError.generate (file:///app/build/server/chunks/index-3aabce5f.js:4400:20)\n at OpenAI.makeStatusError (file:///app/build/server/chunks/index-3aabce5f.js:5282:25)\n at OpenAI.makeRequest (file:///app/build/server/chunks/index-3aabce5f.js:5325:30)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async file:///app/build/server/chunks/models-e8725572.js:98846:36\n at async generateFromDefaultEndpoint (file:///app/build/server/chunks/index3-2417d430.js:213:23)\n at async generateTitle (file:///app/build/server/chunks/_server.ts-2c825ade.js:213:10)\n at async generateTitleForConversation (file:///app/build/server/chunks/_server.ts-2c825ade.js:177:19)","status":400,"headers":{"content-length":"1980","content-type":"application/json","date":"Wed, 26 Jun 2024 12:10:59 GMT","server":"uvicorn"}},"msg":"400 status code (no body)"}
BadRequestError: 400 status code (no body)
at APIError.generate (file:///app/build/server/chunks/index-3aabce5f.js:4400:20)
at OpenAI.makeStatusError (file:///app/build/server/chunks/index-3aabce5f.js:5282:25)
at OpenAI.makeRequest (file:///app/build/server/chunks/index-3aabce5f.js:5325:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///app/build/server/chunks/models-e8725572.js:98846:36
at async generate (file:///app/build/server/chunks/_server.ts-2c825ade.js:426:30)
at async textGenerationWithoutTitle (file:///app/build/server/chunks/_server.ts-2c825ade.js:487:3) {
status: 400,
headers: {
'content-length': '543',
'content-type': 'application/json',
date: 'Wed, 26 Jun 2024 12:10:59 GMT',
server: 'uvicorn'
},
request_id: undefined,
error: undefined,
code: undefined,
param: undefined,
type: undefined
}
```
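One thing worth ruling out (a guess, not a confirmed cause) is whether the `extraBody` fields are rejected by the NIM backend. Chat-ui merges `extraBody` into the outgoing request payload, so the server receives the extra keys; a minimal sketch of that merge (assumed behavior, field names taken from the config above):

```python
import json

# Sketch of how extraBody is merged into the outgoing payload
# (assumed behavior; field names taken from the config above).
base_payload = {
    "model": "Llama3-8B-Instruct",
    "messages": [{"role": "user", "content": "hi"}],
}
extra_body = {"repetition_penalty": 1.1}

payload = {**base_payload, **extra_body}
print(json.dumps(payload, indent=2))
```

If POSTing this payload to `/v1/chat/completions` with curl reproduces the 400 but dropping `repetition_penalty` does not, the extra field is the culprit.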
Is there something wrong with the .env file, or is Nvidia NIM simply not supported for some strange reason? | https://github.com/huggingface/chat-ui/issues/1311 | open | [
"support"
] | 2024-06-26T12:34:44Z | 2024-07-22T13:03:18Z | 2 | edesalve |
huggingface/diffusers | 8,710 | Add PAG support to SD1.5 | We recently integrated PAG into diffusers! See [this PR](https://github.com/huggingface/diffusers/pull/7944), where we added PAG to SDXL.
We also want to add PAG support to the SD1.5 pipelines! We will need:
- [x] StableDiffusionPAGPipeline (assigned to @shauray8, PR https://github.com/huggingface/diffusers/pull/8725)
- [ ] StableDiffusionPAGImg2ImgPipeline https://github.com/huggingface/diffusers/pull/9463
- [ ] StableDiffusionPAGInpaintPipeline
- [ ] StableDiffusionControlNetPAGInpaintPipeline (https://github.com/huggingface/diffusers/pull/8875)
- [x] StableDiffusionControlNetPAGPipeline (assigned to @tuanh123789 )
- [ ] StableDiffusionControlNetPAGImg2ImgPipeline (assigned to @Bhavay-2001 https://github.com/huggingface/diffusers/pull/8864)
1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
2. you can use the implementation of the SDXL PAG pipelines as a reference (see this PR: https://github.com/huggingface/diffusers/pull/7944; you can find all the SDXL PAG pipelines here: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
3. you need to add AutoPipeline so that you can use this API to create it
```python
AutoPipelineForImage2Image.from_pretrained(repo_id, controlnet=controlnet, enable_pag=True ...)
```
4. tests and docs
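As background for implementers, the guidance combination PAG layers on top of classifier-free guidance can be sketched in a few lines (a simplified scalar sketch; the names are illustrative and not the diffusers API):

```python
def pag_guidance(noise_uncond, noise_cond, noise_perturbed,
                 guidance_scale=7.5, pag_scale=3.0):
    """Combine classifier-free guidance with a PAG term (simplified sketch)."""
    cfg = noise_uncond + guidance_scale * (noise_cond - noise_uncond)
    # PAG steers away from the prediction made with perturbed (identity) self-attention
    return cfg + pag_scale * (noise_cond - noise_perturbed)

# Scalar stand-ins for what would be noise-prediction tensors
print(pag_guidance(0.0, 1.0, 0.5))
```

In the real pipelines these are tensors and the perturbed prediction comes from an extra forward pass with the self-attention map replaced by the identity.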
If you are interested in working on this, let me know which pipeline(s) you want to work on :) | https://github.com/huggingface/diffusers/issues/8710 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-26T08:23:17Z | 2024-10-09T20:40:59Z | 17 | yiyixuxu |
huggingface/chat-ui | 1,309 | "404 Resource Not Found" when using Azure OpenAI model endpoint | I run `chat-ui` with the `chat-ui-db` docker image. I would like to connect it to my Azure OpenAI API endpoint.
I have set up the `env.local` file as stated in your docs and bound it to the docker container:
```bash
MODELS=`[{
"id": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview",
"displayName": "gpt-4-1106-preview",
"parameters": {
"temperature": 0.5,
    "max_new_tokens": 4096
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://{resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions",
"defaultHeaders": {
"api-key": "{api-key}"
},
"defaultQuery": {
"api-version": "{api-version}"
}
}
]
}]`
```
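One assumption worth checking (not confirmed from the logs): the OpenAI client used by chat-ui appends the chat completions route to `baseURL` itself, so a `baseURL` that already ends in `/chat/completions` would produce a doubled path and hence a 404. A sketch of the resulting URL (placeholder resource and deployment names, since the real values are elided above):

```python
# Placeholder resource/deployment names; the real values are elided in the issue.
base_url = "https://my-resource.openai.azure.com/openai/deployments/my-deployment/chat/completions"

# The client joins baseURL with the chat completions route (assumed behavior):
final_url = base_url.rstrip("/") + "/chat/completions"
print(final_url)
```

If that doubled path matches what the server sees, dropping the trailing `/chat/completions` from `baseURL` is the first fix to try.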
When sending a message in `chat-ui`, I get a message `404 Resource Not Found` on the top right of the interface.
When I manually send an HTTP request to the Azure OpenAI API endpoint with the same parameters, I get a valid response.
How can I solve this? | https://github.com/huggingface/chat-ui/issues/1309 | open | [
"support"
] | 2024-06-26T07:16:54Z | 2024-06-26T18:53:51Z | 2 | gqoew |
huggingface/chat-ui | 1,308 | Warning: To load an ES module in Azure environment | Hi Team,
We are currently facing issues deploying our Chat UI solution in Azure Web App. The error encountered in the console log is as follows:
```
npm http fetch GET 200 https://registry.npmjs.org/npm 141ms
(node:124) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(Use `node --trace-warnings ...` to show where the warning was created)
/home/site/wwwroot/node_modules/.bin/vite:2
import { performance } from 'node:perf_hooks'
^^^^^^
SyntaxError: Cannot use import statement outside a module
at internalCompileFunction (node:internal/vm:77:18)
at wrapSafe (node:internal/modules/cjs/loader:1288:20)
at Module._compile (node:internal/modules/cjs/loader:1340:27)
at Module._extensions..js (node:internal/modules/cjs/loader:1435:10)
at Module.load (node:internal/modules/cjs/loader:1207:32)
at Module._load (node:internal/modules/cjs/loader:1023:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12)
at node:internal/main/run_main_module:28:49
Node.js v20.11.1
npm notice
npm notice New minor version of npm available! 10.5.0 -> 10.8.1
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.1
npm notice Run npm install -g npm@10.8.1 to update!
npm notice
```
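For what it's worth, the warning itself names one fix: declaring the package as an ES module. A minimal sketch of the relevant `package.json` fields (`"type"` is the change the warning asks for; the script entries are illustrative):

```json
{
  "type": "module",
  "scripts": {
    "build": "vite build",
    "preview": "vite preview"
  }
}
```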
It appears to be a Node.js issue, and I believe there might be an error in my package.json configuration. I have tried using both Node.js 18 and 20 without success.
Could you please provide me with the correct configuration for package.json to resolve this issue?
| https://github.com/huggingface/chat-ui/issues/1308 | open | [
"support"
] | 2024-06-26T06:04:45Z | 2024-06-27T09:07:35Z | 3 | pronitagrawalvera |
huggingface/transformers.js | 823 | How to export q4f16.onnx | ### Question
Thanks for providing such a great project, but I have a problem converting the model.
For example:
```
model_q4f16.onnx
```
What command is used to create and export such a q4/f16.onnx model?
Can you give me more tips or help? Thank you | https://github.com/huggingface/transformers.js/issues/823 | closed | [
"question"
] | 2024-06-26T05:36:47Z | 2024-06-26T07:46:57Z | null | juntaosun |