repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 ⌀ | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 33,232 | How to use huggingface for training: google-t5/t5-base | ### Feature request
How to use huggingface for training:
https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation
# What is the format and how do I write it?
def batch_collator(data):
    print(data)  # what does each element of `data` look like here?
    return {
        'pixel_values': torch.stack([x['pixel_values'] for x in data]),
        'labels': torch.tensor([x['labels'] for x in data])
    }
trainer = Trainer(
model=model,
args=training_args,
data_collator=batch_collator,  # how should this be written?
train_dataset=dataset['train'],
)
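For what it's worth, here is a minimal sketch of what a seq2seq collator for this Trainer setup could look like, written in pure Python. The field names `input_ids`/`labels` and the pad values are assumptions; in practice `transformers.DataCollatorForSeq2Seq` already does this for T5-style models and returns tensors directly:

```python
def pad_batch(seqs, pad_value):
    """Right-pad variable-length sequences to the longest one in the batch."""
    max_len = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]

def batch_collator(data, pad_token_id=0, label_pad=-100):
    # -100 is the label index that PyTorch's cross-entropy loss ignores
    return {
        "input_ids": pad_batch([x["input_ids"] for x in data], pad_token_id),
        "attention_mask": pad_batch([[1] * len(x["input_ids"]) for x in data], 0),
        "labels": pad_batch([x["labels"] for x in data], label_pad),
    }
```

Each padded list would still need to be wrapped in `torch.tensor(...)` before being returned to the `Trainer`.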
### Motivation
None
### Your contribution
None
I have already tried this and it works: https://www.kaggle.com/code/weililong/google-t5-t5-base
I don't know whether there are any pitfalls. | https://github.com/huggingface/transformers/issues/33232 | open | [
"Usage",
"Feature request"
] | 2024-08-31T07:41:18Z | 2024-09-09T08:45:50Z | null | gg22mm |
huggingface/transformers | 33,228 | How to obtain batch index of validation dataset? | Hi,
I wanted to know how we would fetch the batch id/index of the eval dataset in `preprocess_logits_for_metrics()`?
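One possible workaround, sketched below under the assumption that the eval dataloader is not shuffled: `preprocess_logits_for_metrics` only receives `(logits, labels)`, so the batch index has to be tracked externally, for example with a closure-held counter that you reset before each `trainer.evaluate()` call:

```python
def make_preprocess_fn():
    """Closure-held counter; reset state["batch_idx"] = -1 before each evaluate()."""
    state = {"batch_idx": -1}

    def preprocess_logits_for_metrics(logits, labels):
        state["batch_idx"] += 1
        # with the default (sequential) eval sampler, batch i covers rows
        # [i * eval_batch_size, (i + 1) * eval_batch_size) of the eval dataset
        return logits  # often logits.argmax(-1) in practice, to save memory

    return preprocess_logits_for_metrics, state
```

You would pass the returned function as `preprocess_logits_for_metrics=` to the `Trainer` and read `state["batch_idx"]` wherever the batch identity is needed.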
Thanks in advance! | https://github.com/huggingface/transformers/issues/33228 | closed | [
"Usage"
] | 2024-08-31T00:11:13Z | 2024-10-13T08:04:26Z | null | SoumiDas |
huggingface/transformers | 33,210 | The model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code that achieves translation through the encoder and decoder? Thank you | ### Feature request
Hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code that achieves translation through the encoder and decoder? Thank you.
### Motivation
Hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code that achieves translation through the encoder and decoder? Thank you.
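A hedged sketch of what such two-stage inference could look like with `onnxruntime`. The input/output tensor names (`input_ids`, `attention_mask`, `encoder_hidden_states`) follow the usual Optimum export convention and should be verified with `session.get_inputs()` for this particular model; the tokenizer repo and the decoder start tokens are likewise assumptions:

```python
def pick_next(last_logits, eos_id):
    """Greedy choice over the final-position logits (a plain list of floats)."""
    next_id = max(range(len(last_logits)), key=lambda i: last_logits[i])
    return next_id, next_id == eos_id


def translate(encoder_path, decoder_path, text, tgt_lang="fra_Latn", max_len=64):
    """Two-stage greedy decoding with separate encoder/decoder ONNX sessions."""
    import numpy as np           # heavy deps imported lazily so the sketch
    import onnxruntime as ort    # stays importable without them
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
    enc = ort.InferenceSession(encoder_path)
    dec = ort.InferenceSession(decoder_path)

    feed = tok(text, return_tensors="np")
    enc_mask = feed["attention_mask"].astype(np.int64)
    hidden = enc.run(None, {"input_ids": feed["input_ids"].astype(np.int64),
                            "attention_mask": enc_mask})[0]

    # NLLB decoding conventionally starts with </s> then the target-language token
    ids = [tok.eos_token_id, tok.convert_tokens_to_ids(tgt_lang)]
    for _ in range(max_len):
        logits = dec.run(None, {"input_ids": np.array([ids], dtype=np.int64),
                                "encoder_hidden_states": hidden,
                                "encoder_attention_mask": enc_mask})[0]
        next_id, done = pick_next(logits[0, -1].tolist(), tok.eos_token_id)
        if done:
            break
        ids.append(next_id)
    return tok.decode(ids[2:], skip_special_tokens=True)
```

This greedy loop re-runs the decoder on the whole prefix at every step (no KV cache), so it is a correctness sketch rather than a fast implementation.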
### Your contribution
Hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code that achieves translation through the encoder and decoder? Thank you. | https://github.com/huggingface/transformers/issues/33210 | open | [
"Feature request"
] | 2024-08-30T09:33:01Z | 2024-10-22T07:18:15Z | null | pengpengtao |
huggingface/dataset-viewer | 3,054 | Image URL detection | [`is_image_url`](https://github.com/huggingface/dataset-viewer/blob/946b0788fa426007161f2077a70b5ae64b211cf8/libs/libcommon/src/libcommon/utils.py#L131-L134) relies on a filename and extension being present, however, in some cases an image URL does not contain a filename. Example [dataset](https://huggingface.co/datasets/bigdata-pw/SteamScreenshots) and example [URL](https://steamuserimages-a.akamaihd.net/ugc/910172100453203507/062F4787060B2E4E93EFC4631E96183B027A860B/). This could be improved by checking the `content-type` header of the response or checking for strings like "image" in the URL. | https://github.com/huggingface/dataset-viewer/issues/3054 | open | [
"question",
"improvement / optimization",
"P2"
] | 2024-08-29T23:17:55Z | 2025-07-04T09:37:23Z | null | hlky |
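The content-type fallback suggested in the `is_image_url` issue above could be sketched like this (the extension list and parsing are illustrative, not the actual dataset-viewer code):

```python
IMAGE_EXTS = {"jpg", "jpeg", "png", "gif", "webp", "bmp", "tiff", "avif"}

def looks_like_image(url, content_type=None):
    """Extension check first; fall back to an HTTP content-type if provided."""
    ext = url.lower().split("?", 1)[0].rsplit(".", 1)[-1]
    if ext in IMAGE_EXTS:
        return True
    if content_type is not None:
        # e.g. "image/jpeg; charset=binary" -> "image/jpeg"
        return content_type.split(";", 1)[0].strip().lower().startswith("image/")
    return False
```

`content_type` would come from something like `requests.head(url, allow_redirects=True).headers.get("content-type")`, at the cost of one network round-trip per URL.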
huggingface/transformers.js | 911 | Next.js example breaks with v3 | ### Question
Are there steps documented anywhere for running V3 in your app? I'm trying to test it out via these steps:
1. Pointing to the alpha in my `package.json`: `"@huggingface/transformers": "^3.0.0-alpha.10",`
2. `npm i`
3. `cd node_modules/@huggingface/transformers && npm i`
4. copy the [webpack.config.js](https://github.com/xenova/transformers.js/blob/main/webpack.config.js) from the repo into the node_modules/@huggingface/transformers dir.
5. `npm run build` in the node_modules/@huggingface/transformers dir.
I then run my app, and get the following error:
```
ERROR in ../../node_modules/@huggingface/transformers/dist/transformers.js 42256:34-64
Module not found: Error: Can't resolve './' in '/node_modules/@huggingface/transformers/dist'
webpack compiled with 1 error
```
Thanks, I'm excited to test out the latest and greatest! | https://github.com/huggingface/transformers.js/issues/911 | closed | [
"question"
] | 2024-08-29T20:17:03Z | 2025-02-16T12:35:47Z | null | stinoga |
huggingface/diffusers | 9,317 | Finetuning on dataset | dear @thedarkzeno and @patil-suraj
Thank you so much for putting your work out there. I wanted to ask: how would training work on a full dataset, rather than on a single instance image as in train_dreambooth_inpaint? And can I fine-tune models trained with the https://github.com/CompVis/latent-diffusion repository?
Thanks in advance | https://github.com/huggingface/diffusers/issues/9317 | closed | [
"stale"
] | 2024-08-29T12:20:51Z | 2024-10-23T16:10:47Z | 4 | ultiwinter |
huggingface/optimum-quanto | 300 | How to quantize, save and load Stable Diffusion 3 model. | import torch
from optimum.quanto import qint2, qint4, qint8, quantize, freeze
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.bfloat16)
quantize(pipe.text_encoder, weights=qint4)
freeze(pipe.text_encoder)
quantize(pipe.text_encoder_3, weights=qint4)
freeze(pipe.text_encoder_3)
quantize(pipe.transformer, weights=qint8, exclude="proj_out")
freeze(pipe.transformer)
pipe = pipe.to("cuda")
pipe.save_pretrained("/content/drive/MyDrive/quantized_Stable_diffusion_1")
After saving, how can I load this model from this directory and perform text-to-image generation? | https://github.com/huggingface/optimum-quanto/issues/300 | closed | [
"Stale"
] | 2024-08-29T06:24:02Z | 2024-10-06T02:06:30Z | null | jainrahul52 |
huggingface/optimum | 2,002 | Is it possible to infer the model separately through encoder.onnx and decoder.onnx | ### Feature request
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
### Motivation
Is it possible to infer the model separately through encoder.onnx and decoder.onnx
### Your contribution
Is it possible to infer the model separately through encoder.onnx and decoder.onnx | https://github.com/huggingface/optimum/issues/2002 | open | [
"onnx"
] | 2024-08-29T03:26:20Z | 2024-10-08T15:28:59Z | 0 | pengpengtao |
huggingface/diffusers | 9,303 | [Add] VEnhancer - the interpolation and upscaler for CogVideoX-5b | ### Model/Pipeline/Scheduler description
VEnhancer, a generative space-time enhancement framework that can improve the existing T2V results.
https://github.com/Vchitect/VEnhancer
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/diffusers/issues/9303 | open | [
"stale"
] | 2024-08-28T14:43:32Z | 2024-12-11T15:04:32Z | 3 | tin2tin |
huggingface/text-generation-inference | 2,466 | Guide on how to use TensorRT-LLM Backend | ### Feature request
Does any documentation exist, or would it be possible to add documentation, on how to use the TensorRT-LLM backend? #2458 mentions that the TRT-LLM backend exists, and I can see that there's a Dockerfile for TRT-LLM, but I don't see any guides on how to build or use it.
### Motivation
I would like to run TensorRT-LLM models using TGI.
### Your contribution
I'm willing to test any builds/processes/pipelines that are available. | https://github.com/huggingface/text-generation-inference/issues/2466 | open | [] | 2024-08-28T13:24:26Z | 2025-05-18T16:23:14Z | null | michaelthreet |
huggingface/lerobot | 390 | [Feature Request] Add end effector pos field in lerobot dataset? | Aloha style joint space dataset will limit data set to the specific robot. Can we change joint space data or add a field of end effector to cartesian space data base on the robot URDF file?
It may help robotics community build a more generalized policy. | https://github.com/huggingface/lerobot/issues/390 | closed | [
"question",
"dataset",
"robots"
] | 2024-08-28T13:19:15Z | 2024-08-29T09:55:27Z | null | hilookas |
huggingface/datasets | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which expects to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it generates the following
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
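A minimal standalone illustration of the reported behavior (a simplified stand-in, not the actual `ClassLabel` implementation): when `names` is provided, `num_classes` is derived from it, and the repr then omits it:

```python
from dataclasses import dataclass

@dataclass
class MiniClassLabel:
    num_classes: int = None
    names: list = None

    def __post_init__(self):
        if self.names is not None and self.num_classes is None:
            self.num_classes = len(self.names)

    def __repr__(self):
        # mimic the reported behavior: drop num_classes when names is set,
        # since it is then derivable as len(names)
        if self.names is not None:
            return f"MiniClassLabel(names={self.names!r})"
        return f"MiniClassLabel(num_classes={self.num_classes})"
```

Under this reading, the documented example output is stale rather than the code being wrong, although showing `num_classes` again would also be a valid fix.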
I would like to work on this issue if this is something needed 😄
| https://github.com/huggingface/datasets/issues/7129 | closed | [] | 2024-08-28T12:27:48Z | 2024-12-06T11:32:02Z | 0 | sergiopaniego |
huggingface/diffusers | 9,299 | CUDAGRAPHs for Flux position embeddings | @yiyixuxu
Is it possible to refactor the Flux positional embeddings so that we can fully make use of CUDAGRAPHs?
```bash
skipping cudagraphs due to skipping cudagraphs due to cpu device (device_put). Found from :
File "/home/sayak/diffusers/src/diffusers/models/transformers/transformer_flux.py", line 469, in forward
image_rotary_emb = self.pos_embed(ids)
File "/home/sayak/.pyenv/versions/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sayak/diffusers/src/diffusers/models/embeddings.py", line 630, in forward
self.axes_dim[i], pos[:, i], repeat_interleave_real=True, use_real=True, freqs_dtype=freqs_dtype
```
<details>
<summary>Code</summary>
```python
import torch
torch.set_float32_matmul_precision("high")
torch._inductor.conv_1x1_as_mm = True
torch._inductor.coordinate_descent_tuning = True
torch._inductor.epilogue_fusion = False
torch._inductor.coordinate_descent_check_all_directions = True
import diffusers
from platform import python_version
from diffusers import DiffusionPipeline
print(diffusers.__version__)
print(torch.__version__)
print(python_version())
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.transformer.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)
for _ in range(5):
image = pipe(
"Happy bear",
num_inference_steps=5,
guidance_scale=3.5,
max_sequence_length=512,
generator=torch.manual_seed(42),
height=1024,
width=1024,
).images[0]
```
</details>
If we can fully make use of CUDAGRAPHs, `torch.compile()` would be faster. | https://github.com/huggingface/diffusers/issues/9299 | closed | [] | 2024-08-28T11:33:16Z | 2024-08-29T19:37:17Z | 0 | sayakpaul |
huggingface/transformers.js | 906 | Unsupported model type: jais | ### Question
### System Info
macOS, node v20.10, @xenova/transformers 2.17.2
### Environment/Platform
- [ ] Website/web-app
- [ ] Browser extension
- [x] Server-side (e.g., Node.js, Deno, Bun)
- [ ] Desktop app (e.g., Electron)
- [ ] Other (e.g., VSCode extension)
### Description
```
Error: Unsupported model type: jais
at Function.from_pretrained (file:///node_modules/@xenova/transformers/src/models.js:5526:19)
at async Promise.all (index 1)
at loadItems (file:///node_modules/@xenova/transformers/src/pipelines.js:3279:5)
at pipeline (file:///node_modules/@xenova/transformers/src/pipelines.js:3219:21)
at SearchQueryParser.initializeModel (src/search-engine/query-parser/search-query-parser.ts:27:18)
```
### Reproduction
```javascript
import { Logger } from '@nestjs/common';
export class SearchQueryParser {
private tokenizer: any;
private model: any;
private logger: Logger;
private systemPrompt = '';
constructor() {
this.logger = new Logger('query parser');
this.initializeModel();
}
private async initializeModel() {
const { AutoTokenizer, pipeline } = await import('@xenova/transformers');
this.tokenizer = await AutoTokenizer.from_pretrained(
'omarabb315/Query-5KM-no_synonyms_noon_1',
{
progress_callback: (data) => {
this.logger.verbose(
`${data.status} ${data.file || ''} ${data.progress || ''}`,
);
},
},
);
this.model = await pipeline(
'text-generation',
'omarabb315/Query-5KM-no_synonyms_noon_1',
);
}
async parse(query: string): Promise<any> {
if (!this.model) {
await this.initializeModel();
}
const tokenizerResponse = this.tokenizer.apply_chat_template(
[
{ role: 'system', content: this.systemPrompt },
{ role: 'user', content: query },
],
{
tokenize: false,
add_generation_prompt: true,
},
);
const response = await this.model(tokenizerResponse.toString());
const parsedQuery = response[0].generated_text;
return parsedQuery;
}
}
```
| https://github.com/huggingface/transformers.js/issues/906 | closed | [
"question"
] | 2024-08-28T09:46:17Z | 2024-08-28T21:01:10Z | null | SherifElfadaly |
huggingface/trl | 1,986 | How to convert a DPO dataset to a KTO dataset | ### Feature request
How can I convert a DPO dataset to a KTO dataset?
### Motivation
How can I convert a DPO dataset to a KTO dataset?
### Your contribution
How can I convert a DPO dataset to a KTO dataset? | https://github.com/huggingface/trl/issues/1986 | closed | [] | 2024-08-28T06:23:13Z | 2024-08-28T09:02:35Z | null | dotsonliu |
huggingface/datasets | 7,128 | Filter Large Dataset Entry by Entry | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```
from datasets import load_dataset
from itertools import islice

dataset = load_dataset(
"really-large-dataset",
streaming=True
)
# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)
# Define a function to filter the data
def filter_function(table):
if some_condition:
return True
else:
return False
# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear if the problem setup + solution makes sense to people, and if anyone has suggestions!
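As a side note, `IterableDataset` (what `streaming=True` returns) also exposes `.filter` and `.take` in recent `datasets` versions, which keep the whole chain lazy. A sketch with a placeholder predicate and the hypothetical dataset name from above:

```python
def keep(example):
    """Placeholder predicate: keep rows whose 'text' field is long enough."""
    return len(example.get("text", "")) > 100

# With `datasets` installed (the dataset name is the placeholder from above):
# from datasets import load_dataset
# streamed = load_dataset("really-large-dataset", split="train", streaming=True)
# good = streamed.filter(keep).take(10_000)  # lazy: replaces the itertools.islice step
```

Since both the generator expression and `.filter` are lazy, the main speed win comes from doing as little work as possible inside the predicate itself.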
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature | https://github.com/huggingface/datasets/issues/7128 | open | [
"enhancement"
] | 2024-08-27T20:31:09Z | 2024-10-07T23:37:44Z | 4 | QiyaoWei |
huggingface/huggingface_hub | 2,491 | How to upload folders into a repo in the most effective way - continue/resume on error, max speed | Hello. I have the tasks below for uploading; however, I am not sure whether they are the most effective way of doing this.
#### This cell is used to upload a single file into a repo with a certain name
```
from huggingface_hub import HfApi
api = HfApi()
api.upload_file(
path_or_fileobj=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion/model_name.safetensors",
path_in_repo="model_name.safetensors",
repo_id="YourUserName/reponame",
repo_type="model",
)
```
#### This cell is used to upload a folder into a repo with a single commit
```
from huggingface_hub import HfApi
api = HfApi()
api.upload_folder(
folder_path=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion",
repo_id="YourUserName/reponame",
repo_type="model",
)
```
This one is especially slow whenever I run it. I think it re-calculates the SHA of each file to check whether files were modified.
#### This cell uploads a folder into a remote repo with multiple commits
#### Supports a continue feature, so if it gets interrupted you can run it again to continue / resume
```
from huggingface_hub import HfApi
from huggingface_hub import get_collection, delete_collection_item
from huggingface_hub import upload_file
from huggingface_hub import (
HfFolder,
ModelCard,
ModelCardData,
create_repo,
hf_hub_download,
upload_folder,
whoami,
)
api = HfApi()
upload_folder(
folder_path=r"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion",
repo_id="YourUserName/reponame",
repo_type="model",
multi_commits=True,
multi_commits_verbose=True,
)
```
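For the slow, interruptible folder case, recent `huggingface_hub` releases (0.24+, if available in your environment) ship `HfApi.upload_large_folder`, which is designed to be resumable and to parallelize hashing and uploads. A hedged sketch:

```python
def push_folder(folder_path, repo_id):
    """Resumable, parallel upload; assumes huggingface_hub >= 0.24 is installed."""
    from huggingface_hub import HfApi  # imported lazily in this sketch
    HfApi().upload_large_folder(
        repo_id=repo_id,
        repo_type="model",  # upload_large_folder asks for the repo_type explicitly
        folder_path=folder_path,
    )
```

If interrupted, re-running the same call is supposed to pick up where it left off, which is the behavior the `multi_commits=True` snippet above was approximating.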
| https://github.com/huggingface/huggingface_hub/issues/2491 | closed | [
"bug"
] | 2024-08-27T16:36:04Z | 2024-08-28T08:24:22Z | null | FurkanGozukara |
huggingface/Google-Cloud-Containers | 73 | Download model files from GCS (Instead of HF Hub) | When deploying an HF model to Vertex AI, I would like to download a fine-tuned model from GCS, instead of from HF Hub, like so:
```
model = aiplatform.Model.upload(
display_name="my-model",
serving_container_image_uri=os.getenv("CONTAINER_URI"),
serving_container_environment_variables={
"AIP_STORAGE_URI": "gs://path/to/model/files",
},
serving_container_ports=[8080],
)
model.wait()
```
I would expect this to be supported since the entrypoint script logic should handle this: https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/tei/cpu/1.4.0/entrypoint.sh
Will this be supported when V1.4 is released? When will this be? | https://github.com/huggingface/Google-Cloud-Containers/issues/73 | closed | [
"tei",
"question"
] | 2024-08-27T12:14:10Z | 2024-09-16T07:07:11Z | null | rm-jeremyduplessis |
huggingface/chat-ui | 1,436 | MODELS=`[ variable problem when I docker run | Hello,
I want to use Ollama with the Mistral model, and I followed the documentation below: https://huggingface.co/docs/chat-ui/configuration/models/providers/ollama
`deploy.sh` :
```sh
#!/bin/bash
sudo docker compose down
sudo docker rm -f mongodb && sudo docker rm -f chat-ui
# nginx and ollama
sudo docker compose up -d
# mongodb
sudo docker run -d -p 27017:27017 -v mongodb-data:/data/db --name mongodb --network backend mongo:latest
# chat-ui
sudo docker run -d -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui --network proxy ghcr.io/huggingface/chat-ui-db && sudo docker network connect backend chat-ui
```
`docker-compose.yml` :
```YAML
services:
nginx:
image: nginx:latest
container_name: nginx
ports:
- 80:80
- 443:443
networks:
- proxy
volumes:
- ./nginx:/etc/nginx/conf.d
- ./ssl:/etc/ssl
restart: unless-stopped
ollama:
build:
context: ./ollama
dockerfile: Dockerfile
image: ollama-with-ca
container_name: ollama
ports:
- 11434:11434
networks:
- backend
environment:
- HTTPS_PROXY=http://<username>:<password>@proxy.test.fr:8090
volumes:
- ollama-data:/data
restart: unless-stopped
entrypoint: ["/bin/bash", "start-mistral.sh"]
networks:
backend:
proxy:
external: true
volumes:
ollama-data:
```
`.env.local` :
```
MONGODB_URL=mongodb://mongodb:27017
HF_TOKEN=hf_*****
MODELS=`[
{
"name": "Ollama Mistral",
"chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s> {{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["</s>"]
},
"endpoints": [
{
"type": "ollama",
"url" : "ollama://ollama:11434",
"ollamaName" : "mistral"
}
]
}
]`
```
When I run my script, at the end of the execution the container doesn't launch and I get the following error:
```sh
docker: poorly formatted environment: variable '"name": "Ollama Mistral",' contains whitespaces.
See 'docker run --help'.
```
I already tried putting the `chat-ui` and `mongodb` containers in the `docker-compose.yml` and it doesn't work either, same as this issue: https://github.com/huggingface/chat-ui/issues/614
Any solutions?
Thanks in advance.
| https://github.com/huggingface/chat-ui/issues/1436 | closed | [
"support"
] | 2024-08-26T14:00:26Z | 2024-08-27T11:04:39Z | 5 | avirgos |
huggingface/diffusers | 9,276 | How can I manually update some checkpoints of UNet2/3DConditionModel objects? | ### Discussed in https://github.com/huggingface/diffusers/discussions/9273
<div type='discussions-op-text'>
<sup>Originally posted by **justin4ai** August 26, 2024</sup>
Hello, I'm quite new to the diffusers package and trying to implement fine-tuning code that uses saved checkpoints initialized with the `UNet2/3DConditionModel.from_pretrained` method, as shown below:
```python
reference_unet = UNet2DConditionModel.from_pretrained(  # ReferenceNet only takes a 2D condition (reference image via CLIP)
cfg.base_model_path,
subfolder="unet",
).to(device="cuda")
denoising_unet = UNet3DConditionModel.from_pretrained_2d(
cfg.base_model_path,
"",
subfolder="unet",
unet_additional_kwargs={
"use_motion_module": False,
"unet_use_temporal_attention": False,
},
).to(device="cuda")
prev = denoising_unet.state_dict()
li = torch.load("./pretrained_weights/denoising_unet.pth")
denoising_unet.load_state_dict(li)  # direct key assignment on the module object doesn't work; load_state_dict copies the saved tensors into the matching layers
reference_unet.load_state_dict(torch.load("./pretrained_weights/reference_unet.pth"))
```
The checkpoint I am trying to load was saved from a previous training run of `UNet2/3DConditionModel` objects with `state_dict = model.state_dict()` and `torch.save(state_dict, save_path)`. But I have no idea how to directly assign certain values to specific layers in those class objects.
I would be very glad if you could help me out with this! Looking forward to your help. Also, please let me know if my description of the situation is not detailed enough.
Cheers,
Justin</div> | https://github.com/huggingface/diffusers/issues/9276 | open | [
"stale"
] | 2024-08-26T07:49:23Z | 2024-09-25T15:03:01Z | 1 | justin4ai |
huggingface/transformers | 33,115 | How to get the score of each token when using pipeline | pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1,
output_scores=True
)
The model I use is Qwen2-7B-Instruct. When I try to output the score of each token by modifying the parameters, it doesn't work. | https://github.com/huggingface/transformers/issues/33115 | closed | [
"Usage"
] | 2024-08-26T07:00:54Z | 2025-03-06T08:23:58Z | null | xin0623 |
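A hedged sketch for the per-token-score question above (assumed, not verified against this exact model): per-token scores are exposed by `model.generate` together with `compute_transition_scores`, rather than by the `text-generation` pipeline:

```python
def generate_with_scores(model, tokenizer, prompt, **gen_kwargs):
    """Return (decoded_token, log_prob) pairs for the newly generated tokens."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        output_scores=True,
        return_dict_in_generate=True,
        **gen_kwargs,
    )
    # one scores tensor per generated position -> per-token log-probabilities
    transition_scores = model.compute_transition_scores(
        out.sequences, out.scores, normalize_logits=True
    )
    new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
    return [
        (tokenizer.decode(tok_id), score)
        for tok_id, score in zip(new_tokens.tolist(), transition_scores[0].tolist())
    ]
```

The sampling parameters from the pipeline call (`temperature`, `top_p`, etc.) would be passed through `gen_kwargs`.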
huggingface/diffusers | 9,271 | Different output quality between ComfyUI and Diffusers? | ### Discussed in https://github.com/huggingface/diffusers/discussions/9265
<div type='discussions-op-text'>
<sup>Originally posted by **vuongminh1907** August 25, 2024</sup>
I had a problem using InstantID (https://github.com/instantX-research/InstantID), which uses Diffusers as its base. Additionally, I tried ComfyUI (https://github.com/cubiq/ComfyUI_InstantID), and I think the image quality was better there.
I discussed this with Cubiq, and he mentioned that there are no differences in how they applied the IP Adapter (https://github.com/cubiq/ComfyUI_InstantID/issues/206).

Can you explain this issue to me? Perhaps it’s related to the Sampler in ComfyUI and Diffusers.</div> | https://github.com/huggingface/diffusers/issues/9271 | closed | [
"stale"
] | 2024-08-26T02:53:23Z | 2024-10-15T18:10:42Z | 3 | vuongminh1907 |
huggingface/diffusers | 9,264 | Could you make an inpainting model for flux? | ### Model/Pipeline/Scheduler description
The [stable-diffusion-xl-1.0-inpainting-0.1](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1) model helps a lot. Could you make a similar inpainting model for flux?
https://huggingface.co/black-forest-labs/FLUX.1-dev
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1
https://huggingface.co/black-forest-labs/FLUX.1-dev | https://github.com/huggingface/diffusers/issues/9264 | closed | [] | 2024-08-24T17:32:32Z | 2024-08-24T17:37:59Z | 2 | snowbedding |
huggingface/transformers | 33,106 | How to fine-tune TrOCR on a specific language: guide request | ### Model description
Hello, I've looked through the issues and other resources, but none of them cover how to fine-tune TrOCR on a specific language, for example how to pick the encoder, the decoder, the model, and so on.
Can you, @NielsRogge, write a simple set of instructions / a guide on this topic?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/33106 | closed | [] | 2024-08-24T14:33:02Z | 2025-06-15T08:07:10Z | null | MohamedLahmeri01 |
huggingface/datasets | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed.
### Motivation
When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).
It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).
### Your contribution
I can make a suggestion for one approach to address the issue:
For instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?).
Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?
```
configs:
- config_name: <image subset>
data_files:
- <image-metadata>.csv
- <path/to/images>/*.jpg
```
I'd also be happy to look at whatever solution is decided upon and contribute to the ideation.
Thanks for your time and consideration! The dataset viewer really is fabulous when it works :) | https://github.com/huggingface/datasets/issues/7123 | open | [
"enhancement"
] | 2024-08-23T22:56:01Z | 2024-10-17T09:13:47Z | 3 | egrace479 |
huggingface/diffusers | 9,258 | Kohya SS FLUX LoRA training is way faster on Linux than on Windows, any ideas to debug? Same settings, libraries, and GPU | ### Describe the bug
I am using Kohya SS to train FLUX LoRA
On Linux, an RTX 3090 gets about 5.5 seconds/it at batch size 1 and 1024x1024 px resolution.
On Windows, an RTX 3090 Ti gets 7.7 seconds/it, with the most powerful CPU (13900K).
This speed discrepancy between Windows and Linux is huge for some reason.
Upgrading Torch from 2.1 to 2.4 on Linux caused a huge speed-up and VRAM usage reduction, but on Windows only the VRAM usage dropped; the speed stayed the same.
Any ideas for how to fix? Using SDPA Cross Attention
I am sharing venv pip freeze of both Windows and Linux
Both has Python 3.10.11
**Windows pip freeze**
```
Microsoft Windows [Version 10.0.19045.4717]
(c) Microsoft Corporation. All rights reserved.
R:\Kohya_GUI_Flux_Installer\kohya_ss\venv\Scripts>activate
(venv) R:\Kohya_GUI_Flux_Installer\kohya_ss\venv\Scripts>pip freeze
absl-py==2.1.0
accelerate==0.33.0
aiofiles==23.2.1
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
altair==4.2.2
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.4.0
appdirs==1.4.4
astunparse==1.6.3
async-timeout==4.0.3
attrs==24.2.0
bitsandbytes==0.43.3
certifi==2022.12.7
charset-normalizer==2.1.1
click==8.1.7
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.2.1
cycler==0.12.1
dadaptation==3.2
diffusers==0.25.0
docker-pycreds==0.4.0
easygui==0.98.3
einops==0.7.0
entrypoints==0.4
exceptiongroup==1.2.2
fairscale==0.4.13
fastapi==0.112.1
ffmpy==0.4.0
filelock==3.13.1
flatbuffers==24.3.25
fonttools==4.53.1
frozenlist==1.4.1
fsspec==2024.2.0
ftfy==6.1.1
gast==0.6.0
gitdb==4.0.11
GitPython==3.1.43
google-pasta==0.2.0
gradio==4.41.0
gradio_client==1.3.0
grpcio==1.65.5
h11==0.14.0
h5py==3.11.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.24.5
humanfriendly==10.0
idna==3.4
imagesize==1.4.1
importlib_metadata==8.4.0
importlib_resources==6.4.4
invisible-watermark==0.2.0
Jinja2==3.1.3
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
keras==3.5.0
kiwisolver==1.4.5
libclang==18.1.1
-e git+https://github.com/kohya-ss/sd-scripts.git@e1cd19c0c0ef55709e8eb1e5babe25045f65031f#egg=library&subdirectory=..\..\sd-scripts
lightning-utilities==0.11.6
lion-pytorch==0.0.6
lycoris-lora==2.2.0.post3
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.2
mdurl==0.1.2
ml-dtypes==0.4.0
mpmath==1.3.0
multidict==6.0.5
namex==0.0.8
networkx==3.2.1
numpy==1.26.3
nvidia-cublas-cu12==12.4.2.65
nvidia-cuda-cupti-cu12==12.4.99
nvidia-cuda-nvrtc-cu12==12.4.99
nvidia-cuda-runtime-cu12==12.4.99
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.0.44
nvidia-curand-cu12==10.3.5.119
nvidia-cusolver-cu12==11.6.0.99
nvidia-cusparse-cu12==12.3.0.142
nvidia-nvjitlink-cu12==12.4.99
nvidia-nvtx-cu12==12.4.99
omegaconf==2.3.0
onnx==1.16.1
onnxruntime-gpu==1.17.1
open-clip-torch==2.20.0
opencv-python==4.7.0.68
opt-einsum==3.3.0
optree==0.12.1
orjson==3.10.7
packaging==24.1
pandas==2.2.2
pathtools==0.1.2
pillow==10.2.0
prodigyopt==1.0
protobuf==3.20.3
psutil==6.0.0
pydantic==2.8.2
pydantic_core==2.20.1
pydub==0.25.1
Pygments==2.18.0
pyparsing==3.1.2
pyreadline3==3.4.1
python-dateutil==2.9.0.post0
python-multipart==0.0.9
pytorch-lightning==1.9.0
pytz==2024.1
PyWavelets==1.7.0
PyYAML==6.0.2
referencing==0.35.1
regex==2024.7.24
requests==2.32.3
rich==13.7.1
rpds-py==0.20.0
ruff==0.6.1
safetensors==0.4.4
scipy==1.11.4
semantic-version==2.10.0
sentencepiece==0.2.0
sentry-sdk==2.13.0
setproctitle==1.3.3
shellingham==1.5.4
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
starlette==0.38.2
sympy==1.12
tensorboard==2.17.1
tensorboard-data-server==0.7.2
tensorflow==2.17.0
tensorflow-intel==2.17.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.4.0
timm==0.6.12
tk==0.1.0
tokenizers==0.19.1
toml==0.10.2
tomlkit==0.12.0
toolz==0.12.1
torch==2.4.0+cu124
torchmetrics==1.4.1
torchvision==0.19.0+cu124
tqdm==4.66.5
transformers==4.44.0
typer==0.12.4
typing_extensions==4.9.0
tzdata==2024.1
urllib3==2.2.2
uvicorn==0.30.6
voluptuous==0.13.1
wandb==0.15.11
wcwidth==0.2.13
websockets==12.0
Werkzeug==3.0.4
wrapt==1.16.0
xformers==0.0.27.post2
yarl==1.9.4
zipp==3.20.0
(venv) R:\Kohya_GUI_Flux_Installer\kohya_ss\venv\Scripts>
```
**Ubuntu pip freeze**
```
(venv) Ubuntu@0054-kci-prxmx10136:~/apps/kohya_ss$ pip freeze
absl-py==2.1.0
accelerate==0.33.0
aiofiles==23.2.1
aiohttp==3.9.5
aiosignal==1.3.1
altair==4.2.2
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.4.0
appdirs==1.4.4
astunparse==1.6.3
async-timeout==4.0.3
attrs==23.2.0
bitsandbytes==0.43.3
cachetools==5.3.3
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
coloredlogs==15.0.1
contourpy==1.2.1
cycler==0.12.1
dadaptation==3.1
diffusers==0.25.0
dnspython==2.6.1
docker-pycreds==0.4.0
easygui==0.98.3
einops==0.7.0
email_validator==2.1.1
entrypoints==0.4
exceptiongroup==1.2.1
fairscale==0.4.13
fastapi==0.111.0
fastapi-cli==0.0 | https://github.com/huggingface/diffusers/issues/9258 | closed | [
"bug"
] | 2024-08-23T11:42:53Z | 2024-08-23T11:55:18Z | 1 | FurkanGozukara |
huggingface/datasets | 7,122 | [interleave_dataset] sample batches from a single source at a time | ### Feature request
`interleave_datasets` and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) let us sample individual examples from different sources. But can we also sample whole batches in a similar manner, so that each batch only contains data from a single source?
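A rough, stand-alone sketch of the requested behavior, using plain Python iterables as stand-ins for dataset sources (the function name and signature here are hypothetical, not part of the `datasets` API):

```python
import random

def homogeneous_batches(sources, batch_size, seed=0):
    """Yield batches, each drawn entirely from one randomly chosen source."""
    rng = random.Random(seed)
    iterators = [iter(s) for s in sources]
    alive = list(range(len(sources)))  # indices of sources not yet exhausted
    while alive:
        i = rng.choice(alive)
        batch = []
        for _ in range(batch_size):
            try:
                batch.append(next(iterators[i]))
            except StopIteration:
                alive.remove(i)
                break
        if batch:
            yield batch

batches = list(homogeneous_batches([["a"] * 4, ["b"] * 2], batch_size=2))
# By construction every batch is homogeneous: items come from a single source.
assert all(len(set(b)) == 1 for b in batches)
```

A real implementation would presumably plug the same per-batch source choice into `RandomlyCyclingMultiSourcesExamplesIterable`, including its sampling probabilities.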
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source homogenous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness. | https://github.com/huggingface/datasets/issues/7122 | open | [
"enhancement"
] | 2024-08-23T07:21:15Z | 2024-08-23T07:21:15Z | 0 | memray |
huggingface/text-generation-inference | 2,452 | How to get the token probability by curl request? | ### Feature request
curl -v -X POST http://.....srv/generate -H "Content-Type: application/json" -d '{"inputs": "xxxxx:","parameters": {"max_new_tokens": 256}}'
Using this curl request, I get output like:
{"generated_text": xxxx}
How can I get the probability of the generated tokens from the LLM in a TGI service?
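For reference, newer TGI versions can return per-token log-probabilities when the request asks for generation details. A sketch of the payload (the `details` flag is an assumption to verify against your server's version and its `/docs` page):

```python
import json

# Request payload for TGI's /generate route; "details": True asks the server
# to include per-token information (text, id, logprob) in the response.
payload = {
    "inputs": "xxxxx:",
    "parameters": {"max_new_tokens": 256, "details": True},
}
print(json.dumps(payload))
```

If the server supports it, the response should then carry a `details.tokens` list whose entries include a `logprob` field alongside each generated token.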
### Motivation
no
### Your contribution
no | https://github.com/huggingface/text-generation-inference/issues/2452 | closed | [] | 2024-08-23T03:01:17Z | 2024-08-27T01:32:44Z | null | TWSFar |
huggingface/speech-to-speech | 37 | [Feature request] How about adding an optional speech to viseme model at the end of our chain? | Hi there,
Thank you so much for your work on this project. It's truly amazing, and I’m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.
To support this, could you help us add functionality to the current flow? The current process includes 1) speech-to-text, 2) LLM, and 3) text-to-speech. I’d like to add a fourth step: either speech-to-viseme or speech-to-text with `return_timestamp = "word"`, followed by manual mapping of words to phonemes, and then to visemes.
Best regards,
Fabio | https://github.com/huggingface/speech-to-speech/issues/37 | open | [] | 2024-08-22T21:32:47Z | 2024-09-09T17:16:45Z | null | fabiocat93 |
huggingface/huggingface_hub | 2,480 | How to use the HF Nvidia NIM API with the HF inference client? | ### Describe the bug
We recently introduced the [Nvidia NIM API](https://huggingface.co/blog/inference-dgx-cloud) for selected models. The recommended use is via the OAI client like this (with a specific fine-grained token for an enterprise org):
```py
from openai import OpenAI
client = OpenAI(
base_url="https://huggingface.co/api/integrations/dgx/v1",
api_key="YOUR_FINE_GRAINED_TOKEN_HERE"
)
chat_completion = client.chat.completions.create(
model="meta-llama/Meta-Llama-3-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Count to 500"}
],
stream=True,
max_tokens=1024
)
# Iterate and print stream
for message in chat_completion:
print(message.choices[0].delta.content, end='')
```
How can users use this API with the HF inference client directly?
The InferenceClient.chat_completions [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) provide this example snippet for OAI syntax (example 3):
```py
# instead of `from openai import OpenAI`
from huggingface_hub import InferenceClient
# instead of `client = OpenAI(...)`
client = InferenceClient(
base_url=...,
api_key=...,
)
output = client.chat.completions.create(
model="meta-llama/Meta-Llama-3-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Count to 10"},
],
stream=True,
max_tokens=1024,
)
for chunk in output:
print(chunk.choices[0].delta.content)
```
When I transpose the logic from the NIM OAI code snippet to the code above, I get this:
```py
# instead of `from openai import OpenAI`
from huggingface_hub import InferenceClient
# instead of `client = OpenAI(...)`
client = InferenceClient(
api_key="enterprise-org-token",
base_url="https://huggingface.co/api/integrations/dgx/v1",
)
output = client.chat.completions.create(
model="meta-llama/Meta-Llama-3-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Count to 10"},
],
stream=True,
max_tokens=1024,
)
for chunk in output:
print(chunk.choices[0].delta.content)
```
This throws this error:
```py
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/miniconda/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:304, in hf_raise_for_status(response, endpoint_name)
303 try:
--> 304 response.raise_for_status()
305 except HTTPError as e:
File ~/miniconda/lib/python3.9/site-packages/requests/models.py:1024, in Response.raise_for_status(self)
1023 if http_error_msg:
-> 1024 raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/integrations/dgx/v1/chat/completions
The above exception was the direct cause of the following exception:
BadRequestError Traceback (most recent call last)
Cell In[48], line 10
4 # instead of `client = OpenAI(...)`
5 client = InferenceClient(
6 api_key="hf_****",
7 base_url="https://huggingface.co/api/integrations/dgx/v1",
8 )
---> 10 output = client.chat.completions.create(
11 model="meta-llama/Meta-Llama-3-8B-Instruct",
12 messages=[
13 {"role": "system", "content": "You are a helpful assistant."},
14 {"role": "user", "content": "Count to 10"},
15 ],
16 stream=True,
17 max_tokens=1024,
18 )
20 for chunk in output:
21 print(chunk.choices[0].delta.content)
File ~/miniconda/lib/python3.9/site-packages/huggingface_hub/inference/_client.py:837, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p)
833 # `model` is sent in the payload. Not used by the server but can be useful for debugging/routing.
834 # If it's a ID on the Hub => use it. Otherwise, we use a random string.
835 model_id = model if not is_url and model.count("/") == 1 else "tgi"
--> 837 data = self.post(
838 model=model_url,
839 json=dict(
840 model=model_id,
841 messages=messages,
842 frequency_penalty=frequency_penalty,
843 logit_bias=logit_bias,
844 logprobs=logprobs,
845 max_tokens=max_tokens,
846 n=n,
847 presence_penalty=presence_penalty,
848 response_format=response_format,
849 seed | https://github.com/huggingface/huggingface_hub/issues/2480 | closed | [
"bug"
] | 2024-08-22T12:32:16Z | 2024-08-26T12:45:55Z | null | MoritzLaurer |
huggingface/transformers.js | 896 | How to use this model: Xenova/bge-reranker-base | ### Question
I see that this model supports Transformers.js, but I can't find usage instructions. Please help me with using it.
"question"
] | 2024-08-22T07:33:42Z | 2024-08-29T00:12:52Z | null | gy9527 |
huggingface/sentence-transformers | 2,900 | how to keep `encode_multi_process` output on the GPU | I saw this [example](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic-search/semantic_search.py) where we can do the following:
`query_embedding = embedder.encode(query, convert_to_tensor=True)`
`hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)`
I read that setting `convert_to_tensor=True` keeps the embedding vectors on the GPU to optimize the similarity calculations. But if I work with multiple CPUs and GPUs, can I do the same? I didn't see a `convert_to_tensor` argument for `encode_multi_process`. | https://github.com/huggingface/sentence-transformers/issues/2900 | open | [] | 2024-08-21T21:05:35Z | 2024-08-21T21:07:39Z | null | anshuchen |
huggingface/parler-tts | 116 | How to use the Italian language? | Is it possible to use an Italian-style speaker? I've tried many prompts, but all of them come out in an English style. | https://github.com/huggingface/parler-tts/issues/116 | open | [] | 2024-08-21T15:24:57Z | 2025-06-18T13:20:22Z | null | piperino11 |
huggingface/chat-ui | 1,423 | Generated answers with Llama 3 include <|start_header_id|>assistant<|end_header_id|> | ## Bug description
I have set up a local endpoint serving Llama 3. All the answers I get from it start with `<|start_header_id|>assistant<|end_header_id|>`.
## Steps to reproduce
Set up Llama 3 in a local endpoint. In my `.env.local`, it is defined as the following:
```
MODELS=`[
{
"name": "llama3",
"displayName": "Llama 3 loaded from GCS",
"chatPromptTemplate": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{{preprompt}}<|eot_id|>{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{/ifUser}}{{#ifAssistant}}{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}",
"preprompt": "You are a helpful AI assistant.",
"parameters": {
"stop": ["<|endoftext|>", "<|eot_id|>"],
"temperature": 0.4,
"max_new_tokens": 1024,
"truncate": 3071
},
"endpoints": [{
"type": "openai",
"baseURL": "http://localhost:8080/openai/v1"
}],
}
]`
```
## Context
I have tried variations of the chat template, also not providing any. The `<|start_header_id|>assistant<|end_header_id|>` is always there.
AFAIK, these tokens should be the last ones in the prompt, so that the model knows that it should continue the prompt with the assistant's answer. It seems they are not properly appended to the prompt, but the model still realizes it should add them itself.
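As a client-side stopgap (a hedged sketch, not a fix for the underlying template handling), the stray header can be stripped from generated text before display:

```python
ASSISTANT_HEADER = "<|start_header_id|>assistant<|end_header_id|>"

def strip_assistant_header(text: str) -> str:
    """Remove a leading Llama 3 assistant header that the model emitted itself."""
    text = text.lstrip()
    if text.startswith(ASSISTANT_HEADER):
        text = text[len(ASSISTANT_HEADER):].lstrip("\n")
    return text
```

This only masks the symptom; the prompt sent to the backend still needs to end with the assistant header for the template to behave as intended.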
### Logs
This a sample request that my local server receives (running VLLM):
```
INFO 08-21 11:47:18 async_llm_engine.py:529] Received request cmpl-d1482c4eb4ce49c2a259a2f782ee3712-0: prompt: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant. Unless otherwise specified, give concise and straightforward answers.<|eot_id|><|start_header_id|>user<|end_header_id|>
[ChatCompletionRequestMessageContentPartText(type='text', text='Hi, what is pizza?')]<|eot_id|>", sampling_params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.4, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|endoftext|>', '<|eot_id|>'], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1024, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128000, 128006, 9125, 128007, 271, 2675, 527, 264, 11190, 15592, 18328, 13, 11115, 6062, 5300, 11, 3041, 64694, 323, 31439, 11503, 13, 128009, 128006, 882, 128007, 271, 58, 16047, 34290, 1939, 2097, 2831, 5920, 1199, 5930, 1151, 1342, 518, 1495, 1151, 13347, 11, 1148, 374, 23317, 30, 52128, 128009], lora_request: None.
```
### Specs
- **OS**: macOS
- **Browser**: Firefox 129.0.1
- **chat-ui commit**: 28351dfefa581e4494b2047de3c093eaa7a7cdbc
### Config
```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=...
```
## Notes
I'm not sure what the `ChatCompletionRequestMessageContentPartText(...)` in the prompt is supposed to mean. Is it some internal request object rendered as a string? | https://github.com/huggingface/chat-ui/issues/1423 | closed | [
"support"
] | 2024-08-21T11:56:47Z | 2024-08-26T14:31:53Z | 5 | erickrf |
huggingface/trl | 1,955 | How to fine-tune LLaVA using PPO | Does LLaVA support training with PPO?
If not, what modifications do I need to make to enable this support? | https://github.com/huggingface/trl/issues/1955 | open | [
"✨ enhancement",
"👁️ VLM"
] | 2024-08-21T07:34:30Z | 2024-08-26T11:13:46Z | null | Yufang-Liu |
huggingface/diffusers | 9,235 | Is there any way to get diffusers-v0.27.0.dev0? | Is there any way to get diffusers-v0.27.0.dev0? I want to compare the difference between diffusers-v0.27.0.dev0 and branches that develop on it in another project, but I didn't find it on the releases or tags page. | https://github.com/huggingface/diffusers/issues/9235 | closed | [] | 2024-08-21T03:42:11Z | 2024-08-21T05:10:26Z | 2 | D222097 |
huggingface/llm.nvim | 108 | How to use proxy env var | I am unable to communicate with any http endpoints because I am behind a corporate proxy that uses self-signed certificates. Typically we use the http_proxy and https_proxy environment variables for this purpose, but I can't see any obvious configurations that I can add to my lua config to make this work.
I have tried adding http_proxy = "http://ProxyURL:ProxyPort" to cmd_env in the llm.setup call, but it still throws an HTTP error: invalid peer certificate, unknown issuer.
huggingface/huggingface_hub | 2,468 | How can I modify this repo files downloader jupyter notebook script to improve downloading speed? Perhaps multiple downloads at the same time? | The code below works, but it is just slow.
How can I speed it up? The machine has much more bandwidth available, and I really need to download lots of AI models for testing.
Thank you
```
import os
import requests
import hashlib
from huggingface_hub import list_repo_files, hf_hub_url, hf_hub_download
from huggingface_hub.utils import HfFolder
from tqdm import tqdm
def calculate_file_hash(file_path):
sha256_hash = hashlib.sha256()
with open(file_path, "rb") as f:
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
return sha256_hash.hexdigest()
def download_file(url, target_path, headers, expected_size=None):
response = requests.get(url, headers=headers, stream=True)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
mode = 'ab' if os.path.exists(target_path) else 'wb'
with tqdm(total=total_size, unit='B', unit_scale=True, desc=os.path.basename(target_path), initial=0, ascii=True) as pbar:
with open(target_path, mode) as f:
for chunk in response.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
pbar.update(len(chunk))
if expected_size and os.path.getsize(target_path) != expected_size:
raise ValueError(f"Size mismatch for {target_path}. Expected: {expected_size}, Got: {os.path.getsize(target_path)}")
# Define the repository and target folder
repo_id = "YourUserName/reponame"
target_folder = "/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion"
# Retrieve the token from the .huggingface folder or set it manually
token = HfFolder.get_token()
if not token:
raise ValueError("Hugging Face token not found. Please log in using `huggingface-cli login` or set the token manually.")
headers = {"Authorization": f"Bearer {token}"}
# List all files in the repository
files = list_repo_files(repo_id)
# Ensure the target folder exists
os.makedirs(target_folder, exist_ok=True)
# Download each file directly to the target folder
for file in files:
try:
target_path = os.path.join(target_folder, file)
# Get file metadata
file_info = hf_hub_download(repo_id, filename=file, repo_type='model', token=token, local_dir=target_folder, local_dir_use_symlinks=False)
expected_size = os.path.getsize(file_info)
# Check if the file already exists and has the correct size
if os.path.exists(target_path):
if os.path.getsize(target_path) == expected_size:
print(f"File {file} already exists and is complete. Skipping download.")
continue
else:
print(f"File {file} exists but is incomplete. Resuming download.")
# Get the URL for the file
file_url = hf_hub_url(repo_id, filename=file, repo_type='model')
# Ensure subdirectories exist
os.makedirs(os.path.dirname(target_path), exist_ok=True)
# Download the file with authentication and size verification
download_file(file_url, target_path, headers, expected_size)
# Set the correct permissions for the downloaded file
os.chmod(target_path, 0o644) # Read and write for owner, read for group and others
except Exception as e:
print(f"An error occurred while processing file {file}: {e}")
print(f"All files have been downloaded and verified in {target_folder}")
```
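Two things likely to help: enabling `hf_transfer` (`pip install hf_transfer` plus `HF_HUB_ENABLE_HF_TRANSFER=1`; the environment info below shows it is currently off), and downloading several files concurrently. A minimal concurrency sketch using only the standard library — `download_one` is a stand-in for the per-file logic in the script above:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_all(files, download_one, max_workers=4):
    """Run download_one(file) for each file concurrently; collect results or errors."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(download_one, f): f for f in files}
        for future in as_completed(futures):
            name = futures[future]
            try:
                results[name] = future.result()
            except Exception as exc:  # keep going even if one file fails
                results[name] = exc
    return results
```

Downloads are I/O-bound, so threads overlap well; keeping `max_workers` modest stays polite to the Hub and avoids saturating the connection.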
### System info
```shell
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.24.6
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: Yes
- iPython shell: ZMQInteractiveShell
- Running in notebook ?: Yes
- Running in Google Colab ?: No
- Token path ?: /home/Ubuntu/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: MonsterMMORPG
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: N/A
- pydantic: N/A
- aiohttp: 3.10.5
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/Ubuntu/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/Ubuntu/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/Ubuntu/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
{'huggingface_hub version': '0.24.6',
'Platform': 'Linux-6.5.0-45-generic-x86_64-with-glibc2.35',
'Python version': '3.10.12',
'Running in iPython ?': 'Yes',
'iPython shell': 'ZM | https://github.com/huggingface/huggingface_hub/issues/2468 | closed | [] | 2024-08-20T15:13:13Z | 2024-08-27T16:22:14Z | null | FurkanGozukara |
huggingface/datasets | 7,116 | datasets cannot handle nested json if features is given. | ### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
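One possible culprit (an assumption worth testing): `Sequence` of a dict is stored by `datasets` as a dict of lists, which does not match a JSON list of objects; a list of structs is instead spelled as a Python list wrapping the per-field features. A sketch — the `datasets` part is left as a comment since it is the hypothesis to verify:

```python
import json
import os
import tempfile

# Recreate the example file from the report.
path = os.path.join(tempfile.mkdtemp(), "temp.json")
with open(path, "w") as f:
    json.dump({"ref1": "ABC", "ref2": "DEF", "cuts": [{"cut1": 3, "cut2": 5}]}, f)

# Hypothetical fix (untested here): spell the list-of-structs feature as a
# Python list wrapping the per-field features instead of Sequence({...}):
#
# features = datasets.Features({
#     "ref1": datasets.Value("string"),
#     "ref2": datasets.Value("string"),
#     "cuts": [{"cut1": datasets.Value("uint16"),
#               "cut2": datasets.Value("uint16")}],
# })
# ds = datasets.load_dataset("json", data_files=path, features=features)
print(path)
```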
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | https://github.com/huggingface/datasets/issues/7116 | closed | [] | 2024-08-20T12:27:49Z | 2024-09-03T10:18:23Z | 3 | ljw20180420 |
huggingface/datasets | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of each dataset can vary (from 100-ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1, but this problem showed up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains.
Please see the code below to reproduce the problem.
The dataset can iterate correctly if we set either streaming=False or drop_last_batch=False.
I have to use drop_last_batch=True since it's for distributed training.
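The mechanism is easy to reproduce without `datasets`: when every batch must be full and the whole source is smaller than one batch, a drop-last batcher yields nothing, which would explain the silent empty iteration (presumably what the streaming `map` now does under the hood):

```python
def batches(items, batch_size, drop_last_batch):
    """Group items into fixed-size batches, optionally dropping the leftover."""
    batch = []
    for x in items:
        batch.append(x)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last_batch:
        yield batch

# 100 examples with batch_size=101: the only batch is the (partial) last one,
# so drop_last_batch=True leaves nothing to iterate.
assert list(batches(range(100), 101, drop_last_batch=True)) == []
assert len(list(batches(range(100), 101, drop_last_batch=False))) == 1
```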
### Steps to reproduce the bug
```python
# datasets==2.21.0
import datasets
def data_prepare(examples):
print(examples["sentence1"][0])
return examples
batch_size = 101
# the size of the dataset is 100
# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False
dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: data_prepare(x),
drop_last_batch=True,
batched=True, batch_size=batch_size)
for ex in dataset:
print(ex)
pass
```
### Expected behavior
The dataset iterates regardless of the batch size.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
| https://github.com/huggingface/datasets/issues/7113 | closed | [] | 2024-08-20T08:26:40Z | 2024-08-26T04:24:11Z | 1 | memray |
huggingface/diffusers | 9,216 | I made a pipeline that lets you use any number of models at once | ### Model/Pipeline/Scheduler description
Here's how to do it:
```python
import torch  # needed for torch.float32
from rubberDiffusers import StableDiffusionRubberPipeline

pipe = StableDiffusionRubberPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32,
    local_files_only=True, safety_checker=None, requires_safety_checker=False,
)
pipe2 = StableDiffusionRubberPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32,
    local_files_only=True, safety_checker=None, requires_safety_checker=False,
)
apply_multiModel(pipe)  # helper provided by the RubberDiffusers repo
pipe.added_model = [pipe2]
image = pipe(
    "your prompt", width=512, height=512,
    pos=["0:0-512:512"], mask_strengths=[0.5],
    model_kwargs=[{"prompt": "your prompt for the first loaded model"}],
).images[0]
```
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
https://github.com/alexblattner/RubberDiffusers | https://github.com/huggingface/diffusers/issues/9216 | open | [
"stale"
] | 2024-08-19T11:46:08Z | 2024-09-21T15:03:31Z | 3 | alexblattner |
huggingface/transformers | 32,873 | How to use 【examples/pytorch/contrastive-image-text】 for inference | ### Feature request
I have reviewed the training code for CLIP and successfully executed it. Now, I want to use the obtained model for inference testing.
### Motivation
I would like to test the performance of the model I have trained.
### Your contribution
I hope I can get an example script for inference testing, similar to the training script below:
```shell
python examples/pytorch/contrastive-image-text/run_clip.py \
    --output_dir ./clip-roberta-finetuned \
    --model_name_or_path ./clip-roberta \
    --data_dir $PWD/data \
    --dataset_name ydshieh/coco_dataset_script \
    --dataset_config_name=2017 \
    --image_column image_path \
    --caption_column caption \
    --remove_unused_columns=False \
    --do_train --do_eval \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
    --overwrite_output_dir \
    --push_to_hub
```
 | https://github.com/huggingface/transformers/issues/32873 | open | [
"Feature request"
] | 2024-08-19T05:54:54Z | 2024-08-19T08:33:50Z | null | rendaoyuan |
huggingface/chat-ui | 1,415 | Bad request: Task not found for this model | Hi all,
I am facing the following issue when using `HuggingFaceEndpoint` with Gradio for my custom fine-tuned model in my public repository "Nithish-2001/RAG-29520hd0-1-chat-finetune".
llm_name: Nithish-2001/RAG-29520hd0-1-chat-finetune
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/Nithish-2001/RAG-29520hd0-1-chat-finetune
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 763, in predict
output = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 288, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1931, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1516, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "<ipython-input-7-4e46265a5151>", line 90, in conversation
response = qa_chain.invoke({"question": message, "chat_history": formatted_chat_history})
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/conversational_retrieval/base.py", line 169, in _call
answer = self.combine_docs_chain.run(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 603, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 381, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/base.py", line 138, in _call
output, extra_return_dict = self.combine_docs(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/stuff.py", line 257, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 316, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 381, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 126, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 138, in generate
return self.llm.generate_prompt(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 750, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File | https://github.com/huggingface/chat-ui/issues/1415 | open | [
"support"
] | 2024-08-18T09:33:10Z | 2024-08-25T22:38:00Z | 1 | NITHISH-Projects |
huggingface/sentence-transformers | 2,893 | How to fine-tune sentence-transformers with unsupervised methods? | How can I fine-tune sentence-transformers with unsupervised methods for semantic search? | https://github.com/huggingface/sentence-transformers/issues/2893 | closed | [] | 2024-08-17T02:32:09Z | 2024-08-18T02:51:29Z | null | keyuchen21 |
huggingface/diffusers | 9,205 | Can we pass output_attentions=True to DiT model such as pixart to get attention output? | Can we pass output_attentions=True to DiT model such as pixart to get attention output? Like using output_attentions=True in transformer? | https://github.com/huggingface/diffusers/issues/9205 | open | [
"stale"
] | 2024-08-16T17:26:14Z | 2024-09-16T15:02:42Z | 1 | foreverpiano |
huggingface/datatrove | 266 | How to look into the processed data? | Hi,
After running `tokenize_from_hf_to_s3.py`, I would like to inspect the resulting data, but I find that it is stored in a binary file (`.ds`). Is there a way for me to look into the data?
Thanks! | https://github.com/huggingface/datatrove/issues/266 | open | [] | 2024-08-16T16:54:45Z | 2024-08-29T15:26:35Z | null | shizhediao |
huggingface/trl | 1,934 | How to Save the PPOTrainer? | The previous issue for this question https://github.com/huggingface/trl/issues/1643#issue-2294886330 is closed but remained unanswered. If I do `ppo_trainer.save_pretrained('path/to/a/folder')` and then `ppo_trainer.from_pretrained('path/to/that/folder')`, I get this error:
ValueError: tokenizer must be a PreTrainedTokenizerBase like a PreTrainedTokenizer or a PreTrainedTokenizerFast, got <class 'NoneType'>
It seems that the `PPOTrainer` object does not implement the two functions from `huggingface_hub.PyTorchModelHubMixin`. How should I save my trainer then? | https://github.com/huggingface/trl/issues/1934 | closed | [] | 2024-08-16T09:41:39Z | 2024-10-07T14:57:51Z | null | ThisGuyIsNotAJumpingBear |
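Until loading the trainer object itself works, the usual pattern is to persist the pieces the trainer was built from — the model and the tokenizer — and reconstruct the trainer from them (in trl terms that would be `ppo_trainer.model.save_pretrained(...)` plus `tokenizer.save_pretrained(...)`, then passing both back into `PPOTrainer(...)`; treat those exact calls as assumptions to verify). A runnable toy of that reconstruct-from-parts pattern, with stub classes standing in for the real model, tokenizer, and trainer:

```python
import json, os, tempfile

class ToyModel:
    """Stand-in for the policy model (a real one is a transformers model)."""
    def __init__(self, weights): self.weights = weights
    def save_pretrained(self, folder):
        with open(os.path.join(folder, "model.json"), "w") as f:
            json.dump(self.weights, f)
    @classmethod
    def from_pretrained(cls, folder):
        with open(os.path.join(folder, "model.json")) as f:
            return cls(json.load(f))

class ToyTokenizer:
    def __init__(self, vocab): self.vocab = vocab
    def save_pretrained(self, folder):
        with open(os.path.join(folder, "tokenizer.json"), "w") as f:
            json.dump(self.vocab, f)
    @classmethod
    def from_pretrained(cls, folder):
        with open(os.path.join(folder, "tokenizer.json")) as f:
            return cls(json.load(f))

class ToyPPOTrainer:
    def __init__(self, model, tokenizer):
        if tokenizer is None:
            raise ValueError("tokenizer must not be None")  # the failure mode in the issue
        self.model, self.tokenizer = model, tokenizer
    def save_pretrained(self, folder):
        os.makedirs(folder, exist_ok=True)
        self.model.save_pretrained(folder)
        self.tokenizer.save_pretrained(folder)
    @classmethod
    def from_parts(cls, folder):
        # Rebuild the trainer from its saved parts instead of deserializing it whole.
        return cls(ToyModel.from_pretrained(folder), ToyTokenizer.from_pretrained(folder))

with tempfile.TemporaryDirectory() as folder:
    trainer = ToyPPOTrainer(ToyModel({"w": 1.0}), ToyTokenizer({"hi": 0}))
    trainer.save_pretrained(folder)
    restored = ToyPPOTrainer.from_parts(folder)
    print(restored.model.weights)  # {'w': 1.0}
```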
huggingface/parler-tts | 109 | How many epoch of training did you do? What is the accuracy? | How many epoch of training did you do? What is the accuracy? | https://github.com/huggingface/parler-tts/issues/109 | open | [] | 2024-08-16T09:35:31Z | 2024-08-16T09:35:31Z | null | xuezhongfei2008 |
huggingface/diffusers | 9,195 | Problem with Flux Schnell bfloat16 multiGPU | ### Describe the bug
Hello! I set device_map='balanced' and get images generated in 2.5 minutes (expected in 12-20 seconds), while in pipe.hf_device_map it shows that the devices are distributed like this:
```
{
"transformer": "cuda:0",
"text_encoder_2": "cuda:2",
"text_encoder": "cuda:0",
"vae": "cuda:1"
}
```
I have 3 video cards 3090 Ti 24 GB and I can’t run it on them.
I also tried this way:
```python
pipe.transformer.to('cuda:2')
pipe.text_encoder.to('cuda:2')
pipe.text_encoder_2.to('cuda:1')
pipe.vae.to('cuda:0')
```
What is the best way to launch it so that generation occurs on the GPU and quickly?
### Reproduction
```python
pipe = FluxPipeline.from_pretrained(
path_chkpt,
torch_dtype=torch.bfloat16,
device_map='balanced',
)
```
### Logs
_No response_
### System Info
ubuntu 22.04 3 GPU: 3090 TI 24 GB
accelerate==0.30.1
addict==2.4.0
apscheduler==3.9.1
autocorrect==2.5.0
chardet==4.0.0
cryptography==37.0.2
curl_cffi
diffusers==0.30.0
beautifulsoup4==4.11.2
einops
facexlib>=0.2.5
fastapi==0.92.0
hidiffusion==0.1.6
invisible-watermark>=0.2.0
numpy==1.24.3
opencv-python==4.8.0.74
pandas==2.0.3
pycocotools==2.0.6
pymystem3==0.2.0
pyyaml==6.0
pyjwt==2.6.0
python-multipart==0.0.5
pytrends==4.9.1
psycopg2-binary
realesrgan==0.3.0
redis==4.5.1
sacremoses==0.0.53
selenium==4.2.0
sentencepiece==0.1.97
scipy==1.10.1
scikit-learn==0.24.1
supervision==0.16.0
tb-nightly==2.14.0a20230629
tensorboard>=2.13.0
tomesd
transformers==4.40.1
timm==0.9.16
yapf==0.32.0
uvicorn==0.20.0
spacy==3.7.2
nest_asyncio==1.5.8
httpx==0.25.0
torchvision==0.15.2
insightface==0.7.3
psutil==5.9.6
tk==0.1.0
customtkinter==5.2.1
tensorflow==2.13.0
opennsfw2==0.10.2
protobuf==4.24.4
gfpgan==1.3.8
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9195 | closed | [
"bug"
] | 2024-08-16T06:30:54Z | 2025-12-05T06:38:14Z | 26 | OlegRuban-ai |
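`device_map='balanced'` does roughly a greedy size-balanced assignment of whole components to devices; the slowdown usually comes from activations being shipped between cards on every step, not from the placement itself. A sketch of the placement logic — the component sizes in GB below are made-up illustrative numbers, not measured ones:

```python
def balanced_device_map(component_sizes, devices):
    """Greedy placement: largest component first, onto the least-loaded device."""
    load = {d: 0.0 for d in devices}
    placement = {}
    for name, size in sorted(component_sizes.items(), key=lambda kv: -kv[1]):
        device = min(load, key=load.get)  # least-loaded device so far
        placement[name] = device
        load[device] += size
    return placement, load

sizes = {"transformer": 23.8, "text_encoder_2": 9.1, "text_encoder": 0.3, "vae": 0.2}
placement, load = balanced_device_map(sizes, ["cuda:0", "cuda:1", "cuda:2"])
print(placement)  # transformer alone on cuda:0, encoders/VAE spread over the rest
```

With 3x24 GB cards, the practical fixes for slow 'balanced' runs are usually keeping the whole denoising transformer on one card and moving only the encoders/VAE elsewhere, or using `pipe.enable_model_cpu_offload()` / `pipe.enable_sequential_cpu_offload()` instead of splitting across GPUs.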
huggingface/diffusers | 9,184 | What is the correct way to apply the dictionary with the control strengths (called “scales”) but with blocks? | ### Describe the bug
I have managed to apply the basic dictionary. as the documentation mentions
```
adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} }
pipe.set_adapters("Lora1", adapter_weight_scales)
```
and it already works for N number of LORAS that I want to load, for example
```
adapter_weight_scales_1 = { "unet": { "down": 0.5, "mid": 0, "up": 0} }
adapter_weight_scales_2 = { "unet": { "down": 0, "mid": 0, "up": 0.5} }
pipe.set_adapters(["Lora1", "Lora2"], [adapter_weight_scales_1, adapter_weight_scales_2])
```
it works for me correctly, and I get very good results in my images
### Reproduction
Now I'm trying to apply the scaling dictionary to LORA but with blocks, for example:
```
adapter_weight_scales_blocks_1 = {
'unet': {
'down': {
'block_0': [0.2, 0.5],
'block_1': [0.5, 0.2]},
'mid': {
'block_0': [0.2, 0.5],
'block_1': [0.5, 0.2]},
'up': {
'block_0': [0.2, 0.5],
'block_1': [0.5, 0.5, 0.2]
}
}
}
adapter_weight_scales_blocks_2 = {
'unet': {
'down': {
'block_0': [0.5, 0.5],
'block_1': [0.5, 0.5]},
'mid': {
'block_0': [0.5, 0.5],
'block_1': [0.5, 0.5]},
'up': {
'block_0': [0.5, 0.5],
'block_1': [0.5, 0.5, 0.5]
}
}
}
pipe.set_adapters(["Lora1", "Lora2"], [ adapter_weight_scales_blocks_1, adapter_weight_scales_blocks_2])
```
### Logs
```shell
but I am getting an error like this:
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora_base.py in set_adapters(self, adapter_names, adapter_weights)
571
572 if issubclass(model.__class__, ModelMixin):
--> 573 model.set_adapters(adapter_names, _component_adapter_weights[component])
574 elif issubclass(model.__class__, PreTrainedModel):
575 set_adapters_for_text_encoder(adapter_names, model, _component_adapter_weights[component])
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/peft.py in set_adapters(self, adapter_names, weights)
107 weights = scale_expansion_fn(self, weights)
108
--> 109 set_weights_and_activate_adapters(self, adapter_names, weights)
110
111 def add_adapter(self, adapter_config, adapter_name: str = "default") -> None:
/usr/local/lib/python3.10/dist-packages/diffusers/utils/peft_utils.py in set_weights_and_activate_adapters(model, adapter_names, weights)
264 else:
265 module.active_adapter = adapter_name
--> 266 module.set_scale(adapter_name, get_module_weight(weight, module_name))
267
268 # set multiple active adapters
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/layer.py in set_scale(self, adapter, scale)
278 # Ignore the case where the adapter is not in the layer
279 return
--> 280 self.scaling[adapter] = scale * self.lora_alpha[adapter] / self.r[adapter]
281
282 def scale_layer(self, scale: float) -> None:
TypeError: unsupported operand type(s) for *: 'dict' and 'float'
```
What would be the correct way to do it?
### System Info
I am using google colab,
diffusers version: 0.30.0
Python version: 3.10.
### Who can help?
Diffuser masters can help me understand how to use that feature: @sayakpaul, @yiyixuxu @asomoza | https://github.com/huggingface/diffusers/issues/9184 | closed | [
"bug"
] | 2024-08-15T06:05:42Z | 2024-08-17T00:54:28Z | null | Eduardishion |
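The traceback says a per-module scale was still a dict where PEFT expects a float — i.e. the nested block lists were not expanded down to the individual attention modules before reaching `set_scale`. A simplified re-implementation of the per-module resolution step helps debug which key in a nested dict fails to bottom out in a float (the real logic lives in `diffusers.utils.peft_utils.get_module_weight`, and the module names below are made up for illustration):

```python
def get_module_weight(weight, module_name):
    """Resolve a (possibly nested) scale spec to a float for one module."""
    if not isinstance(weight, dict):
        return float(weight)
    parts = module_name.split(".")
    # e.g. "down_blocks.1.attentions.0...." -> section "down", block index 1
    section = {"down_blocks": "down", "mid_block": "mid", "up_blocks": "up"}[parts[0]]
    spec = weight.get(section, 1.0)
    if not isinstance(spec, dict):
        return float(spec)
    block_idx = int(parts[1]) if parts[0] != "mid_block" else 0
    block = spec.get(f"block_{block_idx}", 1.0)
    if isinstance(block, list):
        attn_idx = int(parts[parts.index("attentions") + 1])
        return float(block[attn_idx])  # one float per attention module
    return float(block)

scales = {"down": {"block_0": [0.2, 0.5], "block_1": [0.5, 0.2]}, "mid": 0.0, "up": 1.0}
print(get_module_weight(scales, "down_blocks.1.attentions.0.processor"))  # 0.5
print(get_module_weight(scales, "mid_block.attentions.0.processor"))      # 0.0
```

If the expansion in your installed diffusers/peft versions never runs, every entry in a block list must still resolve to one float per module, which is what the `'dict' * float` error indicates is failing.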
huggingface/diffusers | 9,180 | Pipeline has no attribute '_execution_device' | ### Describe the bug
Hello, I implemented my own custom pipeline (RepDiffusionPipeline) referring to StableDiffusionPipeline, but there are some issues.
I called "accelerator.prepare" properly, and mapped the models onto the device (with ".to(accelerator.device)").
But when I call the pipeline and the '__call__' function is invoked, I sometimes get the error.
It is not only a problem when using multi-GPU; it also occurs when I use a single GPU.
For example, I defined my pipeline for my validation in training code like this:
```python
val_pipe = RepDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
unet=accelerator.unwrap_model(unet),
rep_encoder=accelerator.unwrap_model(rep_encoder),
vae=accelerator.unwrap_model(vae),
revision=None, variant=None, torch_dtype=weight_dtype, safety_checker=None
).to(accelerator.device)
```
then, when I called 'val_pipe' like this:
```
model_pred = val_pipe(
image = condition_original_image if args.val_mask_op else data["original_images"],
representation = representation,
prompt = "",
num_inference_steps = 20,
image_guidance_scale = 1.5,
guidance_scale = scale,
generator = generator
).images[0]
```
At that time, the error "RepDiffusionPipeline has no attribute '_execution_device'" occurs. (Not always, just randomly)
How can I solve this issue, or what part of my code should I suspect and fix?
Thank you for reading:)
### Reproduction
It occurs randomly, so there is no option to reproduce...
But when I call the defined pipeline, it occurs randomly.
### Logs
```shell
RepDiffusionPipeline has no attribute '_execution_device'
```
### System Info
I tried to test in various diffusers & python versions, but the problem still occurs.
In now, I am running my code in diffusers 0.27.2, python 3.10.14.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.2.2+cu121 with CUDA 1201 (you have 2.2.2+cu118)
Python 3.10.14 (you have 3.10.14)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
- `diffusers` version: 0.27.2
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- Huggingface_hub version: 0.24.3
- Transformers version: 4.43.3
- Accelerate version: 0.33.0
- xFormers version: 0.0.25.post1
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sayakpaul @yiyixuxu | https://github.com/huggingface/diffusers/issues/9180 | open | [
"bug",
"stale"
] | 2024-08-14T14:43:15Z | 2025-11-18T13:22:52Z | 33 | choidaedae |
huggingface/diffusers | 9,174 | [Quantization] bring quantization to diffusers core | Now that we have a working PoC (#9165) of NF4 quantization through `bitsandbytes` and also [this](https://huggingface.co/blog/quanto-diffusers) through `optimum.quanto`, it's time to bring in quantization more formally in `diffusers` 🎸
In this issue, I want to devise a rough plan to attack the integration. We are going to start with `bitsandbytes` and then slowly increase the list of our supported quantizers based on community interest. This integration will also allow us to do LoRA fine-tuning of large models like [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) through `peft` ([guide](https://huggingface.co/docs/peft/en/developer_guides/quantization)).
Three PRs are expected:
- [ ] Introduce a [base quantization config class](https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/base.py) like we have in `transformers`.
- [ ] Introduce `bitsandbytes` related utilities to handle processing, post-processing of layers for injecting `bitsandbytes` layers. Example is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/bitsandbytes.py).
- [ ] Introduce a `bitsandbytes` config ([example](https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/quantizer_bnb_4bit.py)) and quantization loader mixin aka `QuantizationLoaderMixin`. This loader will enable passing a quantization config to `from_pretrained()` of a `ModelMixin` and will tackle how to modify and prepare the model for the provided quantization config. This will also allow us to serialize the model according to the quantization config.
---
Notes:
* We could have done this with `accelerate` ([guide](https://huggingface.co/docs/accelerate/en/usage_guides/quantization)) but this doesn't yet support NF4 serialization.
* Good example PR: https://github.com/huggingface/transformers/pull/32306
---
@DN6 @SunMarc sounds good? | https://github.com/huggingface/diffusers/issues/9174 | closed | [
"quantization"
] | 2024-08-14T08:05:34Z | 2024-10-21T04:42:46Z | 15 | sayakpaul |
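The base config in the first planned PR is essentially a serializable dataclass that `from_pretrained()` can round-trip through the model's config. A minimal sketch of that shape — the field names mirror transformers' `BitsAndBytesConfig` but are assumptions for the diffusers port:

```python
from dataclasses import dataclass, asdict

@dataclass
class QuantizationConfigMixin:
    quant_method: str = "none"

    def to_dict(self):
        return asdict(self)

    @classmethod
    def from_dict(cls, config_dict):
        # ignore unknown keys so old configs stay loadable
        known = {k: v for k, v in config_dict.items() if k in cls.__dataclass_fields__}
        return cls(**known)

@dataclass
class BitsAndBytesConfig(QuantizationConfigMixin):
    quant_method: str = "bitsandbytes"
    load_in_4bit: bool = False
    bnb_4bit_quant_type: str = "fp4"
    bnb_4bit_use_double_quant: bool = False

cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
restored = BitsAndBytesConfig.from_dict(cfg.to_dict())
print(restored.bnb_4bit_quant_type)  # nf4
```

Serializing the config alongside the weights is what lets a later `from_pretrained()` rebuild the quantized layers correctly.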
huggingface/diffusers | 9,172 | why rebuild a vae in inference stage? | Thanks for your effort on diffusion models.
I want to know why we need to rebuild a VAE in the inference stage. I think it will introduce extra GPU memory cost.
https://github.com/huggingface/diffusers/blob/a85b34e7fdc0a5fceb11aa0fa6199bd9afaca396/examples/text_to_image/train_text_to_image_sdxl.py#L1217C16-L1223C24
| https://github.com/huggingface/diffusers/issues/9172 | open | [
"stale"
] | 2024-08-14T05:52:38Z | 2024-11-14T15:03:55Z | 2 | WilliammmZ |
huggingface/candle | 2,413 | How to load multiple safetensors with json format | For such a task:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/transformer
how should safetensors be loaded?
| https://github.com/huggingface/candle/issues/2413 | open | [] | 2024-08-14T04:50:37Z | 2025-06-11T19:05:05Z | null | oovm |
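Sharded checkpoints like that one ship a `*.safetensors.index.json` whose `"weight_map"` maps each tensor name to the shard file containing it; loading means reading the index, grouping tensor names by shard, and opening each shard once. A sketch of the index handling (shown in Python for brevity — in candle the resulting list of shard paths would feed something like `VarBuilder::from_mmaped_safetensors`; treat that call name as an assumption to verify):

```python
import json
from collections import defaultdict

index_json = """
{
  "metadata": {"total_size": 23800000000},
  "weight_map": {
    "transformer_blocks.0.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
    "transformer_blocks.0.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00003.safetensors",
    "transformer_blocks.18.ff.net.0.proj.weight": "diffusion_pytorch_model-00003-of-00003.safetensors"
  }
}
"""

def shards_for(index):
    """Group tensor names by the shard file that holds them."""
    by_shard = defaultdict(list)
    for tensor_name, shard in index["weight_map"].items():
        by_shard[shard].append(tensor_name)
    return dict(by_shard)

index = json.loads(index_json)
groups = shards_for(index)
print(sorted(groups))  # each shard file is opened exactly once
```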
huggingface/diffusers | 9,170 | do SDXL and ControlNet require more than 36 GB of GPU memory? | ### Describe the bug
https://github.com/huggingface/diffusers/blob/15eb77bc4cf2ccb40781cb630b9a734b43cffcb8/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
line73---line113
I run the demo with 24G GPU, then OOM everytime.
so I must run SDXl with 48G?
@yiyixuxu @sayakpaul @DN6 tks
### Reproduction
File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacity of 23.65 GiB of which 7.56 MiB is free. Process 3431486 has 18.91 GiB memory in use. Process 3081991 has 4.72 GiB memory in use. Of the allocated memory 4.09 GiB is allocated by PyTorch, and 171.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
### Logs
_No response_
### System Info
0.28?
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9170 | closed | [
"bug"
] | 2024-08-14T01:46:35Z | 2024-11-13T08:49:22Z | 3 | henbucuoshanghai |
huggingface/trl | 1,927 | how to use kto_pair loss in the latest version ? | I can see that kto_pair losstype is no longer available in the latest version of dpo trainer. You suggest to use ktotrainer instead.
But kto_pair loss worked much better than kto_trainer on my dataset, so how do I continue to use kto_pair if I'm using the latest version of the trl library?
thanks a lot! | https://github.com/huggingface/trl/issues/1927 | closed | [
"🏋 DPO",
"🏋 KTO"
] | 2024-08-13T15:59:25Z | 2024-10-20T16:56:21Z | null | vincezengqiang |
huggingface/autotrain-advanced | 728 | [BUG] Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead. How to mitigate this? | ### Prerequisites
- [X] I have read the [documentation](https://hf.co/docs/autotrain).
- [X] I have checked other issues for similar problems.
### Backend
Local
### Interface Used
CLI
### CLI Command
```
!autotrain --config path-to.yml
```
```
task: llm-sft
base_model: teknium/OpenHermes-2.5-Mistral-7B
project_name: XXX
log: none
backend: local
data:
path: /content
train_split: train
valid_split: null
chat_template: null
column_mapping:
text_column: text
params:
block_size: 256
model_max_length: 512
epochs: 1
batch_size: 2
lr: 3e-5
peft: true
quantization: int4
target_modules: all-linear
padding: right
optimizer: adamw_torch
scheduler: cosine
gradient_accumulation: 1
mixed_precision: none
unsloth: true
lora_r: 16
lora_alpha: 16
lora_dropout: 0
hub:
username: abc
token: hf_XXX
push_to_hub: false
```
### UI Screenshots & Parameters
_No response_
### Error Logs
```
Loading checkpoint shards: 100% 2/2 [01:21<00:00, 40.56s/it]
INFO | 2024-08-13 04:46:20 | autotrain.trainers.clm.utils:get_model:666 - model dtype: torch.float16
INFO | 2024-08-13 04:46:20 | autotrain.trainers.clm.train_clm_sft:train:37 - creating trainer
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py:100: FutureWarning: Deprecated argument(s) used in '__init__': dataset_text_field, max_seq_length, packing. Will not be supported from version '1.0.0'.
Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead.
warnings.warn(message, FutureWarning)
/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:192: UserWarning: You passed a `packing` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:280: UserWarning: You passed a `max_seq_length` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:318: UserWarning: You passed a `dataset_text_field` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
```
### Additional Information
I am not sure why this pops up. I know this is just a UserWarning and the model is able to fine-tune OK, but is anything being affected?
"bug"
] | 2024-08-13T05:00:10Z | 2024-08-13T12:31:19Z | null | jackswl |
huggingface/diffusers | 9,164 | the dog example of train_dreambooth_lora_flux.py can not convergence | ### Describe the bug
```
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux-lora"
accelerate launch train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--mixed_precision="bf16" \
--instance_prompt="a photo of sks dog" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--learning_rate=1e-5 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--validation_prompt="A photo of sks dog in a bucket" \
--validation_epochs=25 \
--seed="0" \
--push_to_hub
```
I followed this command to train a LoRA for FLUX.1-dev and downloaded the dog example from Hugging Face, but this setting does not give a good result; the loss looks normal.

The dog example looks like this:

but my result looks like this:

and generating an image with the same prompt without the LoRA looks like this:

### Reproduction
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("/opt/ml/volume/default/aigc/project/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
pipe.lora_state_dict("/opt/ml/volume/default/aigc/project/diffusers/examples/dreambooth/trained-flux-lora/checkpoint-500")
prompts = []
prompts.append("an sks dog")
index = 0
for prompt in prompts:
    image = pipe(
        prompt=prompt,
        num_inference_steps=20,
        guidance_scale=7.5,
        max_sequence_length=512,
        width=1152,
        height=768
    ).images[0]
    save_file = "dog" + str(index) + '.png'
    index += 1
    image.save(save_file)
```
### Logs
_No response_
### System Info
ubuntu 20.04
### Who can help?
@sayakpaul @linoytsaban | https://github.com/huggingface/diffusers/issues/9164 | closed | [
"bug"
] | 2024-08-13T03:08:10Z | 2024-08-13T10:23:23Z | 7 | chongxian |
huggingface/text-embeddings-inference | 380 | How do i deploy to vertex ? | How do i deploy to vertex ? I think i saw some feature=google setting in code which supports compatibility with vertex . Please guide. | https://github.com/huggingface/text-embeddings-inference/issues/380 | closed | [] | 2024-08-12T17:15:30Z | 2024-10-17T10:19:02Z | null | pulkitmehtaworkmetacube |
huggingface/trl | 1,916 | How to Add PEFT to PPO Trainer or PPO Config | I am trying to implement RLHF through PPO.
May I ask how I can use PEFT in RLHF/PPO? I can see this parameter in DPOTrainer; however, I cannot see it in PPOTrainer.
| https://github.com/huggingface/trl/issues/1916 | closed | [
"✨ enhancement",
"🧒 good second issue",
"🏋 PPO"
] | 2024-08-12T01:02:07Z | 2024-11-18T10:54:10Z | null | ZhichaoWang970201 |
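In trl the usual route is to build the policy with an adapter already attached — e.g. passing a `peft_config=LoraConfig(...)` into `AutoModelForCausalLMWithValueHead.from_pretrained(...)` before handing the model to `PPOTrainer` (treat the exact signature as an assumption to verify against your trl version). The effect is simply "freeze the base weights, train only small adapters"; a dependency-free toy of that accounting:

```python
class Linear:
    """Stand-in for a weight matrix; just tracks size and trainability."""
    def __init__(self, n_params):
        self.n_params, self.trainable = n_params, True

class ToyLM:
    def __init__(self):
        self.layers = {"attn": Linear(1000), "mlp": Linear(4000)}

def add_lora(model, rank_params=64):
    # freeze the base weights, bolt a small trainable adapter onto each layer
    adapters = {}
    for name, layer in model.layers.items():
        layer.trainable = False
        adapters[name] = Linear(rank_params)
    model.adapters = adapters
    return model

def trainable_params(model):
    n = sum(l.n_params for l in model.layers.values() if l.trainable)
    n += sum(a.n_params for a in getattr(model, "adapters", {}).values())
    return n

m = add_lora(ToyLM())
print(trainable_params(m), "of", sum(l.n_params for l in m.layers.values()))  # 128 of 5000
```

PPO then only updates the adapter parameters, which is why PEFT works even though PPOTrainer itself exposes no dedicated flag for it.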
huggingface/trl | 1,915 | How to dpo llava? | Thank you for great work!
I ran DPO on LLaVA using the raw `/trl/examples/scripts/dpo_visual.py` script with the command
`CUDA_VISIBLE_DEVICES=0 accelerate launch examples/scripts/dpo_visual.py --dataset_name HuggingFaceH4/rlaif-v_formatted --model_name_or_path llava-hf/llava-1.5-7b-hf --per_device_train_batch_size 1 --gradient_accumulation_steps 64 --dataset_num_proc 32 --output_dir dpo_llava --bf16 --torch_dtype bfloat16 --gradient_checkpointing --use_peft --lora_target_modules=all-linear`
however, I got an error such as
> multiprocess.pool.RemoteTraceback:
> """
> Traceback (most recent call last):
> File "/root/anaconda3/lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
> result = (True, func(*args, **kwds))
> ^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
> for i, result in enumerate(func(**kwargs)):
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3522, in _map_single
> example = apply_function_on_filtered_inputs(example, i, offset=offset)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
> processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/trl/trainer/dpo_trainer.py", line 808, in tokenize_row
> prompt_tokens = self.processor(prompt, images=images, add_special_tokens=False)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> TypeError: LlavaProcessor.__call__() got an unexpected keyword argument 'add_special_tokens'
> """
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/trl/examples/scripts/dpo_visual.py", line 178, in <module>
> trainer = DPOTrainer(
> ^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
> return f(*args, **kwargs)
> ^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/trl/trainer/dpo_trainer.py", line 529, in __init__
> train_dataset = train_dataset.map(self.tokenize_row, num_proc=self.dataset_num_proc, writer_batch_size=10)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3253, in map
> for rank, done, content in iflatmap_unordered(
> File "/root/anaconda3/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 718, in iflatmap_unordered
> [async_result.get(timeout=0.05) for async_result in async_results]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/multiprocess/pool.py", line 774, in get
> raise self._value
> TypeError: LlavaProcessor.__call__() got an unexpected keyword argument 'add_special_tokens'
> Traceback (most recent call last):
> File "/root/anaconda3/bin/accelerate", line 8, in <module>
> sys.exit(main())
> ^^^^^^
> File "/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
> args.func(args)
> File "/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py", line 1106, in launch_command
> simple_launcher(args)
> File "/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py", line 704, in simple_launcher
> raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
> subprocess.CalledProcessError: Command '['/root/anaconda3/bin/python', 'examples/scripts/dpo_visual.py', '--dataset_name', 'HuggingFaceH4/rlaif-v_formatted', '--model_name_or_path', 'llava-hf/llava-1.5-7b-hf', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '64', '--dataset_num_proc', '32', '--output_dir', 'dpo_llava', '--bf16', '--torch_dtype', 'bfloat16', '--gradient_checkpointing', '--use_peft', '--lora_target_modules=all-linear']' returned non-zero exit status 1.
Is there a solution? | https://github.com/huggingface/trl/issues/1915 | closed | [] | 2024-08-11T00:57:38Z | 2024-08-11T01:23:16Z | null | ooooohira |
huggingface/transformers.js | 887 | VSCode Interpolation | ### Question
I'm finding that VSCode is extremely slow when reading type definitions from the `@xenova/transformers` path. Is there anything I might be doing wrong? I've noticed that it uses JS comments to define the types instead of a type definition file, is the issue I am having a known issue with using that type of markup? | https://github.com/huggingface/transformers.js/issues/887 | closed | [
"question"
] | 2024-08-11T00:08:30Z | 2024-08-25T01:55:36Z | null | lukemovement |
huggingface/diffusers | 9,140 | Diffusers model not working as good as repo ckpt model | Hi,
When I run the models Stable Diffusion v1-5 or InstructPix2Pix through the diffusers pipeline with .from_pretrained(), it downloads the models from Hugging Face, and I use the inference code given on Hugging Face. The results are not good at all: there is still noise in the generated images.
But when I run these models using their GitHub repo code and the ckpt models they provide, the outputs are very good.
Is there any solution to this, or any other way to use the diffusers library pipeline?
Also, diffusers.StableDiffusionInstructPix2PixPipeline does not have a .from_single_file() option.
Thank you
| https://github.com/huggingface/diffusers/issues/9140 | closed | [
"stale"
] | 2024-08-09T09:34:30Z | 2024-12-14T12:13:15Z | 6 | kunalkathare |
huggingface/diffusers | 9,136 | IP adapter output on some resolutions suffers in quality? | ### Describe the bug
I am running the IP adapter at 768x1344, which is one of the listed SDXL resolutions. I find that the output quality is much lower than, say, regular 768x768 generations. I've attached sample images and code below. In this experiment 1080x768 seemed to give the best output, but it's not one of the supported resolutions @asomo





### Reproduction
```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, UniPCMultistepScheduler
from diffusers.image_processor import IPAdapterMaskProcessor
from transformers import CLIPVisionModelWithProjection
from controlnet_aux import AnylineDetector
import cv2
import numpy as np
from PIL import Image, ImageOps
from huggingface_hub import hf_hub_download


def create_controlnet_pipes(image_encoder=None) -> StableDiffusionXLControlNetPipeline:
    ## get controlnet
    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0",
        torch_dtype=torch.float16,
        use_safetensors=True,
    )
    pipe = StableDiffusionXLPipeline.from_single_file(
        "sdxl model path",
        add_watermarker=False,
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
        image_encoder=image_encoder,
    )
    pipe = StableDiffusionXLControlNetPipeline(
        controlnet=controlnet,
        **pipe.components,
        add_watermarker=False,
    )
    pipe = pipe.to("cuda")
    return pipe


def canny(image):
    image = np.array(image)
    low_threshold = 100
    high_threshold = 200
    image = cv2.Canny(image, low_threshold, high_threshold)
    image = image[:, :, None]
    image = np.concatenate([image, image, image], axis=2)
    return Image.fromarray(image)


if __name__ == '__main__':
    ## crop different values like 0,0,1080,768 or 0,0,1280,768
    ref_image = Image.open('images/fridge_fg.png').crop((0, 0, 1344, 768))
    bg_ref_image = Image.open('images/fridge_bg.png').crop((0, 0, 1344, 768))
    mask_new = Image.open('images/fridge_mask.png').convert('L').crop((0, 0, 1344, 768))
    inv_mask = Image.open('images/fridge_inv_mask.png').convert('L').crop((0, 0, 1344, 768))

    processor = IPAdapterMaskProcessor()
    mask_fg = processor.preprocess([mask_new])
    mask_fg = mask_fg.reshape(1, mask_fg.shape[0], mask_fg.shape[2], mask_fg.shape[3])
    mask_bg = processor.preprocess([inv_mask])
    mask_bg = mask_bg.reshape(1, mask_bg.shape[0], mask_bg.shape[2], mask_bg.shape[3])

    canny_pil = Image.open('images/fridge_canny.png').crop((0, 0, 1344, 768))

    image_encoder = CLIPVisionModelWithProjection.from_pretrained(
        "h94/IP-Adapter",
        subfolder="models/image_encoder",
        torch_dtype=torch.float16
    )
    pipe = create_controlnet_pipes(image_encoder=image_encoder)
    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus_sdxl_vit-h.safetensors"], use_safetensors=True)

    scale_config_fg = {'down': 1, 'mid': 1, 'up': 1}
    scale_config_bg = {"down": 0.7, 'mid': 0.7, 'up': 0.7}
    pipe.set_ip_adapter_scale([scale_config_fg, scale_config_bg])

    for idx in range(5):
        outputs = pipe(
            prompt='kitchen scene',
            image=canny_pil,
            ip_adapter_image=[ref_image, bg_ref_image],
            negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality, fuzzy, blurry",
            guidance_scale=5,
            num_inference_steps=30,
            controlnet_conditioning_scale=0.53,
            cross_attention_kwargs={"ip_adapter_masks": [mask_fg, mask_bg]},
            num_images_per_prompt=1
            # generator=generator,
        ).images
        for image in outputs:
            image.save(<path>)
            # image.save(f'output_plus/fridge_ar_ctrlnet_1280_plus_{idx}.png')
    print('done')
    pipe.unload_ip_adapter()
```
### Logs
_No response_
### System Info
v0.28.2 diffusers
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9136 | open | [
"bug",
"stale"
] | 2024-08-09T06:36:39Z | 2024-09-14T15:03:17Z | 2 | darshats |
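When probing resolutions like this, it helps to snap each crop to the nearest SDXL training bucket (roughly a 1024x1024 pixel budget with sides divisible by 64) rather than trying arbitrary widths. A small helper — the bucket list below is the commonly cited SDXL set, which is an assumption to check against the checkpoint's training config:

```python
SDXL_BUCKETS = [(1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
                (1344, 768), (768, 1344), (1536, 640), (640, 1536)]

def nearest_bucket(width, height):
    """Pick the bucket whose aspect ratio is closest to the requested one."""
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_bucket(1080, 768))  # closest supported shape for a 1080x768 crop
```

Generating at the snapped bucket and resizing/cropping afterwards often keeps quality closer to the square baseline than generating at an off-bucket size directly.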
huggingface/transformers.js | 885 | TimeSformer on the web | ### Question
Glad to see this repo! If I want to use TimeSformer on the web, do you have any suggestions or a guide? Can I learn from this repo, or is it a totally different thing? Thanks in advance!
"question"
] | 2024-08-08T17:59:13Z | 2024-08-11T09:02:47Z | null | tomhsiao1260 |
huggingface/cookbook | 163 | Incorrect markdown table rendering in Colab in "How to use Inference Endpoints to Embed Documents" | There is an issue with the rendering of the Inference Endpoints table in Colab in [How to use Inference Endpoints to Embed Documents](https://huggingface.co/learn/cookbook/automatic_embedding_tei_inference_endpoints). Although the table correctly renders on HF cookbook webpage:
<img width="610" alt="image" src="https://github.com/user-attachments/assets/e32731fb-31e1-4a5d-8a35-a230b1bea50c">
when opening with Colab with the upper "Open in Colab" button, the rows are rendered incorrectly:
<img width="583" alt="image" src="https://github.com/user-attachments/assets/65d76a12-bd4d-41ce-93d9-4c0b19986bdf">
| https://github.com/huggingface/cookbook/issues/163 | closed | [] | 2024-08-08T11:16:40Z | 2024-08-08T16:22:48Z | null | sergiopaniego |
huggingface/alignment-handbook | 192 | Constant training loss in the model adapter card | Hello,
I could fine-tune a model using a small dataset and I see that the validation loss decreases, while the training loss remains the same in the model card.
I don't think this is normal, even though the new task I try to teach the model is similar to what it already does, I think it should be able to learn from the dataset. I took a look at the trainer_state.json file created during the fine-tuning process and I saw that the training_loss for step 2 is different from the one displayed in the model card.
**Results from model_card:**
|Training Loss | Epoch | Step | Validation Loss|
|-------|-------|-------|-------|
|1.3185 | 1.0 | 1 | 1.4256|
|1.3185 | 1.1429 | 2 | 1.3196|
**Results from the trainer_state.json:**
"log_history": [
{
"epoch": 1.0,
"grad_norm": 1.1992276906967163,
"learning_rate": 0.0002,
"loss": 1.3185,
"step": 1
},
{
"epoch": 1.0,
"eval_loss": 1.4256268739700317,
"eval_runtime": 1.7474,
"eval_samples_per_second": 1.145,
"eval_steps_per_second": 0.572,
"step": 1
},
{
"epoch": 1.1428571428571428,
"eval_loss": 1.3196333646774292,
"eval_runtime": 1.552,
"eval_samples_per_second": 1.289,
"eval_steps_per_second": 0.644,
"step": 2
},
{
"epoch": 1.1428571428571428,
"step": 2,
"total_flos": 823612516859904.0,
"train_loss": 0.7439389228820801,
"train_runtime": 27.974,
"train_samples_per_second": 0.5,
"train_steps_per_second": 0.071
}
Does the training loss remain the same, or is there a problem with the model card generation?
Have a nice day! | https://github.com/huggingface/alignment-handbook/issues/192 | closed | [] | 2024-08-08T09:35:40Z | 2024-08-08T13:29:00Z | 1 | Michelet-Gaetan |
huggingface/optimum | 1,985 | Correct example to use TensorRT? | ### System Info
```shell
optimum: 1.20.0
os: ubuntu 20.04 with RTX 2080TI
python: 3.10.14
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
I followed the doc [here](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/gpu#tensorrtexecutionprovider). The below is my code:
```python
from transformers import AutoProcessor
from optimum.onnxruntime import ORTModelForVision2Seq
model = 'facebook/nougat-small'
ort_model = ORTModelForVision2Seq.from_pretrained(
"facebook/nougat-small",
export=True,
provider="TensorrtExecutionProvider",
)
assert ort_model.providers == ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
processor = AutoProcessor.from_pretrained(model)
ort_model.save_pretrained('./nougat-small-trt')
processor.save_pretrained('./nougat-small-trt')
```
When running the code, the terminal looks like:
```
2024-08-08 16:31:02.881585368 [W:onnxruntime:Default, tensorrt_execution_provider.h:83 log] [2024-08-08 08:31:02 WARNING] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
```
Exporting the model took almost half an hour (RTX 2080 Ti). However, when I loaded it with the code below, it just repeated the same conversion process.
```python
import os

import onnxruntime as ort
from optimum.onnxruntime import ORTModelForVision2Seq

session_options = ort.SessionOptions()
session_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
session_options.log_severity_level = 3

trt_engine_cache = './nougat-small-trt-cache'
os.makedirs(trt_engine_cache, exist_ok=True)
provider_options = {
    'trt_engine_cache_enable': True,
    'trt_engine_cache_path': trt_engine_cache
}

ort_model = ORTModelForVision2Seq.from_pretrained(
    model,  # the exported model directory
    provider='TensorrtExecutionProvider',
    provider_options=provider_options,
    session_options=session_options,
)
```
Therefore, I want to know whether Optimum actually supports TensorRT, or whether something is wrong with the official doc for running TensorRT.
### Expected behavior
When loading the converted model with TensorRT, Optimum should not repeat the conversion process.
| https://github.com/huggingface/optimum/issues/1985 | open | [
"bug"
] | 2024-08-08T08:46:14Z | 2024-08-29T11:24:35Z | 2 | sherlcok314159 |
huggingface/diffusers | 9,127 | flux.1-dev device_map didn't work | I tried to use device_map to use multiple GPUs, but it didn't work. How can I use all my GPUs?
| https://github.com/huggingface/diffusers/issues/9127 | closed | [] | 2024-08-08T08:30:33Z | 2024-11-26T02:11:03Z | 33 | hznnnnnn |
huggingface/diffusers | 9,120 | [ar] Translating docs to Arabic (العربية) | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Arabic-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about Diffusers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ar` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Register your translation in `ar/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).
Thank you so much for your help! 🤗
| https://github.com/huggingface/diffusers/issues/9120 | closed | [] | 2024-08-07T21:04:54Z | 2024-10-29T08:14:24Z | 2 | AhmedAlmaghz |
huggingface/chat-ui | 1,394 | I need to reload to get the response | 
I am using Llama 3.1 70B to chat, but it is very slow to get a response and I need to reload to get one. Is it because the model is overloaded? | https://github.com/huggingface/chat-ui/issues/1394 | closed | [
"support"
] | 2024-08-07T09:31:03Z | 2024-08-15T06:56:59Z | 2 | renaldy-therry |
huggingface/chat-ui | 1,393 | Generation Error with Ollama - Inconsistent Output Generation | Hi,
I'm experiencing issues while running GEMMA2 on Ollama. Specifically, I'm encountering the following problems:
Error on Message Generation:
Whenever a new chat is created, every message results in the error:
Error: Generation failed, in the back end
No output is generated,on the front end.
Inconsistent Message Handling:
After retrying the same message multiple times (ranging from 2 to 15 attempts), the message is eventually processed correctly and the output is displayed on the front end.
Server Responsiveness:
Despite the above issues, the server responds to every query.
Expected Behavior:
Messages should be processed and output generated on the first attempt without errors.
Additional Context:
Ollama Version: 0.3.3
GEMMA2:2b (I've tried others models and the problem is the same)
Operating System: CentOS
Relevant Logs:
error message:
ERROR (537688): Generation failed
err: {
"type": "Error",
"message": "Generation failed",
"stack":
Error: Generation failed
at Module.generateFromDefaultEndpoint (/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:23:9)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async generateTitle (/chat-ui/src/lib/server/textGeneration/title.ts:54:10)
at async Module.generateTitleForConversation (/chat-ui/src/lib/server/textGeneration/title.ts:17:19)
It's something with the title of the conversation, but after retrying the message the conversation's name eventually gets changed too. Messages sent after the conversation's name is changed have the same problem; it rarely works on the first attempt.
My env.local:
MONGODB_URL="mongodb://localhost:27017"
HF_TOKEN=Mytoken
OPENAI_API_KEY="ollama"
MODELS=`[
{
"name": "google/gemma-2-2b-it",
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"max_new_tokens": 2048,
"stop": ["<end_of_turn>"]
},
"endpoints": [
{
"type": "ollama",
"baseURL": "http://127.0.0.1:11434",
"ollamaName" : "gemma2:2b"
}
]
},
]`
USE_LOCAL_WEBSEARCH=true
Any assistance in resolving this issue would be greatly appreciated. Thank you! | https://github.com/huggingface/chat-ui/issues/1393 | open | [
"support"
] | 2024-08-07T09:02:19Z | 2024-08-07T11:05:19Z | 1 | juanjuanignacio |
huggingface/chat-ui | 1,392 | Cannot send the message and get response in hugging chat | I cannot send a message and get a response from the LLM, and I cannot click "activate" to change the model in HuggingChat (https://huggingface.co/chat/) | https://github.com/huggingface/chat-ui/issues/1392 | closed | [
"support",
"huggingchat"
] | 2024-08-07T08:37:01Z | 2024-08-07T09:06:59Z | 4 | renaldy-therry |
huggingface/text-embeddings-inference | 371 | how to support a SequenceClassification model | ### Feature request
I have a model that can be run with transformers.AutoModelForSequenceClassification.from_pretrained; how can I serve it in TEI?
### Motivation
to support more models
### Your contribution
YES | https://github.com/huggingface/text-embeddings-inference/issues/371 | closed | [] | 2024-08-06T10:45:00Z | 2024-10-17T10:24:09Z | null | homily707 |
huggingface/chat-ui | 1,387 | CopyToClipBoardBtn in ChatMessage.svelte has a bug? | https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/chat/ChatMessage.svelte#L378-L384
When compared to other components, classNames is the only difference here.
When rendered, the icon appears faint in the browser.
Is there a reason for this, or is it a bug?
https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/CopyToClipBoardBtn.svelte#L37-L51
It seems that the classNames of IconCopy is the cause of the faintness. | https://github.com/huggingface/chat-ui/issues/1387 | closed | [
"bug",
"good first issue",
"front"
] | 2024-08-06T04:59:45Z | 2024-08-12T09:35:21Z | 5 | calycekr |
huggingface/diffusers | 9,092 | Fluxpipeline report model_index.json not found | ### Describe the bug
I use the FluxPipeline and it reports that the file model_index.json is not found.
I read another issue and set `revision="refs/pr/3"`, but it doesn't work. What can I do to solve this problem, and how can I use the T5-XXL as the text encoder? Thanks for your help.
### Reproduction
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("/opt/ml/volume/default/aigc/project/chanPin/models/flux", revision="refs/pr/3",torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
prompt = "a tiny astronaut hatching from an egg on the moon"
out = pipe(
prompt=prompt,
guidance_scale=3.5,
height=768,
width=1360,
num_inference_steps=50,
).images[0]
out.save("image.png")
```
### Logs
_No response_
### System Info
ubuntu 20.04
### Who can help?
@sayakpaul | https://github.com/huggingface/diffusers/issues/9092 | closed | [
"bug"
] | 2024-08-06T01:48:40Z | 2024-08-06T02:25:03Z | 3 | chongxian |
huggingface/trl | 1,900 | How to speed up PPOTrainer .generate()? | During PPO, I'm finding that `.generate()` is extremely slow. The following call takes ~3 and a half minutes for batch size of 64 with a 1.4B parameter policy LM:
```
ppo_trainer.generate(
input_token_ids_list,
pad_token_id=policy_model_tokenizer.eos_token_id,
return_prompt=False,
**generation_config_dict,
)
```
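For reference, a batch-chunking wrapper around such a call can be sketched in a few lines (`generate_fn` here is a hypothetical stand-in for the generate call above, not a trl API); smaller chunks often reduce the padding/straggler overhead where one long sample forces the whole batch to keep decoding:

```python
def generate_in_chunks(generate_fn, inputs, chunk_size=16):
    """Run generation on smaller chunks; each chunk finishes as soon as its longest sample does."""
    outputs = []
    for i in range(0, len(inputs), chunk_size):
        outputs.extend(generate_fn(inputs[i:i + chunk_size]))
    return outputs

# Toy usage with a fake generator:
print(generate_in_chunks(lambda xs: [x * 2 for x in xs], list(range(5)), chunk_size=2))  # [0, 2, 4, 6, 8]
```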
How can I accelerate sampling? The same function call with `vllm` takes <30s for setup and execution, so I feel like I am doing something suboptimally. | https://github.com/huggingface/trl/issues/1900 | closed | [] | 2024-08-05T18:35:31Z | 2024-10-01T06:35:50Z | null | RylanSchaeffer |
huggingface/chat-ui | 1,386 | System role problem running Gemma 2 on vLLM | Hello,
I'm running chat-ui and trying some models. With Phi-3 and Llama I had no problem, but when I run Gemma 2 in vLLM I'm not able to make any successful API request,
in env.local:
{
"name": "google/gemma-2-2b-it",
"id": "google/gemma-2-2b-it",
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 2048,
"stop": ["<end_of_turn>"]
},
"endpoints": [
{
"type": "openai",
"baseURL": "http://127.0.0.1:8000/v1",
}
]
}
and I always have the same response in vllm server:
ERROR 08-05 12:39:06 serving_chat.py:118] Error in applying chat template from request: System role not supported
INFO: 127.0.0.1:42142 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
Does someone know if I have to change the chat template (and how), or whether I can deactivate the system role? Is it a vLLM problem or a chat-ui problem?
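One generic client-side workaround (a sketch, not chat-ui's actual code) is to fold the system prompt into the first user turn before the request is sent, since Gemma's chat template has no system role:

```python
def merge_system_into_first_user(messages):
    """Fold any system messages into the first user message (Gemma has no system role)."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [dict(m) for m in messages if m["role"] != "system"]
    if system_parts and rest and rest[0]["role"] == "user":
        rest[0]["content"] = "\n".join(system_parts) + "\n\n" + rest[0]["content"]
    return rest

msgs = [{"role": "system", "content": "Be brief."},
        {"role": "user", "content": "Hi!"}]
print(merge_system_into_first_user(msgs))
```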
Thank U! | https://github.com/huggingface/chat-ui/issues/1386 | closed | [
"support"
] | 2024-08-05T13:22:10Z | 2024-11-07T21:39:47Z | 5 | juanjuanignacio |
huggingface/optimum | 1,981 | [GPTQQuantizer] How to use multi-GPU for GPTQQuantizer? | ### System Info
```shell
hello:
I encountered an out-of-memory error while attempting to quantize a model using GPTQQuantizer. The error seems to be related to the large size of the model weights. Below is the quantization code I used:
from optimum.gptq import GPTQQuantizer
quantizer = GPTQQuantizer(
bits=4,
dataset='wikitext2',
block_name_to_quantize=decoder.layers,
disable_exllama=False,
damp_percent=0.1,
group_size=128
)
The error message I received is as follows:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 784.00 MiB. GPU 0 has a total capacty of 10.90 GiB of which 770.44 MiB is free. Including non-PyTorch memory
Environment:
· Transformers version: 4.43.2
· Optimum version: 1.21.2
· GPU model and memory: 11GiB * 2
· CUDA version: 12.4
Question: How can I use multiple GPUs with GPTQQuantizer? Thank you!
```
### Who can help?
@kashif @srush @danieldk @mausch @dmaniloff How to use multi-GPU for GPTQQuantizer?
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
from optimum.gptq import GPTQQuantizer
```python
quantizer = GPTQQuantizer(
bits=4,
dataset='wikitext2',
block_name_to_quantize=decoder.layers,
disable_exllama=False,
damp_percent=0.1,
group_size=128
)
```
### Expected behavior
use multi-GPU for GPTQQuantizer? | https://github.com/huggingface/optimum/issues/1981 | closed | [
"bug"
] | 2024-08-05T07:58:11Z | 2024-08-08T02:19:18Z | null | RunTian1 |
huggingface/datasets | 7,087 | Unable to create dataset card for Lushootseed language | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?
### Motivation
I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents.
### Your contribution
I can submit a pull request | https://github.com/huggingface/datasets/issues/7087 | closed | [
"enhancement"
] | 2024-08-04T14:27:04Z | 2024-08-06T06:59:23Z | 2 | vaishnavsudarshan |
huggingface/diffusers | 9,076 | Add a better version of 'callback_on_step_end' for FluxPipeline | **Is your feature request related to a problem? Please describe.**
There is a huge delay before inference starts, and another delay after the 4th step completes, with no callback for either, so it feels like it is stuck. I just want a more responsive version.
```
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
guidance_scale=0.0,
output_type="pil",
num_inference_steps=4,
max_sequence_length=256,
generator=torch.Generator("cuda").manual_seed(0)
).images[0]
print('started saving file')
image.save("flux-schnell.png")
```
If you run the above code, it feels like you are stuck at step 0, and then again after step 4/4 is done.
I am using a 48GB A40
**Describe the solution you'd like.**
Can we get some kind of callback for these two delays as well?
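For the per-step part, a rough sketch of what can already be wired up with the existing `callback_on_step_end` hook (the pre-inference and post-step-4 delays would still need new hooks, which is what this request asks for):

```python
import time

start = time.time()

def on_step_end(pipe, step, timestep, callback_kwargs):
    # Called by the pipeline after each denoising step; must return the kwargs dict.
    print(f"step {step} done at t+{time.time() - start:.1f}s (timestep {timestep})")
    return callback_kwargs

# Hypothetical usage with the pipeline above:
# image = pipe(prompt, num_inference_steps=4, callback_on_step_end=on_step_end).images[0]
```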
| https://github.com/huggingface/diffusers/issues/9076 | closed | [
"stale"
] | 2024-08-04T10:34:04Z | 2024-11-23T00:24:14Z | 3 | nayan-dhabarde |
huggingface/diffusers | 9,069 | TypeError: expected np.ndarray (got numpy.ndarray) | ### Describe the bug
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly vary.
# Refer to the pipeline documentation for more details.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.png")
```
with this code, it reports the following error:
```
(flux) xiangyu@gpu06:~/st/flux$ python gen.py
Loading pipeline components...: 0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
File "/scr/user/xiangyu/flux/gen.py", line 4, in <module>
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
loaded_sub_model = load_sub_model(
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py", line 157, in from_pretrained
return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 260, in from_config
model = cls(**init_dict)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 653, in inner_init
init(self, *args, **init_kwargs)
File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 76, in __init__
timesteps = torch.from_numpy(timesteps).to(dtype=torch.float32)
TypeError: expected np.ndarray (got numpy.ndarray)
```
### Reproduction
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly vary.
# Refer to the pipeline documentation for more details.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.png")
```
with this code, it reports the same error as shown in the logs above. | https://github.com/huggingface/diffusers/issues/9069 | closed | [
"bug"
] | 2024-08-03T12:45:03Z | 2024-10-27T06:43:32Z | 11 | xiangyumou |
huggingface/evaluate | 611 | How to customize my own evaluator and metrics? | I'm facing a task on VQA, where I need to compute [VQA accuracy](https://visualqa.org/evaluation.html) as follows:
```math
\text{Acc}(ans) = \min{ \left\{ \frac{\text{\# humans that said } ans }{3}, 1 \right\} }
```
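For reference, a direct Python sketch of this formula (assuming `human_answers` is the list of human annotations for one question):

```python
def vqa_accuracy(prediction, human_answers):
    """VQA accuracy for one answer: min(#humans that said it / 3, 1)."""
    matches = sum(1 for ans in human_answers if ans == prediction)
    return min(matches / 3.0, 1.0)

print(vqa_accuracy("cat", ["cat", "cat", "dog", "cat", "bird"]))  # 1.0
```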
I have the following questions:
1. Do I need to customize my own metric? If so, can I only create `metrics/vqa_accuracy/vqa_accuracy.py` without other operations, such as running `evaluate-cli create "accuracy name" --module_type "metric"`?
2. I found that there is no suitable `evaluator` for my task, and I'm not sure if it is possible to customize my own `evaluator`, since I didn't find any documentation on creating a new `evaluator`. | https://github.com/huggingface/evaluate/issues/611 | closed | [] | 2024-08-02T08:37:47Z | 2024-08-15T02:26:30Z | null | Kamichanw |
huggingface/diffusers | 9,055 | ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders' | ### Describe the bug
I get this error with diffusers versions 0.25, 0.26, 0.27, 0.28, and 0.29. How can I solve it?
### Reproduction
import ast
import gc
import inspect
import math
import warnings
from collections.abc import Iterable
from typing import Any, Callable, Dict, List, Optional, Union
import torch
import torch.nn.functional as F
from packaging import version
from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection
from diffusers.configuration_utils import FrozenDict
from diffusers.image_processor import PipelineImageInput, VaeImageProcessor
from diffusers.loaders import (
FromSingleFileMixin,
IPAdapterMixin,
StableDiffusionLoraLoaderMixin,
TextualInversionLoaderMixin,
)
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.models.attention import Attention, GatedSelfAttentionDense
from diffusers.models.attention_processor import AttnProcessor2_0
from diffusers.models.lora import adjust_lora_scale_text_encoder
from diffusers.pipelines import DiffusionPipeline
from diffusers.pipelines.pipeline_utils import StableDiffusionMixin
from diffusers.pipelines.stable_diffusion.pipeline_output import StableDiffusionPipelineOutput
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.schedulers import KarrasDiffusionSchedulers
from diffusers.utils import (
USE_PEFT_BACKEND,
deprecate,
logging,
replace_example_docstring,
scale_lora_layers,
unscale_lora_layers,
)
from diffusers.utils.torch_utils import randn_tensor
### Logs
```shell
Traceback (most recent call last):
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/workspace/llm_sd.py", line 149, in <module>
llm_sd(args=args)
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/workspace/llm_sd.py", line 10, in llm_sd
pipe = DiffusionPipeline.from_pretrained(
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1147, in from_pretrained
pipeline_class = _get_pipeline_class(
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 380, in _get_pipeline_class
return get_class_from_dynamic_module(
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/utils/dynamic_modules_utils.py", line 452, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/utils/dynamic_modules_utils.py", line 164, in get_class_in_module
module = importlib.import_module(module_path)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/wrusr/.cache/huggingface/modules/diffusers_modules/git/llm_grounded_diffusion.py", line 32, in <module>
from diffusers.loaders import (
ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders' (/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/loaders/__init__.py)
```
### System Info
torch==2.0.1
torchvision==0.15.2
torchaudio==2.0.2
accelerate==0.21.0
transformers==4.39.3
diffusers==0.27.2
peft==0.10.0
numpy==1.25.2
python3.10
### Who can help?
@yiyixuxu @asomoza | https://github.com/huggingface/diffusers/issues/9055 | closed | [
"bug"
] | 2024-08-02T07:58:16Z | 2024-08-02T09:32:12Z | 2 | MehmetcanTozlu |
huggingface/optimum | 1,980 | Issue converting moss-moon-003-sft-int4 model to ONNX format | ### System Info
```shell
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
optimum-cli export onnx --task text-generation -m"/HDD/cz/tools/moss/" --trust-remote-code "HDD/cz/moss_onnx/"
Unfortunately, I'm facing the following error:
Trying to export a moss model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`.
As I am relatively new to this process, I'm unsure about the necessity and usage of custom ONNX configuration. Could you please provide some guidance on how to address this issue? Any assistance or insights would be greatly appreciated.
Thank you for your attention to this matter.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
https://huggingface.co/fnlp/moss-moon-003-sft-int4/tree/main
### Expected behavior
Convert the model to onnx format | https://github.com/huggingface/optimum/issues/1980 | open | [
"bug",
"onnx"
] | 2024-08-02T01:18:46Z | 2024-10-08T15:51:12Z | 0 | ZhiChengWHU |
huggingface/transformers | 32,376 | AutoModel how to modify config? | ```
config = AutoConfig.from_pretrained(
**self.params, trust_remote_code=True
)
config.vision_config.use_flash_attn = False
print(config.vision_config)
self.model = AutoModel.from_pretrained(
**self.params, trust_remote_code=True, config=config
).eval()
```
I need to forcibly set `use_flash_attn` to False when loading a model from pretrained, but it looks like the config setting didn't have any effect.
Why is that, and how can I do it? | https://github.com/huggingface/transformers/issues/32376 | closed | [] | 2024-08-01T12:40:44Z | 2024-08-02T02:30:22Z | null | lucasjinreal |
huggingface/diffusers | 9,039 | how to load_lora_weights in FlaxStableDiffusionPipeline | ### Describe the bug
How can I load a LoRA in FlaxStableDiffusionPipeline? There is no load_lora_weights in FlaxStableDiffusionPipeline.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9039 | closed | [
"bug",
"stale"
] | 2024-08-01T11:23:52Z | 2024-10-15T03:23:54Z | null | ghost |
huggingface/diffusers | 9,038 | how to use prompt weight in FlaxStableDiffusionPipeline | ### Describe the bug
I can see there is a prompt_embeds argument in StableDiffusionPipeline to support prompt weighting, but how can I do that in FlaxStableDiffusionPipeline? There is no prompt_embeds in FlaxStableDiffusionPipeline.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9038 | closed | [
"bug",
"stale"
] | 2024-08-01T10:44:37Z | 2024-10-14T18:25:55Z | null | ghost |
huggingface/diffusers | 9,032 | how to get the minimum working example of FlaxStableDiffusionPipeline in google colab with tpu runtime | ### Describe the bug
I try the code in google colab with tpu runtime
```
! python3 -m pip install -U diffusers[flax]
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMix', safe_serialization=False)
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True, safety_checker=None)
```
I always get "Your session crashed for an unknown reason." I want a minimum working example in Google Colab with the TPU runtime.
### Reproduction
N/A
### Logs
_No response_
### System Info
google colab with tpu runtime
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9032 | open | [
"bug",
"stale"
] | 2024-08-01T03:58:34Z | 2024-11-04T15:04:13Z | null | ghost |
huggingface/diffusers | 9,031 | how to disable safety_checker in FlaxStableDiffusionPipeline | ### Describe the bug
```
! python3 -m pip install -U tensorflow-cpu
import diffusers, os
pipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')
pipeline.save_pretrained('chilloutMix', safe_serialization=False)
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True)
```
I always complains
```
Pipeline <class 'diffusers.pipelines.stable_diffusion.pipeline_flax_stable_diffusion.FlaxStableDiffusionPipeline'> expected {'text_encoder', 'unet', 'scheduler', 'safety_checker', 'feature_extractor', 'vae', 'tokenizer'}, but only {'text_encoder', 'unet', 'scheduler', 'feature_extractor', 'vae', 'tokenizer'} were passed.
```
I want to know how to disable the safety_checker in FlaxStableDiffusionPipeline.
I tried:
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True, safety_checker=None)
but it is not working.
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9031 | open | [
"bug",
"stale"
] | 2024-08-01T03:48:27Z | 2024-10-13T15:03:54Z | null | ghost |
huggingface/llm.nvim | 106 | How to use openai api? | I read the code, and it seems to support the real OpenAI API. But when I set it up, something is wrong.
I just want to make sure: does this support the OpenAI API? I mean the real OpenAI API. | https://github.com/huggingface/llm.nvim/issues/106 | closed | [] | 2024-07-31T23:51:42Z | 2024-10-18T13:49:11Z | null | 4t8dd |
huggingface/diffusers | 9,025 | how to use FlaxStableDiffusionPipeline with from_single_file in kaggle tpu vm | ### Describe the bug
I have a single safetensors file and it works with diffusers.StableDiffusionPipeline.from_single_file.
Now I want to use FlaxStableDiffusionPipeline, but there is no .from_single_file member function in FlaxStableDiffusionPipeline,
so I need to do:
```
pipeline = diffusers.StableDiffusionPipeline.from_single_file()
pipeline.save_pretrained('current')
pipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./current')
```
Now I get [Error: no file named diffusion_flax_model.msgpack or diffusion_pytorch_model.bin found in directory ./current/vae]. There is just diffusion_pytorch_model.safetensors. What should I do to get diffusion_pytorch_model.bin from diffusion_pytorch_model.safetensors?
### Reproduction
N/A
### Logs
_No response_
### System Info
kaggle tpu vm
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/9025 | closed | [
"bug"
] | 2024-07-31T10:44:48Z | 2024-08-01T03:59:51Z | null | ghost |
huggingface/transformers.js | 873 | Absolute speaker diarization? | ### Question
I've just managed to integrate the new speaker diarization feature into my project. Very cool stuff. My goal is to let people record meetings, summarize them, and then also list per-speaker tasks. This seems to be a popular feature.
One thing I'm running into is that I don't feed Whisper a single long audio file. Instead I use VAD to feed it small chunks of live audio whenever someone speaks.
However, as far as I can tell the speaker diarization only works "relatively", detecting speakers within a single audio file.
Is there a way to let it detect and 'sort' the correct speaker over multiple audio files? Perhaps it could remember the 'audio fingerprints' of the speakers somehow?
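This isn't built into the diarization output, but one common approach (sketched here in Python for clarity; the speaker-embedding model that produces the vectors is assumed, not shown) is to keep one reference embedding per known speaker and match each new chunk's speakers by cosine similarity:

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def assign_speaker(embedding, known, threshold=0.7):
    """Return an existing speaker id if similar enough, else register a new one."""
    best_id, best_sim = None, -1.0
    for speaker_id, ref in known.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = speaker_id, sim
    if best_sim >= threshold:
        return best_id
    new_id = f"SPEAKER_{len(known)}"
    known[new_id] = embedding
    return new_id
```

The `0.7` threshold and the `SPEAKER_n` naming are illustrative; with this scheme the per-chunk "relative" labels can be remapped to stable identities across your VAD-sliced audio files.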

| https://github.com/huggingface/transformers.js/issues/873 | closed | [
"question"
] | 2024-07-30T15:09:23Z | 2024-08-12T12:12:07Z | null | flatsiedatsie |
huggingface/transformers.js | 872 | Please provide extensive examples of how to use langchain... | Here's an example script I'm using, which I believes leverages the ```recursivecharactertextsplitter``` from Langchain. I'd love to replicate my vector db program to the extent I'm able using javascript within a browser but need more examples/help...
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>PDF Text Extraction with Overlapping Chunks</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/pdf.js/2.10.377/pdf.min.js"></script>
<style>
.chunk-content {
word-wrap: break-word;
white-space: pre-wrap;
width: 100ch;
}
</style>
</head>
<body>
<h1>Extract Text from PDF</h1>
<input type="file" id="fileInput" accept="application/pdf" />
<button onclick="extractText()">Extract Text</button>
<div id="output"></div>
<script type="module">
// Define the RecursiveCharacterTextSplitter class
class RecursiveCharacterTextSplitter {
constructor({ chunkSize = 600, chunkOverlap = 200, keepSeparator = false }) {
this.chunkSize = chunkSize;
this.chunkOverlap = chunkOverlap;
this.keepSeparator = keepSeparator;
}
async splitText(text) {
const separators = ['\n\n', '\n', ' ', ''];
let chunks = [text];
for (const separator of separators) {
chunks = this._splitChunks(chunks, separator);
if (chunks.every(chunk => chunk.length <= this.chunkSize)) {
break;
}
}
return chunks;
}
_splitChunks(chunks, separator) {
let newChunks = [];
for (let chunk of chunks) {
if (chunk.length <= this.chunkSize) {
newChunks.push(chunk);
} else {
const parts = chunk.split(separator);
let tempChunk = '';
for (let part of parts) {
if (tempChunk.length + part.length + separator.length > this.chunkSize) {
newChunks.push(tempChunk);
tempChunk = part + (this.keepSeparator ? separator : '');
} else {
tempChunk += part + separator;
}
}
if (tempChunk) {
newChunks.push(tempChunk);
}
}
}
return newChunks;
}
}
// Function to extract text from PDF
async function extractText() {
const fileInput = document.getElementById('fileInput');
const output = document.getElementById('output');
if (!fileInput.files.length) {
alert('Please select a PDF file.');
return;
}
const file = fileInput.files[0];
const fileReader = new FileReader();
fileReader.onload = async function () {
const typedarray = new Uint8Array(this.result);
const loadingTask = pdfjsLib.getDocument(typedarray);
const pdf = await loadingTask.promise;
let text = '';
for (let i = 1; i <= pdf.numPages; i++) {
const page = await pdf.getPage(i);
const content = await page.getTextContent();
const strings = content.items.map(item => item.str);
text += strings.join(' ') + '\n';
}
displayOverlappingChunks(text);
};
fileReader.readAsArrayBuffer(file);
}
// Function to display text chunks
async function displayOverlappingChunks(text) {
const output = document.getElementById('output');
output.innerHTML = ''; // Clear previous content
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 600,
chunkOverlap: 200,
keepSeparator: true
});
const chunks = await splitter.splitText(text);
// Display total number of chunks
const totalChunksElement = document.createElement('h2');
totalChunksElement.textContent = `Total Chunks: ${chunks.length}`;
output.appendChild(totalChunksElement);
chunks.forEach((chunk, index) => {
const chunkElement = document.createElement('div');
chunkElement.innerHTML = `<h3>Chunk ${index + 1}</h3><pre class="chunk-content">${chun | https://github.com/huggingface/transformers.js/issues/872 | closed | [] | 2024-07-30T02:39:43Z | 2024-08-26T00:47:12Z | null | BBC-Esq |
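For what it's worth, the recursive splitting implemented by the `RecursiveCharacterTextSplitter` class in the HTML above reduces to a small loop; here is a hedged Python restatement of the same idea (overlap handling is omitted, as in the original snippet, and the guard against pushing empty chunks is a small addition):

```python
def split_text(text, chunk_size=600, separators=("\n\n", "\n", " ", "")):
    """Recursively split text: coarse separators first, finer ones as needed."""
    chunks = [text]
    for sep in separators:
        new_chunks = []
        for chunk in chunks:
            if len(chunk) <= chunk_size:
                new_chunks.append(chunk)
                continue
            # JS `split('')` yields characters; Python needs list() for that case.
            parts = list(chunk) if sep == "" else chunk.split(sep)
            temp = ""
            for part in parts:
                if temp and len(temp) + len(part) + len(sep) > chunk_size:
                    new_chunks.append(temp)
                    temp = part + sep
                else:
                    temp += part + sep
            if temp:
                new_chunks.append(temp)
        chunks = new_chunks
        if all(len(c) <= chunk_size for c in chunks):
            break
    return chunks
```

Note that, like the JS version, chunks can slightly exceed `chunk_size` on intermediate passes because the trailing separator is re-appended; the final `""` separator pass bounds them.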
huggingface/diffusers | 9,009 | UNET slower by a factor of batch_size | ### Describe the bug
I was expecting to get faster inference by batching images together. Instead, I realized that when I batch 6 images together, the UNet is 5 times slower for pipeline_controlnet_img2img.py...
Is this possible or normal? Am I missing anything? Thanks for your help.
### Reproduction
Image dim 1024.
I measure the time of this operation:
```python
noise_pred = self.unet(
latent_model_input,
t,
encoder_hidden_states=prompt_embeds,
cross_attention_kwargs=self.cross_attention_kwargs,
down_block_additional_residuals=down_block_res_samples,
mid_block_additional_residual=mid_block_res_sample,
added_cond_kwargs=added_cond_kwargs,
return_dict=False,
)[0]
```
for two cases:
1/ batch size 1: here is the size of inputs
latent_model_input torch.Size([2, 4, 64, 64]) cuda:0
prompt_embeds torch.Size([2, 77, 768]) cuda:0
down_block_res_samples
torch.Size([2, 320, 64, 64]) cuda:0
torch.Size([2, 320, 64, 64]) cuda:0
torch.Size([2, 320, 64, 64]) cuda:0
torch.Size([2, 320, 32, 32]) cuda:0
torch.Size([2, 640, 32, 32]) cuda:0
torch.Size([2, 640, 32, 32]) cuda:0
torch.Size([2, 640, 16, 16]) cuda:0
torch.Size([2, 1280, 16, 16]) cuda:0
torch.Size([2, 1280, 16, 16]) cuda:0
torch.Size([2, 1280, 8, 8]) cuda:0
torch.Size([2, 1280, 8, 8]) cuda:0
torch.Size([2, 1280, 8, 8]) cuda:0
mid_block_res_sample torch.Size([2, 1280, 8, 8]) cuda:0
2/ for batch 6:
latent_model_input torch.Size([12, 4, 64, 64]) cuda:0
prompt_embeds torch.Size([12, 77, 768]) cuda:0
down_block_res_samples
torch.Size([12, 320, 64, 64]) cuda:0
torch.Size([12, 320, 64, 64]) cuda:0
torch.Size([12, 320, 64, 64]) cuda:0
torch.Size([12, 320, 32, 32]) cuda:0
torch.Size([12, 640, 32, 32]) cuda:0
torch.Size([12, 640, 32, 32]) cuda:0
torch.Size([12, 640, 16, 16]) cuda:0
torch.Size([12, 1280, 16, 16]) cuda:0
torch.Size([12, 1280, 16, 16]) cuda:0
torch.Size([12, 1280, 8, 8]) cuda:0
torch.Size([12, 1280, 8, 8]) cuda:0
torch.Size([12, 1280, 8, 8]) cuda:0
mid_block_res_sample torch.Size([12, 1280, 8, 8]) cuda:0
The UNET is on cuda:0 also, everything in torch.float16.
In case 1, the UNet inference time is 0.054 s.
In case 2, the UNet inference time is 0.2671 s.
So when I batch 6 images, it goes 5 times slower.
With or without ip_adapter, these times do not change.
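One thing worth double-checking before comparing these numbers (a hedged suggestion, not a diagnosis): CUDA kernels launch asynchronously, so wall-clock timing around `self.unet(...)` without a synchronize can attribute previously queued work to the larger batch. A small timing helper, assuming `torch` is installed:

```python
import time
import torch

def timed(fn, *args, **kwargs):
    """Time fn with device synchronization so pending async CUDA work is counted."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args, **kwargs)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.perf_counter() - start
```

For example, `noise_pred, dt = timed(self.unet, latent_model_input, t, ...)`. Also note that with classifier-free guidance a batch of 6 images is 12 UNet samples versus 2, so near-linear scaling is plausible once a single sample already saturates the GPU.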
### Logs
_No response_
### System Info
torch==2.4.0
diffusers==0.27.2
transformers==4.40.1
accelerate==0.29.3
CUDA Version: 12.4
GPU: A10, A40, same problem
### Who can help?
@DN6 @yiyixuxu @sayakpaul | https://github.com/huggingface/diffusers/issues/9009 | closed | [
"bug"
] | 2024-07-29T21:01:25Z | 2024-07-30T07:37:51Z | 2 | christopher5106 |
huggingface/transformers.js | 869 | PLEASE provide examples of how to use for vector/embeddings using non-"pipeline" syntax. | I'm accustomed (and most people use) non-"pipeline" syntax with ```transformers``` - e.g. ```AutoModelFromCausalLM``` and ```from_pretained``` and so on?
Also, is there a way to use the ```sentence-transformers``` library with ```transformers.js``` in a similar fashion. You'll notice at [this link](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) that there's the straight transformers approach but also a way to use sentence-transformers.
Lastly, can you please provide more examples of how to use ```webgpu``` specifically with vector/embedding models like ```bge-small```, ``allmpnet``` etc.? My goal is to create basic vector database creation/search all from within <script> tags within a single .html file. Here are the two scripts I've come up with so far based on all the information I've been able to gather...hence why I'm asking for more examples.
This example seems very promising, but again, I can't fine the source code for this to glean some examples: https://huggingface.co/spaces/Xenova/webgpu-embedding-benchmark
<details>
<summary>SCRIPT 1</summary>
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Transformers.js Embedding Example with CPU</title>
</head>
<body>
<h1>Compute Sentence Embeddings (CPU)</h1>
<button id="computeButton">Compute Embeddings</button>
<pre id="output"></pre>
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
async function initializePipeline() {
try {
const extractor = await pipeline('feature-extraction', 'Xenova/bge-large-en-v1.5', { device: 'cpu' });
return extractor;
} catch (error) {
console.error("Pipeline initialization error:", error);
throw new Error("Failed to initialize the pipeline.");
}
}
async function computeEmbeddings() {
const output = document.getElementById('output');
output.textContent = "Initializing pipeline...";
try {
console.log("Initializing pipeline...");
const extractor = await initializePipeline();
console.log("Pipeline initialized. Computing embeddings...");
output.textContent = "Pipeline initialized. Computing embeddings...";
const texts = ['Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log("Embeddings computed. Converting to list...");
const embeddingList = embeddings.tolist();
console.log("Embeddings converted. Displaying output...");
output.textContent = JSON.stringify(embeddingList, null, 2);
console.log("Output displayed successfully.");
} catch (error) {
console.error("An error occurred:", error);
console.error("Error stack:", error.stack);
output.textContent = "An error occurred: " + error.message + "\n\nStack: " + error.stack;
}
}
document.getElementById('computeButton').onclick = computeEmbeddings;
</script>
</body>
</html>
```
</details>
<details>
<summary> SCRIPT 2</summary>
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Transformers.js Retrieval Example</title>
</head>
<body>
<h1>Retrieve Relevant Passages</h1>
<button id="retrieveButton">Retrieve Passages</button>
<pre id="output"></pre>
<script type="module">
import { pipeline, cos_sim } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
async function retrievePassages() {
const output = document.getElementById('output');
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-large-en-v1.5');
// List of documents you want to embed
const texts = [
'Hello world.',
'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
'I love pandas so much!',
];
// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
// Prepend recommended query instruction for retrieval
const query_prefix = 'Represent this sentence for searching relevant passages: ';
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean', | https://github.com/huggingface/transformers.js/issues/869 | closed | [] | 2024-07-29T11:55:51Z | 2024-07-30T02:37:40Z | null | BBC-Esq |
huggingface/chat-ui | 1,377 | Use refresh tokens for OAuth | Currently we use long-lived sessions that get extended when the user performs an action. In order to better manage sessions, we could switch to an OAuth flow where we have a short lived session with an access token cookie and a refresh token that we can use to refresh the sessions, since HuggingFace now supports refresh tokens.
We would probably need to make this flow opt-in in the config, as I'm not sure every OAuth provider supports this?
relevant: https://github.com/huggingface/chat-ui/pull/1365#pullrequestreview-2201751954
cc @coyotte508 if you have any resources on how to implement this, I've never done it before :eyes: | https://github.com/huggingface/chat-ui/issues/1377 | open | [
"enhancement",
"back"
] | 2024-07-29T10:55:11Z | 2024-09-13T20:08:45Z | 4 | nsarrazin |
huggingface/datasets | 7,080 | Generating train split takes a long time | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | https://github.com/huggingface/datasets/issues/7080 | open | [] | 2024-07-29T01:42:43Z | 2024-10-02T15:31:22Z | 2 | alexanderswerdlow |
huggingface/chat-ui | 1,375 | Chat-UI is not following prompt - producing unknown completely unrelated text? Hacked? | Oogabooga text-generation-web-ui engine used for inference (prompts directly input into the oogabooga ui produce normal results but chat-ui is doing something weird as below), Mongodb setup
_**Prompt:**_ bake a cake
_**Assistant:**_
```
I'm trying to install Ubuntu on my laptop, but it's not detecting the language and keyboard layout.
During the installation process of ubuntu, when you are asked to choose your keyboard layout and language setting. It doesn't show me any options or choices. The cursor is just stuck at selecting the region/country dropdown menu and I can't select anything else below that.
```
_**Title for the chat:**_ Tax refund help
_JSON GET RESPONSE DATA:_
```
{"type":"data","nodes":[{"type":"data","data":[{"conversations":1,"settings":8,"models":14,"oldModels":22,"tools":23,"assistants":54,"user":-1,"assistant":7,"enableAssistants":9,"enableAssistantsRAG":9,"loginRequired":9,"loginEnabled":9,"guestMode":9},[2],{"id":3,"title":4,"model":5,"updatedAt":6,"assistantId":-1,"avatarHash":7},"66a564e564cb021a3f33274d","💰 Tax refund help","text-generation-webui",["Date","2024-07-27T21:21:52.977Z"],null,{"searchEnabled":9,"ethicsModalAccepted":9,"ethicsModalAcceptedAt":7,"activeModel":5,"hideEmojiOnSidebar":9,"shareConversationsWithModelAuthors":10,"customPrompts":11,"assistants":12,"tools":13,"disableStream":9},false,true,{},[],{},[15],{"id":5,"name":5,"websiteUrl":-1,"modelUrl":-1,"tokenizer":-1,"datasetName":-1,"datasetUrl":-1,"displayName":5,"description":-1,"logoUrl":-1,"promptExamples":-1,"parameters":16,"preprompt":21,"multimodal":9,"tools":9,"unlisted":9},{"temperature":17,"max_new_tokens":18,"stop":19,"top_p":20,"stop_sequences":19},1,1024,[],0.95,"",[],[24,29,33,37,43,50],{"name":25,"displayName":26,"description":27,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"websearch","Web Search","Use this tool to search web pages for answers that will help answer the user's query. 
Only use this tool if you need specific resources from the internet.",15000,{"name":30,"displayName":31,"description":32,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"image_generation","Image Generation","Use this tool to generate an image from a prompt.",{"name":34,"displayName":35,"description":36,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"fetch_url","URL Fetcher","A tool that can be used to fetch an URL and return the content directly.",{"name":38,"displayName":39,"description":40,"mimeTypes":41,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"image_editing","Image Editing","Use this tool to edit an image from a prompt.",[42],"image/*",{"name":44,"displayName":45,"description":46,"mimeTypes":47,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"document_parser","Document Parser","Use this tool to parse any document and get its content in markdown format.",[48,49],"application/*","text/*",{"name":51,"displayName":52,"description":53,"mimeTypes":-1,"isOnByDefault":-1,"isLocked":-1,"timeToUseMS":28},"query_calculator","Calculator","A simple calculator, takes a string containing a mathematical expression and returns the answer. 
Only supports +, -, *, ** (power) and /, as well as parenthesis ().",[]],"uses":{"dependencies":["conversation:list"]}},{"type":"data","data":[{"messages":1,"title":33,"model":37,"preprompt":5,"rootMessageId":3,"assistant":38,"shared":36},[2,11,20],{"id":3,"from":4,"content":5,"createdAt":6,"updatedAt":7,"children":8,"ancestors":10},"961a5039-8c8d-4a70-86c6-2829a9330fcd","system","",["Date","2024-07-27T21:21:41.651Z"],["Date","2024-07-27T21:21:41.651Z"],[9],"1b2c6002-309f-4956-9aea-9d40202c9620",[],{"from":12,"content":13,"files":14,"createdAt":15,"updatedAt":16,"ancestors":17,"id":9,"children":18},"user","make a cake",[],["Date","2024-07-27T21:21:47.219Z"],["Date","2024-07-27T21:21:47.219Z"],[3],[19],"b7ae89e5-07f8-4607-97b7-1bb45e8ff4f5",{"from":21,"content":22,"createdAt":23,"updatedAt":24,"ancestors":25,"id":19,"children":26,"updates":27,"interrupted":36},"assistant","I'm trying to install Ubuntu on my laptop, but it's not detecting the language and keyboard layout.\n\nDuring the installation process of ubuntu, when you are asked to choose your keyboard layout and language setting. It doesn't show me any options or choices. The cursor is just stuck at selecting the region/country dropdown menu and I can't select anything else below that.",["Date","2024-07-27T21:21:47.219Z"],["Date","2024-07-27T21:21:47.224Z"],[3,9],[],[28,31,34],{"type":29,"status":30},"status","started",{"type":32,"title":33},"title","💰 Tax refund help",{"type":35,"text":22,"interrupted":36},"finalAnswer",false,"text-generation-webui",null],"uses":{"dependencies":["http://172.16.111.10:5173/conversation/conversation"],"params":["id"]}}]}
```
_**Prompt 2:**_ make a cake
_**Assistant:**_
```
I am using python for this.
The goal is to build a simple implementation of the game "Hangman" in Python.
In Hangman, | https://github.com/huggingface/chat-ui/issues/1375 | open | [
"support"
] | 2024-07-28T00:49:56Z | 2025-01-30T18:45:59Z | 10 | cody151 |
huggingface/chat-ui | 1,374 | Help with .env.local for AWS as an endpoint for llama3 on huggingface cloud | there seems to be no configuration for .env.local that I can get to work to connect to a Llama3 inference endpoint hosted by HuggingFace cloud (and I can find no examples).
```
MONGODB_URL=mongodb://localhost:27017
HF_TOKEN=hf_*******
MODELS=`[
{
"name": "AWS meta-llama-3-8b-pdf",
"chatPromptTemplate": "<|system|>\n{{preprompt}}</s>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}</s>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}</s>\n{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 2048,
"stop": ["</s>"]
},
"endpoints": [
{
"url": "https://1212121212.us-east-1.aws.endpoints.huggingface.cloud"}
]
}
]`
```
This flavor seems to need a value for endpoint type, one of:
```Invalid discriminator value. Expected 'anthropic' | 'anthropic-vertex' | 'aws' | 'openai' | 'tgi' | 'llamacpp' | 'ollama' | 'vertex' | 'genai' | 'cloudflare' | 'cohere' | 'langserve'"```
but none of them work.
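For a TGI-backed Hugging Face Inference Endpoint, a hedged guess at the missing piece is the "type" discriminator on the endpoint object (field names here follow the chat-ui endpoint conventions; the URL and token values are illustrative):

```env
"endpoints": [
  {
    "type": "tgi",
    "url": "https://1212121212.us-east-1.aws.endpoints.huggingface.cloud",
    "authorization": "Bearer hf_*******"
  }
]
```

Whether the endpoint also needs the authorization header depends on whether the Inference Endpoint is public or protected.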
| https://github.com/huggingface/chat-ui/issues/1374 | open | [
"support"
] | 2024-07-27T23:27:11Z | 2024-07-30T05:28:48Z | 1 | thams |