| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 25,138 | How to return detected language using whisper with asr pipeline? | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi, @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello,
I'm trying to use the asr pipeline with whisper in order to detect an audio's language and transcribe it. I get the transcribed audio successfully, but I have not found a way to also return the detected language.
I searched the GitHub issues, and it seems this was added by [#21427](https://github.com/huggingface/transformers/pull/21427), but I don't know how to return the detected language. Here is my code:
```
from transformers import pipeline
import torch
speech_file = "input.mp3"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
whisper = pipeline("automatic-speech-recognition", max_new_tokens=448, model="openai/whisper-small", device=device)
whisper_result = whisper(speech_file)
print(whisper_result)
```
### Expected behavior
Be able to return detected language. | https://github.com/huggingface/transformers/issues/25138 | closed | [] | 2023-07-27T10:51:31Z | 2025-02-11T11:24:49Z | null | arso1er |
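For reference, PR #21427 (cited in the issue) exposed the detected language on the pipeline output when `return_language=True` is passed to the call, i.e. `whisper(speech_file, return_language=True)`. The output shape assumed below and the helper are a hedged sketch, not verified against a specific transformers version:

```python
def detected_languages(asr_output):
    # Pull per-chunk languages out of an ASR pipeline result.
    # Assumes the dict shape produced with return_language=True:
    # a "chunks" list whose items carry a "language" field.
    return [c["language"] for c in asr_output.get("chunks", []) if "language" in c]

# Hypothetical output shape, not a real transcription:
sample = {
    "text": " Bonjour tout le monde",
    "chunks": [{"text": " Bonjour tout le monde", "language": "french"}],
}
print(detected_languages(sample))  # ['french']
```

If the pipeline is created with `return_timestamps=True` as well, each chunk would carry both timing and language, so the same helper applies.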
huggingface/text-generation-inference | 703 | Is there an example of how to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3)? | ### System Info
0.9.3
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
NA
### Expected behavior
A command to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3)
After quantization, it should be possible to load the model with `text-generation-inference --quantize gptq` | https://github.com/huggingface/text-generation-inference/issues/703 | closed | [] | 2023-07-27T01:08:54Z | 2023-07-28T21:41:46Z | null | taoari |
huggingface/sentence-transformers | 2,262 | How to pass more than sentence pairs to InputExamples for fine-tuning? | I have more information about each data point, such as language and contextual data, that could potentially help with our task. The task is to generate sentence-similarity embeddings and labels.
For the time being, I was able to expand the InputExample code to include these extra features in the input.
```
Train_data = [["sentence1", "sentence2", "textcategory1", "label"], ...]
Train_examples = [InputExample(texts=[x[0], x[1], x[2]], label=x[3]) for x in Train_data]
```
The `textcategory1` then gets encoded as well, appended at the end of the input example in the form `sentence1[0];sentence2[0];textcategory1[0]`, separated by `;`.
1. How does this impact the overall input for a model since it doesnt just see a sentence pair but more?
2. Does the fine-tuning layer see the two sentences as pairs or it sees as a single input and a label?
3. Even though it works, if this is not the correct way, how do I include a sense of tokens in the fine-tuning? I.e., use textcategory1 as <TOKEN1> or as a feature without messing with the embedding. | https://github.com/huggingface/sentence-transformers/issues/2262 | open | [] | 2023-07-26T18:29:54Z | 2023-07-30T15:39:24Z | null | cyriltw |
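One common workaround for the question above is to move the extra feature into the text itself rather than passing it as a third "sentence". A minimal sketch (the `[CATEGORY]` marker convention is an assumption; for a dedicated embedding the marker would also need to be registered as a special token, e.g. via `tokenizer.add_tokens`):

```python
def with_category(sentence, category):
    # Prefix the sentence with a bracketed category marker so the
    # extra feature travels inside the text of each pair member
    # instead of being concatenated as a separate "sentence".
    return f"[{category.upper()}] {sentence}"

pair = (with_category("the cat sat", "news"),
        with_category("a cat was sitting", "news"))
print(pair[0])  # [NEWS] the cat sat
```

This keeps the model's input a genuine sentence pair, which answers question 2: the fine-tuning layer still sees two texts plus a label.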
huggingface/trl | 578 | How to load a trained reward model? Different (random) results each time the model is loaded. | I trained a reward model using QLoRA and now I want to load it. I followed the instructions from this example from peft:
https://github.com/huggingface/peft/blob/main/examples/sequence_classification/LoRA.ipynb
This leads me to the following code:
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
peft_model_id = "vincentmin/llama-2-7b-reward-oasst1"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSequenceClassification.from_pretrained(
config.base_model_name_or_path,
num_labels=1,
load_in_8bit=True,
torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_auth_token=True)
model.eval()
with torch.no_grad():
reward = model(**tokenizer("hello world", return_tensors='pt')).logits
reward
```
If I run this code twice in a row, including loading the model again, I get different results for `reward`. The model output should be deterministic. If I just calculate the reward with the same loaded model, the result is deterministic. Hence, I'm concluding that there are randomly initialised weights that are not correctly loaded with `PeftModel.from_pretrained`. If I try to test the model on the test data, I'm getting random (close to 50% accuracy) results, while the model reached accuracies of >70% during training.
I trained the model using an adaptation of https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py. The resulting configuration is here https://huggingface.co/vincentmin/llama-2-7b-reward-oasst1/blob/main/adapter_config.json.
How are we advised to push and load our finetuned reward models to get deterministic results? I think the community would benefit from a documented example as a companion to `reward_trainer.py`. | https://github.com/huggingface/trl/issues/578 | closed | [] | 2023-07-26T15:02:13Z | 2023-07-26T19:00:10Z | null | vincentmin |
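A quick way to confirm the "randomly initialised weights" hypothesis is to load the model twice and diff the two state dicts: any parameter that differs between loads (typically the sequence-classification head, e.g. a `score` module not covered by the adapter's `modules_to_save`) was initialised fresh rather than restored. A torch-free sketch of the diff logic, with hypothetical parameter names and flattened weights:

```python
def diff_state_dicts(sd_a, sd_b, tol=1e-8):
    # Names of parameters that differ between two loads of the
    # same checkpoint; such weights were randomly initialised
    # rather than restored from disk.
    return sorted(
        name
        for name in sd_a
        if any(abs(x - y) > tol for x, y in zip(sd_a[name], sd_b[name]))
    )

# Hypothetical flattened weights from two consecutive loads:
load_1 = {"score.weight": [0.12, -0.40], "lora_A.weight": [0.5, 0.5]}
load_2 = {"score.weight": [0.07, 0.33], "lora_A.weight": [0.5, 0.5]}
print(diff_state_dicts(load_1, load_2))  # ['score.weight']
```

In the real setting the dicts would come from `model.state_dict()` after each load; whichever names this flags are the ones that need saving alongside the adapter.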
huggingface/datasets | 6,078 | resume_download with streaming=True | ### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step my training stopped at.
But how can I resume download from step 1_000_000 without re-streaming all the first 1 million docs of the dataset?
`download_config=DownloadConfig(resume_download=True)` seems to not work with streaming=True.
### Steps to reproduce the bug
```
from datasets import load_dataset, DownloadConfig
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True, # optional
split="train",
download_config=DownloadConfig(resume_download=True)
)
# interupt the run and try to relaunch it => this restart from scratch
```
### Expected behavior
I would expect a parameter to start streaming from a given index in the dataset.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0 | https://github.com/huggingface/datasets/issues/6078 | closed | [] | 2023-07-26T14:08:22Z | 2023-07-28T11:05:03Z | 3 | NicolasMICAUX |
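Until native resumption exists, the usual workaround is to fast-forward the stream past the already-seen examples; `datasets.IterableDataset` exposes a `.skip(n)` method for this (it still re-reads the underlying files, but the training loop resumes at the right step). The same idea with only the standard library, where the range stands in for a streaming dataset:

```python
from itertools import islice

def resume_stream(example_iter, start_index):
    # Skip the first `start_index` examples of a streaming iterator.
    # Bytes are still re-read from the source, but the training loop
    # resumes at the saved step.
    return islice(example_iter, start_index, None)

stream = iter(range(10))  # stands in for a streaming dataset
resumed = list(resume_stream(stream, 7))
print(resumed)  # [7, 8, 9]
```

For true byte-level resumption the dataset backend would need to support seeking, which streaming mode currently does not guarantee.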
huggingface/diffusers | 4,281 | How to convert a trained LoRA bin file to A1111 safetensors format | ### Describe the bug
I found the script convert_lora_safetensor_to_diffusers.py, but it seems to convert safetensors to bin, not bin to safetensors. When I ran the script, I got this error:
```
Traceback (most recent call last):
  File "C:\Users\fut\Desktop\tinaniu\convert_lora_safetensor_to_diffusers.py", line 125, in <module>
    pipe = convert(base_model_path, checkpoint_path, lora_prefix_unet, lora_prefix_text_encoder, alpha)
  File "C:\Users\fut\Desktop\tinaniu\convert_lora_safetensor_to_diffusers.py", line 31, in convert
    state_dict = load_file(checkpoint_path)
  File "D:\anaconda3\lib\site-packages\safetensors\torch.py", line 259, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
SafetensorError: Error while deserializing header: HeaderTooLarge
```
### Reproduction
SafetensorError: Error while deserializing header: HeaderTooLarge
### Logs
_No response_
### System Info
diffusers==0.18.2
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/4281 | closed | [
"bug",
"stale"
] | 2023-07-26T08:16:48Z | 2023-09-04T15:03:46Z | null | futureflsl |
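"HeaderTooLarge" is what safetensors raises when the file is not actually a safetensors file: a pickle-based `.bin` has arbitrary leading bytes, so its first 8 bytes decode to an absurd header length. A stdlib sniffer that makes this failure mode concrete (the conversion the reporter actually wants would instead be along the lines of `safetensors.torch.save_file(torch.load("pytorch_lora_weights.bin"), "out.safetensors")`, treated here as a pointer rather than verified code):

```python
import json
import os
import struct
import tempfile

def looks_like_safetensors(path):
    # A safetensors file starts with an 8-byte little-endian header
    # length followed by a JSON header; anything else fails this check,
    # which is exactly what "HeaderTooLarge" signals.
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False
        (header_len,) = struct.unpack("<Q", prefix)
        if header_len > 100_000_000:  # absurd length => not a real header
            return False
        try:
            json.loads(f.read(header_len).decode("utf-8"))
            return True
        except (UnicodeDecodeError, ValueError):
            return False

# Demo: a tiny well-formed safetensors header vs. pickle-style bytes.
tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "good.safetensors")
header = b'{"__metadata__":{}}'
with open(good, "wb") as f:
    f.write(struct.pack("<Q", len(header)) + header)
bad = os.path.join(tmp, "model.bin")
with open(bad, "wb") as f:
    f.write(b"\x80\x02" * 8)  # leading bytes like a torch pickle .bin
print(looks_like_safetensors(good), looks_like_safetensors(bad))  # True False
```

So the script in the traceback failed simply because it was handed a `.bin` file where a `.safetensors` file was expected.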
huggingface/llm-vscode | 50 | The vsix doesn't work. How to fix it? | I downloaded the vsix from https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode&ssr=false#version-history, but when I installed it in VS Code it doesn't work. Could you fix this? | https://github.com/huggingface/llm-vscode/issues/50 | closed | [] | 2023-07-26T07:05:17Z | 2023-10-17T14:34:58Z | null | CuteBadEgg |
huggingface/transformers.js | 216 | [Question] Getting a lot of ERR 404s when running in browser. | When implementing code that accesses bart-large-mnli in the front-end part of my code, the browser console tells me every attempt to use the pipeline fails with an error 404. (at least that's what I think it's telling me)
So I am trying to use the bart-large-mnli to analyze a bunch of 'post' objects, and only display them if the text in the post relates to a selected 'interest'.
Here is my javascript code to do that (checkRelevance.js):
```
import { pipeline } from "@xenova/transformers";
export default async function checkTweet(text, interest) {
try {
console.log(
`checking tweet...\ntext:${text.substring(
0,
10
)}...\ninterest:${interest}`
);
let pipe = await pipeline(
"zero-shot-classification",
"Xenova/bart-large-mnli",
{ quantized: false }
);
// console.log("await out...");
let out = await pipe(text, interest);
console.log(out);
const relevant = out.scores[0] >= 0.5;
console.log(out.scores[0]);
return relevant;
} catch (error) {
console.log(error);
}
}
```
And here is how it is implemented in the front end Feed.jsx:
```
useEffect(() => {
setFilteredPosts(posts.map(post => {
checkTweet(post.text, selectedInterest).then(result => {
if (result) {
return post
}
}
)
}))
}, [selectedInterest]);
// ...
filteredPosts.map((post) => (
<Post
displayName={post.displayName}
userName={post.userName}
verified={post.verified}
text={post.text}
image={post.image}
avatar={post.avatar}
/>)
```
Now when I run checkRelevance.js on it's own with a small test, it accesses the api just fine, but when it's implemented in the browser I get this:
<img width="467" alt="Screen Shot 2023-07-25 at 5 40 40 PM" src="https://github.com/xenova/transformers.js/assets/77216995/6d693e09-d12d-4cfc-855d-7a764e0faca3">
and then this:
<img width="475" alt="Screen Shot 2023-07-25 at 5 41 06 PM" src="https://github.com/xenova/transformers.js/assets/77216995/50ad64c1-28b3-4469-8171-e652ecdc0a33">
I'm not asking you to debug all my code lol, just wondering if there's something extra that needs doing for running it in the browser. If you need to see more lmk. Thanks!
| https://github.com/huggingface/transformers.js/issues/216 | closed | [
"question"
] | 2023-07-26T00:42:20Z | 2023-08-20T23:43:04Z | null | eklavyaisabird |
huggingface/transformers.js | 215 | [Question] How to use a sharp buffer as input to the "image-classification" pipeline? | Hi,
I am looking to use a sharp buffer as input to the "image-classification" pipeline, but it seems that only a URL can be provided as input. I am using the model in a Node.js (backend) environment; can anyone provide a solution to this?
Thanks
| https://github.com/huggingface/transformers.js/issues/215 | closed | [
"question"
] | 2023-07-25T21:10:06Z | 2023-07-25T21:42:18Z | null | geminigeek |
huggingface/chat-ui | 368 | Ability to pass in request headers for model endpoints | Hello.
I am trying to add an AWS Sagemaker model endpoint to chat-ui and I am getting stuck on the authorization part because I can't pass in request headers to the endpoint. I am able to pass in the authorization string but then I get the following error:
```
Could not parse last message {"message":"Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=AWS4-HMAC-SHA256 Credential=<redacted>, Signature=<redacted>"}
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:196:32)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async saveMessage (/src/routes/conversation/[id]/+server.ts:107:26)
```
Is it possible to add the ability to pass in headers to the model endpoints in the `.env.local` file? | https://github.com/huggingface/chat-ui/issues/368 | closed | [] | 2023-07-25T20:12:28Z | 2023-08-18T15:26:41Z | 3 | lotif |
huggingface/autotrain-advanced | 161 | How to save every X steps on cli? | You could set --save_strategy steps, but how do you specify the number of steps so that the model is saved every X steps?
My command:
```
autotrain llm --train --project_name project --model ./llama/llama_models/7B-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 1 --trainer sft --save_strategy steps --save_total_limit 1
``` | https://github.com/huggingface/autotrain-advanced/issues/161 | closed | [] | 2023-07-25T16:10:22Z | 2023-12-18T15:29:08Z | null | astarostap |
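Under the hood autotrain wraps transformers' `TrainingArguments`, where `save_strategy="steps"` is paired with `save_steps`; whether this autotrain version exposes a matching `--save_steps` CLI flag would need checking, so treat the flag name as an assumption. The underlying pairing, as a config fragment:

```python
# Mirrors transformers.TrainingArguments semantics; values are examples.
training_args = {
    "save_strategy": "steps",  # checkpoint on a step interval, not per epoch
    "save_steps": 500,         # write a checkpoint every 500 optimizer steps
    "save_total_limit": 1,     # keep only the most recent checkpoint
}
```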
huggingface/setfit | 400 | From which number of training samples does it not make sense anymore to use SetFit? | I'm building a classifier that assigns news articles to one of 8 categories. I was wondering whether there is a rule of thumb that, above a certain number of training samples per class, it would make more sense to use a traditional transformer classifier such as roberta-large? Or will SetFit always be more accurate?
| https://github.com/huggingface/setfit/issues/400 | open | [
"question"
] | 2023-07-25T06:56:04Z | 2023-08-01T14:13:48Z | null | lbelpaire |
huggingface/diffusers | 4,234 | How to train instruct-pix2pix with controlnet and inference | Hi guys,
I want to train instruct-pix2pix using a controlnet condition. As you know, training scripts are currently available for [instruct-pix2pix](https://huggingface.co/docs/diffusers/training/instructpix2pix) and [control net](https://huggingface.co/docs/diffusers/training/controlnet) separately.
**Q1)** Have you plan about this problem for implementation?
**Q2)** How can I merge them and add controlnet into instruct-pix2pix?
**Q3)** Suppose this is done and I want to start training. In your opinion, if we use a pretrained controlnet, freeze that network, and train only the instruct-pix2pix model, is that a common way to do it? | https://github.com/huggingface/diffusers/issues/4234 | closed | [
"stale"
] | 2023-07-24T13:47:02Z | 2023-08-31T15:04:14Z | null | mzeynali |
huggingface/chat-ui | 366 | v0.4.0 Not on GitHub | The hosted version is already at v0.4.0. This is at least not reflected in the tags or releases here. Is there other non public code? | https://github.com/huggingface/chat-ui/issues/366 | closed | [] | 2023-07-24T11:35:38Z | 2023-07-24T13:19:30Z | 2 | claell |
huggingface/chat-ui | 364 | Facing Error 403 after deployment | Hi folks!
My Chat-UI setup, along with a custom LangChain model, works perfectly on localhost. I tried to deploy it on an Azure VM with Docker containers and I have been facing this issue, which might be due to MongoDB.

Any help is appreciated. Thank you | https://github.com/huggingface/chat-ui/issues/364 | closed | [
"back",
"support"
] | 2023-07-24T10:57:53Z | 2024-04-25T16:29:38Z | 13 | awsum0225 |
huggingface/chat-ui | 363 | When starting with build files, it becomes impossible to change the model. | When starting with pm2 following the Docker file's instructions, I encounter an issue where I cannot change the model. Specifically, after clicking on "Current Model," a popup to select the model appears, but even after selecting "Apply," no changes are observed. Upon inspecting the developer tools, I noticed a 403 Error for http://localhost:3000/settings. This problem occurs both when hosting the software on a Docker container and when deploying it directly.

Also, I have confirmed that this error does not occur when using `npm run dev` or `npm run preview`. Therefore, I suspect that this issue may be related to pm2. If someone has any hints or insights that could help resolve this problem, I would greatly appreciate comments.
My environment is as follows:
OS: Windows 10 + WSL 2 (Ubuntu 20.04)
Node Version: 18.15.0
Commit ID: 569bde33470b075bf1365af2cb03a1b31b875379
| https://github.com/huggingface/chat-ui/issues/363 | closed | [
"bug",
"support"
] | 2023-07-24T08:30:03Z | 2023-10-16T16:07:25Z | 4 | suzuki-shm |
huggingface/diffusers | 4,222 | How to train ldm on a low-resolution image dataset (128*128) | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| https://github.com/huggingface/diffusers/issues/4222 | closed | [
"stale"
] | 2023-07-24T03:14:20Z | 2023-08-31T15:04:25Z | null | crowningwang |
huggingface/text-generation-inference | 679 | How to load a model from a given path? | ### System Info
tgi version:0.9.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
I just want to use tgi to run the llama-7b model and measure throughput on an A100. The model files are preloaded at a given path. I followed the readme and got the following error.
**Is there any option to load a model from a path?** Thanks~
```shell
me@ubuntu20-02:~/zy$ docker run --gpus all --shm-size 1g -p 8080:80 -v ~/w/data:/data ghcr.io/huggingface/text-generation-inference:0.9.2 --model-id /shared/models/huggingface/llama-7B-hf/
2023-07-23T14:17:02.797888Z INFO text_generation_launcher: Args { model_id: "/shared/models/huggingface/LLM/llama-7B-hf/", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: "1401cbf60306", port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }
2023-07-23T14:17:02.798147Z INFO text_generation_launcher: Starting download process.
2023-07-23T14:17:08.906356Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 109, in download_weights
utils.weight_files(model_id, revision, extension)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/hub.py", line 96, in weight_files
filenames = weight_hub_files(model_id, revision, extension)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/hub.py", line 25, in weight_hub_files
info = api.model_info(model_id, revision=revision)
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/bigdata/shared/models/huggingface/LLM/llama-7B-hf/'. Use `repo_type` argument if needed.
Error: DownloadError
```
### Expected behavior
output the running log. | https://github.com/huggingface/text-generation-inference/issues/679 | closed | [] | 2023-07-23T06:35:16Z | 2023-07-24T01:34:10Z | null | zhaoyang-star |
huggingface/controlnet_aux | 67 | Please I want to know how to install | Hello, I am new to this and I want to know how to install this particular package. I have installed other packages, but this one I do not know how. Please help with this.
| https://github.com/huggingface/controlnet_aux/issues/67 | open | [] | 2023-07-22T18:57:33Z | 2023-07-26T01:03:21Z | null | sohaib19922 |
huggingface/diffusers | 4,210 | How to use "attention_mask" in "forward" function of "UNet2DConditionModel" defined in "diffusers/src/diffusers/models /unet_2d_condition.py"? | ### Describe the bug
How do I use the "attention_mask" in UNet2DConditionModel? What should the size of "attention_mask" be?
Also, can "attention_mask" not be used when enabling "enable_xformers_memory_efficient_attention" in "examples/text_to_image/train_text_to_image.py"?
```
  File "/usr/local/lib/python3.9/dist-packages/diffusers/models/unet_2d_blocks.py", line 970, in custom_forward
    return module(*inputs, return_dict=return_dict)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/diffusers/models/transformer_2d.py", line 291, in forward
    hidden_states = block(
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/diffusers/models/attention.py", line 154, in forward
    attn_output = self.attn1(
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/diffusers/models/attention_processor.py", line 321, in forward
    return self.processor(
  File "/usr/local/lib/python3.9/dist-packages/diffusers/models/attention_processor.py", line 1027, in __call__
    attention_mask = attention_mask.expand(-1, query_tokens, -1)
RuntimeError: expand(torch.cuda.HalfTensor{[80, 1, 6144, 6144]}, size=[-1, 6144, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
```
### Reproduction
None
### Logs
_No response_
### System Info
- `diffusers` version: 0.19.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Huggingface_hub version: 0.16.4
- Transformers version: 4.30.2
- Accelerate version: 0.21.0
- xFormers version: 0.0.20
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/4210 | closed | [
"bug",
"stale"
] | 2023-07-22T17:28:56Z | 2024-10-18T16:34:37Z | null | ZihaoW123 |
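The traceback above shows a 4D mask reaching the attention processor, which expects a 2D `(batch, key_length)` 0/1 mask that it can expand per head itself; treat that expected shape as an assumption to verify against your diffusers version. The 0/1-to-additive-bias convention the processor applies internally, sketched with plain lists:

```python
def to_additive_bias(mask_2d, neg=-10000.0):
    # Turn a (batch, seq_len) 0/1 keep-mask into additive attention
    # bias values: kept positions add 0, masked positions add a large
    # negative number so softmax drives them to ~0.
    return [[0.0 if keep else neg for keep in row] for row in mask_2d]

mask = [[1, 1, 0]]  # batch=1, seq_len=3: last key position padded out
print(to_additive_bias(mask))  # [[0.0, 0.0, -10000.0]]
```

So the likely fix for the error is to pass the mask as a 2D tensor rather than a pre-expanded `(batch*heads, 1, q, k)` tensor.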
huggingface/accelerate | 1,758 | How to use c10 backend for fault tolerance | Hi,
I found little to no documentation on how to use c10 backend for fault tolerance with accelerate. PyTorch seems to be having this:
https://pytorch.org/docs/stable/elastic/rendezvous.html
I am looking for fault tolerance in case of crash in few nodes, which also means adjusting batch size dynamically to account for nodes that are down.
Thanks in advance. | https://github.com/huggingface/accelerate/issues/1758 | closed | [] | 2023-07-22T08:26:33Z | 2023-08-29T15:06:00Z | null | geekyGoku |
huggingface/autotrain-advanced | 155 | How to do inference via autotrain-advanced? | I see an option to do inference autotrain llm --help.
1. Can you share command to do inference on say llama2 model ? How do you pass lora files to do inference?
2. Any option to do merge and unload while saving the model locally?
3. Any option for multi-gpu training with single node - specify local rank? | https://github.com/huggingface/autotrain-advanced/issues/155 | closed | [] | 2023-07-22T05:55:25Z | 2023-12-15T00:14:28Z | null | sujithjoseph |
huggingface/transformers.js | 206 | [Question] Output always equal to Input in text-generation | I tried a different types of input and always get the output equals the input... What I'm missing?
```
const answerer = await pipeline('text-generation', 'Xenova/LaMini-Cerebras-590M');
let zica = await answerer(`Based on this history:
Andrรฉ de Mattos Ferraz is an engineering manager in Rio de Janeiro, Brazil. He has worked in systems development in the oil sector, working in several areas of the oil/gas life cycle: Exploration, Reservoir, and Production. He also worked on data science projects for predicting failures of water injection pumps, forecasting water filter saturation (SRU), and analyzing vibrations.
What are Andrรฉ tech skills?`);
console.log(zica)
```

| https://github.com/huggingface/transformers.js/issues/206 | closed | [
"question"
] | 2023-07-21T21:18:02Z | 2023-07-22T02:21:05Z | null | AndreEneva |
huggingface/transformers.js | 205 | [Question] Is transformers.js expected to work with react native? | I've naively been trying to run the transformers js library via react native on android.
Note that onnxruntime-react-native explicitly supports react native, however the transformers.js package depends only on onnxruntime-web and onnruntime-node.
Importing the transformers.js works fine, however as I try to load a model, I receive the error `import.meta` is currently unsupported from `transformers.js`.
It would be super convenient to be able to use pipes directly without needing to interface without onnxruntine-react-native directly! If not supported yet, what would need to be done? | https://github.com/huggingface/transformers.js/issues/205 | closed | [
"question"
] | 2023-07-21T20:55:44Z | 2023-07-21T21:35:35Z | null | Wehzie |
huggingface/setfit | 398 | hyperparameters to control how to handle long documents | It's common that one might want to use setfit for classifying documents that are longer than max_token_len.
There are several strategies for handling long documents, and the efficacy of each is data dependent:
* Break the document up at max_token_length, possibly avoiding breaking word boundaries.
* Optionally using a sliding window.
* Keeping all the windows, or the first k-windows, or something fancier like finding the most "interesting" windows with respect to the overall corpus.
Then after embedding each window, different classification strategies are possible:
* maxpool then predict
* average then predict
* predict then average
It would be great if these could approaches could be hyperparameters for validation + test.
For train, it might be easiest to insist the training max_token_len is in bounds, alternately the above strategies could be used too.
Related:
https://github.com/UKPLab/sentence-transformers/issues/1673
https://github.com/UKPLab/sentence-transformers/issues/1333
https://github.com/UKPLab/sentence-transformers/issues/1166 | https://github.com/huggingface/setfit/issues/398 | open | [] | 2023-07-21T11:53:13Z | 2023-07-21T11:53:13Z | null | turian |
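The strategies listed above are straightforward to prototype outside the library; a stdlib sketch of sliding-window chunking plus two of the pooling options (token lists and embedding vectors are plain lists here, and all names are hypothetical):

```python
def window_chunks(tokens, max_len, stride):
    # Sliding windows of at most max_len tokens, overlapping by `stride`.
    if len(tokens) <= max_len:
        return [tokens]
    step = max_len - stride
    return [tokens[i:i + max_len] for i in range(0, len(tokens) - stride, step)]

def mean_pool(vectors):
    # Average the per-window embeddings ("average then predict").
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def max_pool(vectors):
    # Elementwise max over per-window embeddings ("maxpool then predict").
    return [max(col) for col in zip(*vectors)]

chunks = window_chunks(list(range(10)), max_len=4, stride=2)
print(chunks)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
print(mean_pool([[1.0, 3.0], [3.0, 5.0]]))  # [2.0, 4.0]
```

The "predict then average" variant would instead run the classifier per window and average the scores; which works best is, as the issue says, data dependent.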
huggingface/text-generation-inference | 672 | What are the optimal max batch size and max sequence length (max_total_tokens) for running llama 2 70b chat on 4 A100 80GB? | This is what I have in my current config:
validation_workers: 2, max_total_tokens: 4096, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20
What do you recommend I should use to get the most out of inference for this setup? | https://github.com/huggingface/text-generation-inference/issues/672 | closed | [] | 2023-07-21T11:17:49Z | 2023-07-21T12:45:31Z | null | yakotoka |
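A back-of-envelope way to pick `max_batch_total_tokens` is to divide the VRAM left after the weights by the KV-cache cost per token. The Llama-2-70B architecture numbers below (80 layers, 8 KV heads via grouped-query attention, head_dim 128, fp16) and the 40 GiB headroom are assumptions to adjust; recent TGI versions can also infer `max_batch_total_tokens` automatically from free memory:

```python
def kv_cache_bytes_per_token(n_layers, n_kv_heads, head_dim, dtype_bytes=2):
    # K and V caches in fp16: 2 tensors * layers * kv_heads * head_dim * bytes.
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

per_token = kv_cache_bytes_per_token(80, 8, 128)
print(per_token)  # 327680 bytes, i.e. 320 KiB of KV cache per token

budget_gib = 40  # hypothetical headroom left for KV cache after weights
max_batch_total_tokens = budget_gib * 1024**3 // per_token
print(max_batch_total_tokens)  # 131072
```

With `max_total_tokens: 4096`, that budget would admit roughly 32 concurrent full-length sequences before activations and fragmentation are accounted for.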
huggingface/datasets | 6,057 | Why is the speed difference of gen example so big? | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('text_content')
image_data = open(image_path, "rb").read()
yield idx, {
"text": text_content,
"image": {
"path": image_path,
"bytes": image_data,
},
"conditioning_image": {
"path": image_path,
"bytes": image_data,
},
}
```
Hello,
I use the above function to deal with my local dataset, but I am very surprised by how much the example-generation speed varies. When I start a training task, **it is sometimes 1000 examples/s and sometimes only 10 examples/s.**

I'm not saying that the speed is changing all the time. I mean, the reading speed differs between training runs, which forces me to restart training over and over until the example-generation speed is normal.
| https://github.com/huggingface/datasets/issues/6057 | closed | [] | 2023-07-21T03:34:49Z | 2023-10-04T18:06:16Z | 1 | pixeli99 |
huggingface/transformers.js | 203 | how to do embeddings? | I want to create an AI assistant for my personal website using Node.js. While I can easily create it using OpenAI embeddings, their API costs are prohibitively expensive. Therefore, I am looking for an alternative method and wondering how I can perform embeddings using a CSV file. Can you advise me on how to do this?
```
async function getEmbeddings(tokens) {
console.log("start getEmbeddings");
let response;
try {
console.log("initiating openai api call");
response = await openai.createEmbedding({
model: "text-embedding-ada-002",
input: tokens,
});
} catch (e) {
console.error("Error calling OpenAI API getEmbeddings:", e?.response?.data);
throw new Error("Error calling OpenAI API getEmbeddings");
}
return response.data.data;
}
``` | https://github.com/huggingface/transformers.js/issues/203 | closed | [
"question"
] | 2023-07-21T02:41:40Z | 2024-06-26T14:09:51Z | null | putuoka |
huggingface/chat-ui | 361 | Configuration for Llama 2 | I am trying to self-host Llama 2 with https://github.com/huggingface/text-generation-inference and https://github.com/huggingface/chat-ui . If I give chat-ui a configuration like this:
```
{
"name": "llama2-7b-chat",
"datasetName": "llama2-7b-chat",
"description": "A good alternative to ChatGPT",
"endpoints": [{"url": "http://127.0.0.1:8081/generate_stream"}],
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.8,
"top_p": 0.95,
"repetition_penalty": 1.8,
"top_k": 10,
"truncate": 1000,
"max_new_tokens": 1024
}
}
```
It does not return good responses like https://huggingface.co/chat does.

| https://github.com/huggingface/chat-ui/issues/361 | closed | [
"support",
"models"
] | 2023-07-20T14:04:29Z | 2023-08-22T13:54:46Z | 3 | aisensiy |
huggingface/text-generation-inference | 658 | How to use AutoGPTQ model in tgi |

command๏ผ
export GPTQ_BITS=4
export GPTQ_GROUPSIZE=128
text-generation-launcher --model-id Ziya-LLaMA-13B_4bit --disable-custom-kernels --port 6006 --revision gptq-4bit-128g-actorder_True --quantize gptq
result:
Traceback (most recent call last):
  File "/root/miniconda3/envs/text-generation-inference/bin/text-generation-server", line 8, in <module>
    sys.exit(app())
  File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/cli.py", line 78, in serve
    server.serve(
  File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/server.py", line 169, in serve
    asyncio.run(
  File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/server.py", line 136, in serve_inner
    model = get_model(
  File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/models/__init__.py", line 195, in get_model
    return CausalLM(
  File "/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/models/causal_lm.py", line 477, in __init__
    model = AutoModelForCausalLM.from_pretrained(
  File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 467, in from_pretrained
    return model_class.from_pretrained(
  File "/root/miniconda3/envs/text-generation-inference/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2387, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory Ziya-LLaMA-13B_4bit.
rank=0
2023-07-20T08:34:02.453608Z ERROR text_generation_launcher: Shard 0 failed to start
2023-07-20T08:34:02.453654Z INFO text_generation_launcher: Shutting down shards | https://github.com/huggingface/text-generation-inference/issues/658 | closed | [] | 2023-07-20T08:42:57Z | 2023-07-31T23:50:55Z | null | Minami-su |
huggingface/chat-ui | 358 | Broken encoding for Korean and possibly other languages | I was testing the llama2 and noticed there are some encoding errors (Ignore that the output is total nonsense):
<img width="1618" alt="image" src="https://github.com/huggingface/chat-ui/assets/15624271/61868780-efa0-4670-84d9-734410a05451">
I thought it could be because of weird mid-Unicode tokenization, but I also noticed this on a custom demo using the HuggingChat UI:
It renders correctly and, strangely enough, breaks and unbreaks randomly.
https://github.com/huggingface/chat-ui/assets/15624271/7b7e97cb-876d-47cc-b89d-aabebb9197cf
| https://github.com/huggingface/chat-ui/issues/358 | closed | [
"question",
"models"
] | 2023-07-20T05:00:03Z | 2023-09-11T09:34:12Z | null | cceyda |
huggingface/diffusers | 4,160 | How to use diffusers force zeros? | it seems that it only has effect if its used on instance of diffusers class before model is loaded,
but i only get instance when i call from_pretrained or from_single_file
| https://github.com/huggingface/diffusers/issues/4160 | closed | [
"stale",
"SD.Next"
] | 2023-07-19T22:36:38Z | 2023-09-01T13:09:28Z | null | patrickvonplaten |
huggingface/transformers.js | 200 | [Question] Translation models |
@xenova is there a model that does text translation with a lighter weight, i.e., a minimal size?
"question"
] | 2023-07-19T22:07:37Z | 2023-07-27T00:17:24Z | null | jedLahrim |
huggingface/dataset-viewer | 1,532 | provide one "partial" field per entry in aggregated responses | For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field and the response does not explicit that the "train" split is partial, while the "test" one is complete.
Every entry in `configs` and `splits` should also include its own `partial` field, to be able to show this information in the viewer (selects)
- currently:
<img width="1528" alt="Capture d’écran 2023-07-19 à 16 00 28" src="https://github.com/huggingface/datasets-server/assets/1676121/92d27982-0fa3-44f2-a73f-a0ae614da40c">
- ideally, something like:
<img width="1529" alt="Capture d’écran 2023-07-19 à 16 01 39" src="https://github.com/huggingface/datasets-server/assets/1676121/c638af93-30de-4ab7-8fdd-389202d41c88">
Endpoints where we want these extra fields:
- /info, dataset-level
- /size, dataset-level
- /size, config-level
| https://github.com/huggingface/dataset-viewer/issues/1532 | open | [
"question",
"feature request",
"P2"
] | 2023-07-19T20:01:58Z | 2024-05-16T09:36:20Z | null | severo |
huggingface/datasets | 6,053 | Change package name from "datasets" to something less generic | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors. | https://github.com/huggingface/datasets/issues/6053 | closed | [
"enhancement"
] | 2023-07-19T19:53:28Z | 2024-11-20T21:22:36Z | 2 | jack-jjm |
huggingface/trl | 542 | Supervised Finetuning - How to mask loss for prompts | How can I mask the loss in supervised fine-tuning for prompts similar to how it is done in the LLAMA-2 paper?
Specifically, I have a dataset of prompts and ideal answers. When fine-tuning my model with a `SFTTrainer` using a `ConstantLengthDataset` (similar to the StackExchange example), how can I ensure that prompts are not considered in the loss? | https://github.com/huggingface/trl/issues/542 | closed | [] | 2023-07-19T14:55:17Z | 2023-08-16T15:02:50Z | null | jvhoffbauer |
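A common answer to the question above is to build `labels` as a copy of `input_ids` with the prompt positions set to -100, which PyTorch's cross-entropy loss ignores; trl also ships a `DataCollatorForCompletionOnlyLM` that automates this for `SFTTrainer`. A minimal stdlib sketch with made-up token ids:

```python
IGNORE_INDEX = -100  # positions with this label are skipped by PyTorch's cross-entropy

def build_labels(prompt_ids, answer_ids):
    """Concatenate prompt + answer, masking the prompt out of the loss."""
    input_ids = list(prompt_ids) + list(answer_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return input_ids, labels

# Hypothetical token ids for one prompt/answer pair.
input_ids, labels = build_labels([101, 7592, 102], [2023, 2003, 102])
print(input_ids)  # [101, 7592, 102, 2023, 2003, 102]
print(labels)     # [-100, -100, -100, 2023, 2003, 102]
```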
huggingface/chat-ui | 351 | Starchat-beta doesn't stop generating text properly | Hi, I am deploying starchat-beta and chat-ui locally, it is strange that I found the chat will generate some useful text in the beginning, then it will not stop, then generates some unrelated text, like below

Is it related with .env.local configuration?

| https://github.com/huggingface/chat-ui/issues/351 | closed | [
"support",
"models"
] | 2023-07-19T14:32:59Z | 2023-07-20T06:29:09Z | 3 | XiaPZ |
huggingface/trl | 534 | How to load a trained model to continue training? | Dear TRL team,
I face a challenge in that I can't finish the training in one go, so I need to load a model that is trained half-way and continue the training process. Could you please guide me on how to load the half-way trained model and continue training?
Best | https://github.com/huggingface/trl/issues/534 | closed | [] | 2023-07-19T04:36:15Z | 2023-08-26T15:04:58Z | null | zyzisastudyreallyhardguy |
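With the Hugging Face `Trainer` that trl's trainers build on, resuming is typically `trainer.train(resume_from_checkpoint=True)` with `output_dir` pointing at the previous run's checkpoints. The underlying idea (persist the training state, reload it, continue from the recorded step) can be sketched with the stdlib alone; the checkpoint file name, state fields, and simulated interruption below are all illustrative:

```python
import json
import os
import tempfile

def train(total_steps, ckpt_path):
    state = {"step": 0}
    if os.path.exists(ckpt_path):            # resume if a checkpoint exists
        with open(ckpt_path) as f:
            state = json.load(f)
    for step in range(state["step"], total_steps):
        state["step"] = step + 1             # ...real optimizer work would go here...
        with open(ckpt_path, "w") as f:      # checkpoint after every step
            json.dump(state, f)
        if state["step"] == total_steps // 2 and not state.get("resumed"):
            state["resumed"] = True
            return state                     # simulate an interruption half-way
    return state

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
first = train(10, ckpt)   # stops "half-way" at step 5
second = train(10, ckpt)  # picks up from step 5 and finishes
print(first["step"], second["step"])  # 5 10
```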
huggingface/diffusers | 4,150 | How to train text-to-image model based on SDXL? | Can I use the train_text_to_image.py code directly? | https://github.com/huggingface/diffusers/issues/4150 | closed | [] | 2023-07-19T02:59:00Z | 2023-07-21T15:23:30Z | null | EnzoWuu |
huggingface/text-generation-inference | 636 | How to configure vllm gpu_memory_utilization? | Hi team, I am trying to use the codegen2.5 7B model on TGI with an A100 40GB, and it gives me an out-of-memory error because of vllm. I wonder if there is any way I can configure gpu_memory_utilization in the code so that vllm does not reserve too much memory beforehand | https://github.com/huggingface/text-generation-inference/issues/636 | closed | [] | 2023-07-18T20:19:28Z | 2024-07-04T07:32:01Z | null | zch-cc |
huggingface/optimum | 1,202 | What is the process for contributing a new backend? | ### Feature request
In terms of contributing a new backend/optimizer to Optimum as an optional extension, what is the process?
I have been working on an Optimum integration with [DeepSparse](https://github.com/neuralmagic/deepsparse), Neural Magic's inference runtime for sparse execution on CPUs. If it is an open-source contribution that we've already started and will continue to support, is it mostly just a function of creating a `huggingface/optimum-deepsparse` repo to push up the state?
### Motivation
We already have a project hosted by Neural Magic: https://github.com/neuralmagic/optimum-deepsparse
It is already functional for a few simple tasks (image/text/audio/token classification, question answering, masked lm) and is generally going for usability-parity with ORTModel since DeepSparse also takes in ONNX models directly for compilation.
DeepSparse supports x86 and ARM CPUs, and is able to see performance benefits from unstructured sparsity on all platforms.
Having optimum-deepsparse be officially installable through the Optimum base as an extension i.e. `pip install optimum[deepsparse]` would be important for writing clean flows for people to sparsify their models and get the maximal inference performance out of their CPUs.
### Your contribution
https://github.com/neuralmagic/optimum-deepsparse
I'm happy to submit a PR to add it to Optimum's setup.py, write documentation to detail how to use it, and anything else required to make an official request. Thank you! | https://github.com/huggingface/optimum/issues/1202 | closed | [
"question",
"Stale"
] | 2023-07-18T18:07:14Z | 2025-05-13T02:14:09Z | null | mgoin |
huggingface/accelerate | 1,743 | what is the possible reason for accelerate running on cuda 12.2 8xA100 with error accelerate multiprocessing.api:failed (exitcode: -9) | ### System Info
```Shell
ubuntu 22.04
gpu A100 80G
cuda version 12.2
accelerate version 0.21.0
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
running the demo script from diffusers [train_text_to_image.py](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) for 100k iterations with batch size 8 on each GPU, 8 A100 GPUs in total
### Expected behavior
successful training without any problem | https://github.com/huggingface/accelerate/issues/1743 | closed | [] | 2023-07-18T13:33:35Z | 2023-08-15T09:18:05Z | null | garychan22 |
huggingface/datasets | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
when i run the code above, i got the error as below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
All my data is on the local machine, so why does it need to connect to the internet? How can I fix it? My machine cannot connect to the internet.
### Steps to reproduce the bug
1
### Expected behavior
no error when i use the load_dataset func
### Environment info
python=3.8.15 | https://github.com/huggingface/datasets/issues/6048 | closed | [] | 2023-07-18T10:16:34Z | 2023-07-18T16:18:39Z | 1 | yangy1992 |
huggingface/safetensors | 299 | Any plan to support Nvidia GPUDirect Storage? | ### Feature request
Nvidia GPUDirect Storage offers better performance for loading models from NVMe disks or supported distributed storage. It does real `zero copy`.
### Motivation
It will get better performance with Nvidia GDS.
### Your contribution
Not sure. | https://github.com/huggingface/safetensors/issues/299 | closed | [
"Stale"
] | 2023-07-17T06:36:51Z | 2025-11-22T05:21:50Z | 9 | carmark |
huggingface/optimum | 1,191 | ONNX Generation - Support for Donut | ### Feature request
I have been trying to convert my custom Donut model to ONNX by using this specific command:
!python3 -m optimum.exporters.onnx --model={custom_model_id} --task=vision2seq-lm ./models/onnx --optimize O4 --atol 1e-2 --opset=13
The following exception occurs at the end of the process, by which I understand the vision-encoder-decoder is not supported yet. Are there any plans to integrate vision-encoder-decoder for optimum.exporters.onnx soon?
Error observed:
File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/utils.py", line 162, in check_optimization_supported_model
raise NotImplementedError(
NotImplementedError: ONNX Runtime doesn't support the graph optimization of vision-encoder-decoder yet. Only ['albert', 'bart', 'bert', 'big_bird', 'blenderbot', 'bloom', 'camembert', 'codegen', 'deberta', 'deberta-v2', 'distilbert', 'electra', 'gpt2', 'gpt_neo', 'gpt_neox', 'gptj', 'longt5', 'llama', 'marian', 'mbart', 'mt5', 'm2m_100', 'nystromformer', 'pegasus', 'roberta', 't5', 'vit', 'whisper', 'xlm-roberta'] are supported. If you want to support vision-encoder-decoder please propose a PR or open up an issue in ONNX Runtime: https://github.com/microsoft/onnxruntime.
### Motivation
Use optimum.exporters.onnx to convert custom Donut model to ONNX to improve inference performance.
### Your contribution
Still looking at the links and getting familiar with how to proceed with the change. I will be grateful if someone can point me to resources where I can get started. Thanks.
"feature-request",
"onnx"
] | 2023-07-16T13:38:38Z | 2024-10-15T16:14:33Z | 3 | ghost |
huggingface/transformers.js | 194 | [Question] Transformers.js bundle size | I'm building a small project that runs `transformers.js` in a `Worker` to do client side embedding.
I noticed that including `import { pipeline } from '@xenova/transformers';` immediately increases my bundle size to over **3MB**.

Created using [webpack-bundle-analyzer](https://www.npmjs.com/package/webpack-bundle-analyzer)
Optimizing for this is probably a large effort, but I was wondering if you have any ideas on how it could be done.
"question"
] | 2023-07-16T08:06:28Z | 2023-07-16T16:28:52Z | null | lizozom |
huggingface/trl | 520 | how to change the cache directory when using AutoModelForCausalLMWithValueHead.from_pretrained() | I have tried several methods, but it still downloads to my home directory | https://github.com/huggingface/trl/issues/520 | closed | [] | 2023-07-16T04:21:45Z | 2023-07-17T08:11:02Z | null | zyzisastudyreallyhardguy |
huggingface/peft | 711 | How to change the location of soft tokens in prompt tuning | ### Feature request
In fact, when doing prompt tuning, we do not always want to add the soft tokens at the front; they may need to be in the middle. So I think it is important to be able to change the location of the soft tokens.
### Motivation
In fact, when doing prompt tuning, we do not always want to add the soft tokens at the front; they may need to be in the middle. So I think it is important to be able to change the location of the soft tokens.
### Your contribution
no | https://github.com/huggingface/peft/issues/711 | closed | [] | 2023-07-15T13:57:52Z | 2024-04-09T06:39:55Z | null | XueTianci |
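PEFT's built-in prompt tuning prepends the virtual tokens, so placing them elsewhere currently means assembling the inputs (or `inputs_embeds`) yourself. The splice itself is simple; the sketch below works on token ids for clarity (with prompt tuning you would splice embedding vectors the same way), and all ids are made up:

```python
def splice_soft_tokens(input_ids, soft_ids, position):
    """Insert virtual-token ids at an arbitrary position in the sequence."""
    return input_ids[:position] + soft_ids + input_ids[position:]

SOFT = [-1, -2]  # placeholder ids standing in for learned soft-token embeddings
ids = [10, 11, 12, 13]
print(splice_soft_tokens(ids, SOFT, 0))  # [-1, -2, 10, 11, 12, 13]  (PEFT's default: front)
print(splice_soft_tokens(ids, SOFT, 2))  # [10, 11, -1, -2, 12, 13]  (middle)
```

Remember to splice the attention mask at the same position so it stays aligned with the sequence.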
huggingface/datasets | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | Hi, I use the code below to load a local file:
```
def _split_generators(self, dl_manager):
    # TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
    # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
    # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
    # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
    # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
    # urls = _URLS[self.config.name]
    data_dir = dl_manager.download_and_extract(_URLs)
    print(data_dir)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            # These kwargs will be passed to _generate_examples
            gen_kwargs={
                "filepath": os.path.join(data_dir["train"]),
                "split": "train",
            },
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION,
            # These kwargs will be passed to _generate_examples
            gen_kwargs={
                "filepath": os.path.join(data_dir["dev"]),
                "split": "dev",
            },
        ),
    ]
```
and this error occurred:
```
Traceback (most recent call last):
  File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module>
    dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')
  File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
    self._download_and_prepare(
  File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare
    if str(split_generator.split_info.name).lower() == "all":
AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
```
Could you help me? | https://github.com/huggingface/datasets/issues/6038 | closed | [] | 2023-07-15T07:58:08Z | 2023-07-24T11:54:15Z | 1 | BaiMeiyingxue |
huggingface/datasets | 6,033 | `map` function doesn't fully utilize `input_columns`. | ### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select columns.
It preserves existing columns.
The main cause is the `update` call on the `dict`-typed `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.
Even `function_args` selects `input_columns`, `update` preserves columns other than `input_columns`.
I think it should take a new dictionary with columns in `input_columns` like this:
```
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs)
# This is what I think correct.
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Let me know how to use `input_columns`.
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8 | https://github.com/huggingface/datasets/issues/6033 | closed | [] | 2023-07-14T08:49:28Z | 2023-07-14T09:16:04Z | 0 | kwonmha |
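The behaviour described above comes down to plain `dict` semantics: `dict.update` merges the mapped output into a copy of the full batch, so untouched columns survive. A minimal reproduction of the two variants, with a made-up batch:

```python
batch = {"a": [1], "b": [2], "c": [3], "d": [4]}
mapped = {"a": [10], "d": [40]}  # what a function over input_columns=["a", "d"] returns

# Current behaviour: copy the whole batch, then merge; "b" and "c" are preserved.
current = dict(batch)
current.update(mapped)

# Proposed behaviour: keep only what the function returns.
proposed = dict(mapped)

print(sorted(current))   # ['a', 'b', 'c', 'd']
print(sorted(proposed))  # ['a', 'd']
```

For dropping columns explicitly, `map` also accepts a `remove_columns` argument.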
huggingface/text-generation-inference | 614 | How to make it? How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192? | ### System Info
How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192?
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [X] My own modifications
### Reproduction
'max_new_tokens' from 1512 to either 4096 or 8192
### Expected behavior
'max_new_tokens' from 1512 to either 4096 or 8192 | https://github.com/huggingface/text-generation-inference/issues/614 | closed | [] | 2023-07-14T08:46:29Z | 2023-07-19T06:04:32Z | null | DiamondYuanqi |
huggingface/transformers.js | 193 | all-MiniLM-L6-v2 vector lengths | Hey, is there any way to programmatically set fix the vector embedding array lengths to a certain length? I was using https://huggingface.co/Xenova/all-MiniLM-L6-v2 with nodejs and every input I ran through the pipe gave a different length, and it would be nice to be able to keep it consistent.
| https://github.com/huggingface/transformers.js/issues/193 | closed | [
"question"
] | 2023-07-13T20:31:06Z | 2023-07-13T22:32:03Z | null | unkn-wn |
huggingface/chat-ui | 344 | 404 not found error when exporting data | https://github.com/huggingface/chat-ui/blob/1eff97d9fd47d8c486480d4d9a5208437c519cbb/src/routes/admin/export/%2Bserver.ts#L16
I am using the main branch and tried to export the dataset with the curl request given in the code, but the server returns 404 not found.
It's behind a reverse proxy with SSL. Do I need to call localhost, or should it be possible even from outside the network?
"question",
"back"
] | 2023-07-13T08:40:27Z | 2023-11-10T09:50:22Z | null | flozi00 |
huggingface/sentence-transformers | 2,254 | How to prepare label for the dataset that has two pairs of text, but not labels? | Hi,
Thank you for the great information; I have a question. My data has two columns of text: one is the description of a request, and the other is an answer to that request. I want to use ContrastiveLoss to pull matching request and answer pairs close together and push unrelated answers far apart, but I do not know how to provide the labels for my positive and negative pairs, because the dataset function accepts triples like this via InputExample:
(a1,b1,1) (a1,bi,0)
I appreciate your help. | https://github.com/huggingface/sentence-transformers/issues/2254 | open | [] | 2023-07-12T21:30:07Z | 2023-07-30T15:38:09Z | null | Yarmohamadshr |
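A common way to get labels from a two-column dataset is to treat each aligned (request, answer) pair as a positive (label 1) and sample answers from other rows as negatives (label 0). A stdlib sketch of that sampling, with invented example rows; with sentence-transformers you would then wrap each triple as `InputExample(texts=[req, ans], label=...)`:

```python
import random

requests = ["reset my password", "cancel my order"]
answers  = ["click 'forgot password' on the login page", "go to orders and press cancel"]

def build_triples(requests, answers, negatives_per_pair=1, seed=0):
    rng = random.Random(seed)
    triples = []
    for i, (req, ans) in enumerate(zip(requests, answers)):
        triples.append((req, ans, 1))                        # aligned pair -> positive
        others = [a for j, a in enumerate(answers) if j != i]
        for neg in rng.sample(others, negatives_per_pair):   # mismatched answers -> negatives
            triples.append((req, neg, 0))
    return triples

triples = build_triples(requests, answers)
for t in triples:
    print(t)
```

In practice you may want hard negatives (answers that are textually similar to the positive) rather than uniformly random ones, but the labeling scheme is the same.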
huggingface/optimum | 1,183 | Cannot convert owlvit-base-patch32 model to ONNX and run inference | ### System Info
```shell
Optimum version: 1.9.1
Python version: 3.11.3
OS: MacOS
```
### Who can help?
@mich
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using the CLI command
`optimum-cli export onnx --model google/owlvit-base-patch32 --task zero-shot-object-detection object_detection/owlvit_onnx`
I'm able to get a converted ONNX format. Then, when using the following code to perform inference with the converted model:
`checkpoint = "google/owlvit-base-patch32"`
`processor = AutoProcessor.from_pretrained(checkpoint)`
`image = skimage.data.astronaut()`
`image = Image.fromarray(np.uint8(image)).convert("RGB")`
`text_queries = ["human face", "rocket", "nasa badge", "star-spangled banner", "woman", "smile", "hair", 'human head', 'human eye']`
`np_inputs = processor(text=text_queries, images=image, return_tensors="np")`
`session = ort.InferenceSession("object_detection/owlvit_onnx/model.onnx")`
`out =session.run(['logits', 'pred_boxes', 'text_embeds', 'image_embeds'], np_inputs)`
I get the following error:
`RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/Reshape_3' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) gsl::narrow_cast(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{9,16}, requested shape:{2,4,16}`
Now it seems to be related to some input being wrong, but I cannot figure out what is wrong. The pre-processing step is the same as for the HF model, the only difference being instead of returning "pt" tensors I'm returning "np" so it can work with ONNX. Here are my input shapes:
input_ids: (9, 16)
attention_mask: (9, 16)
pixel_values: (1, 3, 768, 768)
Thanks in advance!
### Expected behavior
Inference to run successfully and outputs to be very similar to that of the original torch model. | https://github.com/huggingface/optimum/issues/1183 | closed | [
"bug"
] | 2023-07-12T13:20:12Z | 2024-07-27T14:27:58Z | 9 | Pedrohgv |
huggingface/chat-ui | 341 | SSL Wrong version number error | i have added this
"endpoints": [
{"url": "http://127.0.0.1:8080/generate_stream", "weight": 100}
],
in the model, but I am getting this error:
TypeError: fetch failed
    at fetch (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/undici/index.js:109:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async eval (/node_modules/@sveltejs/kit/src/runtime/server/fetch.js:32:10)
    at async POST (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/src/routes/conversation/[id]/+server.ts:91:16)
    at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
    at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
    at async Object.handle (/src/hooks.server.ts:66:20)
    at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
    at async file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22 {
  cause: [Error: C0770BE8547F0000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:355:
  ] {
    library: 'SSL routines',
    reason: 'wrong version number',
    code: 'ERR_SSL_WRONG_VERSION_NUMBER'
  }
}
Error: aborted
    at connResetException (node:internal/errors:717:14)
    at abortIncoming (node:_http_server:754:17)
    at socketOnClose (node:_http_server:748:3)
    at Socket.emit (node:events:525:35)
    at TCP.<anonymous> (node:net:322:12) {
  code: 'ECONNRESET'
} | https://github.com/huggingface/chat-ui/issues/341 | closed | [
"support"
] | 2023-07-12T04:40:58Z | 2023-09-18T14:00:27Z | 4 | swikrit21 |
huggingface/diffusers | 4,054 | [SD-XL] How to apply invisible-watermark for latent output | ### Describe the bug
As a part of the license with SAI, we need to ensure the invisible watermark is applied across all images output by these models, including the Img2Img pipeline.
### Reproduction
```py
# if xformers or torch_2_0 is used attention block does not need
# to be in float32 which can save lots of memory
if use_torch_2_0_or_xformers:
self.vae.post_quant_conv.to(latents.dtype)
self.vae.decoder.conv_in.to(latents.dtype)
self.vae.decoder.mid_block.to(latents.dtype)
else:
latents = latents.float()
if not output_type == "latent":
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
else:
image = latents
return StableDiffusionXLPipelineOutput(images=image)
```
This is the relevant portion of the img2img pipeline code. In the XL pipeline, the latent output mode does not have the watermark applied, so it is easily bypassed.
### Logs
```shell
N/A
```
### System Info
Git main branch.
### Who can help?
cc: @sayakpaul | https://github.com/huggingface/diffusers/issues/4054 | closed | [
"bug"
] | 2023-07-12T03:58:04Z | 2023-07-12T10:21:29Z | null | bghira |
huggingface/transformers.js | 192 | Table Question Answering Support? | Hi - Interested in support for table question answering models. It's noted that these aren't supported, but is there any reason they wouldn't work if leveraged?
| https://github.com/huggingface/transformers.js/issues/192 | open | [
"question"
] | 2023-07-12T01:12:07Z | 2023-07-13T16:18:19Z | null | timtutt |
huggingface/peft | 685 | Matrix mistmatch when trying to adapt Falcon with QLoRA, how to fix? | ### System Info
```
(data_quality) brando9~ $ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 515.43.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 3455.484
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5599.81
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.25.0
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.25.1 pypi_0 pypi
[conda] numpy-base 1.25.0 py310hb5e798b_0
[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.2 py310_cu117 pytorch
[conda] torchtriton 2.0.0 py310 pytorch
[conda] torchvision 0. | https://github.com/huggingface/peft/issues/685 | closed | [] | 2023-07-11T20:01:37Z | 2023-07-24T00:11:02Z | null | brando90 |
huggingface/diffusers | 4,047 | How to set lora scale when loading a LoRA model? | Hey there, first of all thanks for your fantastic work!
I am loading LoRA weights, and I would like to set the scale of them being applied. Checking the code, it appears to be possible as shown [here](https://github.com/huggingface/diffusers/blob/fc7aa64ea8f5979b67bd730777e8e1c32e3adb05/src/diffusers/loaders.py#L1094).
How can we do it in practice? Is it possible to provide a small code snippet?
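For what it's worth, one documented approach in recent diffusers versions (assuming the LoRA was loaded with `pipe.load_lora_weights(...)`) is to pass the scale through `cross_attention_kwargs` at call time; the LoRA attention processors read it. A minimal sketch — the helper name is made up for illustration:

```python
# Sketch of controlling LoRA strength at inference time for a diffusers
# pipeline whose LoRA weights were loaded via `load_lora_weights`.
# The scale is forwarded through `cross_attention_kwargs`.

def generate_with_lora_scale(pipe, prompt, scale=0.5, **kwargs):
    """Call `pipe` with a given LoRA scale (0.0 disables the LoRA, 1.0 is full)."""
    return pipe(prompt, cross_attention_kwargs={"scale": scale}, **kwargs)
```

Usage would then look like `image = generate_with_lora_scale(pipe, "a photo of a cat", scale=0.7).images[0]`.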
Thank you so much! Really appreciate your help :) | https://github.com/huggingface/diffusers/issues/4047 | closed | [] | 2023-07-11T17:38:05Z | 2023-08-29T05:30:44Z | null | pietrobolcato |
huggingface/diffusers | 4,042 | How to combine the reference-only with inpainting and depth control? | ### Model/Pipeline/Scheduler description
Hi, I recently wanted to combine reference-only with image inpainting and depth control to replace the background of portrait images. However, I have no idea how to build this pipeline, as there is no reference-with-inpaint pipeline example. Could you please help me figure it out?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/diffusers/issues/4042 | closed | [] | 2023-07-11T12:17:24Z | 2023-07-14T06:12:29Z | null | AmberCheng |
huggingface/chat-ui | 340 | [WebSearch] "Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 1000 `inputs` tokens and 1024 `max_new_tokens`" | Hello there,
Title says it all.
We are not using any custom endpoints/models. We're just relying on the HuggingFace's API inferences.
Is there a way to increase/decrease the input tokens when using WebSearch (or even just to increase the maximum sum)? It works fine if `max_new_tokens` is set to 512, but that, obviously, cuts off any answer that goes beyond that number.
So far, I haven't found a good balance, nor a way to decrease the number of input tokens.
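For reference, chat-ui reads per-model generation parameters from the `MODELS` entry in `.env.local`; a hedged sketch of one entry (the numbers are purely illustrative, and this assumes a chat-ui version where `truncate` caps the input tokens sent to the endpoint while `max_new_tokens` caps the output):

```json
{
  "name": "...",
  "parameters": {
    "truncate": 1000,
    "max_new_tokens": 512,
    "temperature": 0.9
  }
}
```

The sum of `truncate` and `max_new_tokens` would then need to stay within the endpoint's total token limit.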
In advance, thanks for your answer!

| https://github.com/huggingface/chat-ui/issues/340 | closed | [
"question",
"models"
] | 2023-07-11T07:33:18Z | 2023-07-12T09:16:21Z | null | gollumeo |
huggingface/diffusers | 4,029 | How can I make diffuser pipeline to use .safetensors file for SDXL? | Cloning entire repo is taking 100 GB
How can I make the code below use a .safetensors file instead of the diffusers folder layout?
Let's say I have downloaded my safetensors file to path.safetensors.
How do I provide it?
The code below works, but we are cloning 100 GB instead of just a single 14 GB safetensors file, which is a waste of bandwidth.
**Also, how can I add a LoRA checkpoint to this pipeline? A LoRA checkpoint made by the Kohya script.**
```
import gradio as gr
from diffusers import DiffusionPipeline
import torch
import base64
from io import BytesIO
import os
import gc
from datetime import datetime
from share_btn import community_icon_html, loading_icon_html, share_js
# SDXL code: https://github.com/huggingface/diffusers/pull/3859
model_dir = '/workspace'
access_token = os.getenv("ACCESS_TOKEN")
if model_dir:
# Use local model
model_key_base = os.path.join(model_dir, "stable-diffusion-xl-base-0.9")
model_key_refiner = os.path.join(model_dir, "stable-diffusion-xl-refiner-0.9")
else:
model_key_base = "stabilityai/stable-diffusion-xl-base-0.9"
model_key_refiner = "stabilityai/stable-diffusion-xl-refiner-0.9"
# Use refiner (enabled by default)
enable_refiner = os.getenv("ENABLE_REFINER", "true").lower() == "true"
# Output images before the refiner and after the refiner
output_images_before_refiner = True
# Create public link
share = os.getenv("SHARE", "false").lower() == "true"
print("Loading model", model_key_base)
pipe = DiffusionPipeline.from_pretrained(model_key_base, torch_dtype=torch.float16, use_auth_token=access_token)
#pipe.enable_model_cpu_offload()
pipe.to("cuda")
# if using torch < 2.0
pipe.enable_xformers_memory_efficient_attention()
# pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
if enable_refiner:
print("Loading model", model_key_refiner)
pipe_refiner = DiffusionPipeline.from_pretrained(model_key_refiner, torch_dtype=torch.float16, use_auth_token=access_token)
#pipe_refiner.enable_model_cpu_offload()
pipe_refiner.to("cuda")
# if using torch < 2.0
pipe_refiner.enable_xformers_memory_efficient_attention()
# pipe_refiner.unet = torch.compile(pipe_refiner.unet, mode="reduce-overhead", fullgraph=True)
# NOTE: we do not have word list filtering in this gradio demo
is_gpu_busy = False
def infer(prompt, negative, scale, samples=4, steps=50, refiner_strength=0.3, num_images=1):
prompt, negative = [prompt] * samples, [negative] * samples
images_b64_list = []
for i in range(0, num_images):
images = pipe(prompt=prompt, negative_prompt=negative, guidance_scale=scale, num_inference_steps=steps).images
os.makedirs(r"stable-diffusion-xl-demo/outputs", exist_ok=True)
gc.collect()
torch.cuda.empty_cache()
if enable_refiner:
if output_images_before_refiner:
for image in images:
buffered = BytesIO()
image.save(buffered, format="JPEG")
img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
image_b64 = (f"data:image/jpeg;base64,{img_str}")
images_b64_list.append(image_b64)
images = pipe_refiner(prompt=prompt, negative_prompt=negative, image=images, num_inference_steps=steps, strength=refiner_strength).images
gc.collect()
torch.cuda.empty_cache()
# Create the outputs folder if it doesn't exist
for i, image in enumerate(images):
buffered = BytesIO()
image.save(buffered, format="JPEG")
img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
image_b64 = (f"data:image/jpeg;base64,{img_str}")
images_b64_list.append(image_b64)
# Save the image as PNG with unique timestamp
filename = f"stable-diffusion-xl-demo/outputs/generated_image_{timestamp}_{i}.png"
image.save(filename, format="PNG")
return images_b64_list
```
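As an aside, newer diffusers releases expose a `from_single_file` loader that can read one `.safetensors` checkpoint directly, and `load_lora_weights` for LoRA checkpoints (with Kohya-format support in recent versions). A sketch only — the paths are placeholders, and this assumes a diffusers version with SDXL single-file support:

```python
def load_sdxl_from_safetensors(checkpoint_path, lora_path=None):
    """Build an SDXL pipeline from a single .safetensors file (sketch only)."""
    # Imports are local so the sketch can be read without diffusers installed.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_single_file(
        checkpoint_path, torch_dtype=torch.float16
    )
    if lora_path is not None:
        # Recent diffusers versions accept Kohya-format LoRA files here.
        pipe.load_lora_weights(lora_path)
    return pipe.to("cuda")

# Usage (not run here):
# pipe = load_sdxl_from_safetensors("path.safetensors", "my_lora.safetensors")
```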
| https://github.com/huggingface/diffusers/issues/4029 | closed | [] | 2023-07-10T21:52:22Z | 2023-12-11T18:45:18Z | null | FurkanGozukara |
huggingface/chat-ui | 337 | Feature Request: Save messages and error message even if text generation endpoint fails | Situation: Text generation endpoint is not running. Then user sends a message.
Current Behavior: UI throws an error and saves conversation to mongodb like this, with an empty message list.
```
{
_id: ObjectId('64ac1abc2ac09222e24cc984'),
title: 'Untitled 5',
messages: [],
model: 'GPT',
createdAt: ISODate('2023-07-10T14:50:36.324Z'),
updatedAt: ISODate('2023-07-10T14:50:36.324Z'),
sessionId: '0048fb5c-a224-49c2-a7be-ea417defa6e2'
}
```
Desired behavior: UI throws an error and saves conversation to mongodb with the user's message and the error message inside.
```
{
_id: ObjectId('64ac1abc2ac09222e24cc984'),
title: 'Untitled 5',
messages: [
{
content: 'What is 2-2?',
from: 'user',
id: '874cfd40-2c61-49fe-b9f6-8b296a79ab6a',
},
{
from: 'assistant',
error: 'TypeError: fetch failed
at fetch (C:\chat-ui\node_modules\undici\index.js:109:13)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (/node_modules/@sveltejs/kit/src/runtime/server/fetch.js:32:10)
at async POST (/src/routes/conversation/[id]/+server.ts:90:16)
at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
at async Object.handle (/src/hooks.server.ts:66:20)
at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
at async file:///C:/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22 {
cause: Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1532:16) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 80
}
}
},
],
model: 'GPT',
createdAt: ISODate('2023-07-10T14:50:36.324Z'),
updatedAt: ISODate('2023-07-10T14:50:36.324Z'),
sessionId: '0048fb5c-a224-49c2-a7be-ea417defa6e2'
}
``` | https://github.com/huggingface/chat-ui/issues/337 | closed | [
"enhancement",
"back",
"p2"
] | 2023-07-10T15:18:52Z | 2023-10-10T11:16:22Z | 1 | loganlebanoff |
huggingface/transformers.js | 187 | [Question] Performance and size of models | Great project, tons of potential! I have a general question I thought I may ask. Using the convert.py scripts, I took a Pytorch model and converted it to ONNX. With quantizing, I get a full 428MB model and a 110MB _quantized model. Now how does it work for the user exactly? Does the user automatically download the _quantized one?
Would this be accurate:
- WASM downloaded/loaded (e.g., 15MB)
- Transformers.js runs the core
- Model downloaded/load (e.g., 110MB)
- Model starts and runs
- Result is returned
- (next time it is called, WASM is reloaded and model is cached)
125MB is still quite big for the web: [https://huggingface.co/plopop/industry-classification-api-onnx](https://huggingface.co/plopop/industry-classification-api-onnx)
With something like [https://huggingface.co/Xenova/mobilebert-uncased-mnli](https://huggingface.co/Xenova/mobilebert-uncased-mnli) (27MB), running everything within a worker takes 8-15 seconds depending on the input from our end right now - are there any other performance gains to be had, or would the only way be to optimize the source model further? | https://github.com/huggingface/transformers.js/issues/187 | closed | [
"question"
] | 2023-07-10T14:39:31Z | 2023-07-11T17:06:38Z | null | sabatale |
huggingface/chat-ui | 336 | how to work in chat-ui with non streaming data? | I was working in a chat-ui by providing my endpoints only which is hosted in a localhost:8000/generate. I dont have any model but endpoints only so can you provide me a solution for working in only endpoints and non streaming data( application/json or application/plain). I have model hosted in this server.
in modelEndpoint.ts
if (!model.endpoints) {
return {
url: `http://10.0.2.27:8000/generate`,
// authorization: `Bearer ${HF_ACCESS_TOKEN}`,
// weight: 1,
};
}
Then I get the following error:
Error: An error occurred while fetching the blob
at request (file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@huggingface/inference/dist/index.mjs:89:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Proxy.textGeneration (file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@huggingface/inference/dist/index.mjs:457:15)
at async Module.generateFromDefaultEndpoint (/src/lib/server/generateFromDefaultEndpoint.ts:22:28)
at async POST (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/src/routes/conversation/[id]/summarize/+server.ts:30:26)
at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)
at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)
at async Object.handle (/src/hooks.server.ts:66:20)
at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)
at async file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22
| https://github.com/huggingface/chat-ui/issues/336 | closed | [] | 2023-07-10T13:43:17Z | 2023-07-11T08:29:40Z | null | swikrit21 |
huggingface/transformers.js | 186 | [Question] How to interpret boxes in object detection example ? | hi,
can anyone help me how to interpret boxes while using object detection with this model "Xenova/detr-resnet-50".
i want to crop out the detected object from the image using sharp (nodejs) ? how can i pass these boxes to sharp resize function ?
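In case it helps, the detection results from this pipeline carry a `box` of the form `{ xmin, ymin, xmax, ymax }` (pixel coordinates by default; fractions of the image size if `percentage: true` was passed), while sharp crops with `extract({ left, top, width, height })`, so the box just needs converting. A sketch — the function name is made up, and the sharp call is shown but not run:

```javascript
// Convert a transformers.js detection box into the region object that
// sharp's extract() expects. Assumes pixel coordinates; clamps to the image.
function boxToExtractRegion(box, imageWidth, imageHeight) {
  const left = Math.max(0, Math.round(box.xmin));
  const top = Math.max(0, Math.round(box.ymin));
  const right = Math.min(imageWidth, Math.round(box.xmax));
  const bottom = Math.min(imageHeight, Math.round(box.ymax));
  return { left, top, width: right - left, height: bottom - top };
}

// Usage sketch (not run here):
// const region = boxToExtractRegion(detection.box, metadata.width, metadata.height);
// await sharp("input.jpg").extract(region).toFile("cropped.jpg");
```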
| https://github.com/huggingface/transformers.js/issues/186 | closed | [
"question"
] | 2023-07-10T12:59:22Z | 2023-07-11T00:55:13Z | null | geminigeek |
huggingface/chat-ui | 335 | Bug: Unexpected execution result on Firefox browser with Chat-UI ver. 0.3.0 | I recently installed the 0.3.0 version of the HF Chat-UI software.
I then performed an evaluation using the **HuggingFaceH4/starchat-beta** model.
At that time, I typed the question "_Could you tell me about the weather in Toyko City in Japan on July-10-2023_?" and ran it.
Unfortunately, the results varied between browsers.
In the Firefox browser, the result is displayed normally.
However, the following error occurs in the Chrome browser.
* **Error message:**
```
403 You don't have access to this conversation.
If someone gave you this link, ask them to use the 'share' feature instead.
```
I was wondering if anyone else is experiencing the same issue, any comments are welcome.
| https://github.com/huggingface/chat-ui/issues/335 | closed | [
"support"
] | 2023-07-10T04:40:40Z | 2023-09-11T09:32:14Z | 2 | leemgs |
huggingface/chat-ui | 334 | Chat-ui is starting, but nothing happends | # Description:
When starting the Chat-ui, the initialization process begins as expected but stalls indefinitely, without any evident progress. The application doesn't crash or give any errors. This issue occurs across multiple attempts, regardless of browser type or device.
# Steps to reproduce:
- Install prerequisites
- Fill in the .env.local file
- Launch a DB container for chat persistence
- Start Chat-UI
- Open a browser (e.g., Chrome, Firefox, Safari)
- Navigate to the Chat-ui web address.
- Observe the behavior.
# Expected result:
After navigating to the url, the Chat-ui should initialize and allow for the use of its various functionalities.
# Actual result:
The UI remains in a state of 'loading' indefinitely without any change, timing out after some time.
# Environment:
This issue was reproduced on:
1. Operating System: Ubuntu 22.04, Fedora Workstation 38
2. Node Version: v18.16.1
3. NPM Version: 9.5.1
Additional context:
- No error messages are displayed.
- There is no notable console log information.
- Network status is stable during the process.
- Similar behavior noticed on Fedora.
- Refreshing the browser, clearing the cache, or using a different browser does not resolve the issue.
- Firewall is disabled on host
If you need any further information, I would be glad to provide it. Thanks in advance! | https://github.com/huggingface/chat-ui/issues/334 | closed | [
"support"
] | 2023-07-09T13:53:34Z | 2023-09-11T09:31:49Z | 2 | Notespeak |
huggingface/diffusers | 3,988 | how to use part of the controlnet models with a "StableDiffusionControlNetInpaintPipeline" object? | I created a "StableDiffusionControlNetInpaintPipeline" object with a list of controlnet models such as "canny","openpose", but sometimes I want to use canny only or openpose only.Is there's a way to reuse part of the controlnet models with a already inited "StableDiffusionControlNetInpaintPipeline" object? | https://github.com/huggingface/diffusers/issues/3988 | closed | [] | 2023-07-07T09:18:18Z | 2023-08-01T04:51:41Z | null | AdamMayor2018 |
huggingface/optimum-habana | 292 | Where in the directory "/tmp/tst-summarization", is the summarization output stored? | ### System Info
```shell
Optimum Habana : 1.6.0
SynapseAI : 1.10.0
Docker Image : Habanaยฎ Deep Learning Base AMI (Ubuntu 20.04)
Volume : 1000 GiB
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Start an EC2 instance with DL1 Resource and this image : Habanaยฎ Deep Learning Base AMI (Ubuntu 20.04)
Run these commands
a. docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.10.0/ubuntu20.04/habanalabs/pytorch-installer-2.0.1:latest
b. git clone https://github.com/huggingface/optimum-habana.git
c. pip install optimum[habana]
d. cd examples
e. cd summarization
f. pip install -r requirements.txt
python run_summarization.py \
--model_name_or_path t5-small \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--predict_with_generate \
--use_habana \
--use_lazy_mode \
--use_hpu_graphs_for_inference \
--gaudi_config_name Habana/t5 \
--ignore_pad_token_for_loss False \
--pad_to_max_length \
--save_strategy epoch \
--throughput_warmup_steps 3
### Expected behavior
Need a file with the summarized text and not just the evaluation metrics | https://github.com/huggingface/optimum-habana/issues/292 | closed | [
"bug"
] | 2023-07-07T03:24:31Z | 2023-07-18T08:30:21Z | null | Abhaycnvrg |
huggingface/trl | 503 | How to get labels into the SFTTrainer | Hi!
I am trying to prompt-tune medalpaca 7b using prompt tuning or LoRA with the SFTTrainer. I have a prompt and I have labels that I want the model to output. I have made a Dataset class that inherits from torch.utils.data.Dataset to prepare my inputs, but I am wondering if there is some way to make the trainer use the datapoint["labels"] part during training:
```python
class DiagnosesDataset(torch.utils.data.Dataset):
    def __init__(self, instances, tokenizer):
        self.instances = instances
        #self.labels = labels
        self.tokenizer = tokenizer

    def __getitem__(self, idx):
        item = {}
        prompt = self.instances["prompt"][idx]
        labels = self.instances["label"][idx]
        item = self.tokenize(prompt + labels)
        tokenized_instruction = self.tokenize(prompt)
        label_instruction = self.tokenizer(labels)
        i = len(tokenized_instruction["input_ids"])
        item["labels"][i:] = label_instruction["input_ids"]
        return item

    def tokenize(self, prompt):
        result_prompt = self.tokenizer(prompt,
                                       truncation=True,
                                       max_length=2048,
                                       padding=False,
                                       return_tensors=None)
        result_prompt["labels"] = [-100] * len(result_prompt["input_ids"])
        return result_prompt

    def __len__(self):
        return len(self.instances)
```
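As context for what the class above is doing: the usual trick is that label positions covering the prompt are set to -100 so the cross-entropy loss ignores them, and only the answer tokens contribute. Stripped of tokenizer details, the idea is just this sketch (the helper name is made up):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def build_masked_labels(prompt_ids, answer_ids):
    """Concatenate prompt and answer; mask the prompt portion of the labels."""
    input_ids = list(prompt_ids) + list(answer_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return input_ids, labels
```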
I am calling the trainer like this:
```python
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    packing=True,
    data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8,
                                         return_tensors="pt",
                                         padding="max_length", max_length=2048),
    args=training_arguments)
trainer.train()
```
This is the error I am currently getting, but I am not sure whether it has something to do with the SFTTrainer:
Traceback (most recent call last):
  /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_peft.py:544 in <module>
      541
      542
      543     args=parser.parse_args()
    ❱ 544     run()
      545     #main()
      546
      547     #all_data, prompts, golds=preprocess("./dataset.pkl")
  /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_peft.py:153 in run
      150         packing=True,
      151         data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multipl...
      152         args=training_arguments)
    ❱ 153     trainer.train()
      154
      155     logging.info("Run Train loop")
      156     #model_updated=train(model, dataset, args.seed, args.batch_size, a...
  /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/python3.9/site-packages/transformers/trainer.py:1537 in train
      1534         inner_training_loop = find_executable_batch_size(
      1535             self._inner_training_loop, self._train_batch_size, args.a...
      1536         )
    ❱ 1537         return inner_training_loop(
      1538             args=args,
      1539             resume_from_checkpoint=resume_from_checkpoint,
      1540             trial=trial,
  /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/python3.9/site-packages/transformers/trainer.py:1802 in _inner_training_loop
      1799             self.control = self.callback_handler.on_step_begi...
      1800
      1801             with self.accelerator.accumulate(model):
    ❱ 1802 | https://github.com/huggingface/trl/issues/503 | closed | [] | 2023-07-06T22:19:21Z | 2023-08-14T15:05:10Z | null | MaggieK410 |
huggingface/transformers.js | 182 | Website and extension using same model | Per the chrome extension example, you pack the model with the extension. Is there a way for a website and chrome extension to use the same cached model? If my project has both a website and extension, I hope they could use a single model instead of having store 2 on the user's machine.
| https://github.com/huggingface/transformers.js/issues/182 | open | [
"question"
] | 2023-07-06T17:43:48Z | 2023-07-16T17:26:09Z | null | escottgoodwin |
huggingface/chat-ui | 331 | How to send model name as a input to API endpoint | I want to host two models and query them by switching between . The problem is I'm not able to send model name as a parameter from UI to API endpoints.
Can someone help on this? | https://github.com/huggingface/chat-ui/issues/331 | closed | [
"question"
] | 2023-07-06T13:04:04Z | 2023-09-18T14:03:18Z | null | sankethgadadinni |
huggingface/transformers | 24,685 | How to get the last 4 Hidden states from the feature extraction pipeline | I have defined a pipeline for Feature extraction
```
# Create the pipeline
p = pipeline(
task="feature-extraction",
tokenizer="microsoft/biogpt",
model="microsoft/biogpt",
framework="pt",
device=0
)
bio_gpt = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states= True)
bio_gpt = bio_gpt.to(device)
```
and I want to extract the embeddings of the last token of the last hidden state, and the average pooling of the last 4 layers. Using the pipeline approach, I am doing it like this:
_Last token of the last hidden state:_
```
def extract_last_token(last_hidden_states):
last_hidden_states = np.array(last_hidden_states)
return last_hidden_states[:,-1,:]
# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
# Extract the last token of the last hidden state
embeddings = [extract_last_token(hidden_state) for hidden_state in results]
# Create a DataFrame to store the results
df2["embeddings2"] = embeddings
```
_Average pooling of the last 4 layers:_
```
def mean_pooling(last_hidden_states, ):
last_4_layers = last_hidden_states[-4:] # Consider the last 4 layers
return np.mean(last_4_layers, axis=1)
# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
features = np.squeeze(results)
print(features.shape)
# Perform mean pooling on the last hidden states
embeddings = [mean_pooling(hidden_state) for hidden_state in results]
# Create a DataFrame to store the results
df2["embeddings4"] = embeddings
```
The issues are:
1. When I extract the embeddings of the 4 last layers or the 12 last layers the embeddings are always the same

2. The embeddings of the last token of the last hidden state are different from the same embeddings using the "manual" method

Weirdly, in the above picture two of the embeddings are the same but with opposite row ids; this indicates another problem I don't see. If you can spot it, I'd appreciate it.
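A note on a likely cause, for reference: the feature-extraction pipeline returns only the final hidden state, shaped `(batch, seq_len, dim)` — there is no layer axis in its output, so slicing `[-4:]` selects the last four *positions*, not the last four layers. Pooling over layers needs the model's `hidden_states` (via `output_hidden_states=True`, as in the `bio_gpt` setup above). A numpy sketch of that pooling, assuming per-layer arrays of shape `(batch, seq_len, dim)`:

```python
import numpy as np

def pool_last_four_layers(hidden_states):
    """Mean-pool over the last 4 layers and over tokens.

    hidden_states: sequence of per-layer arrays (e.g. `outputs.hidden_states`),
    each of shape (batch, seq_len, dim). Returns an array of shape (batch, dim).
    """
    stacked = np.stack(hidden_states[-4:], axis=0)  # (4, batch, seq_len, dim)
    return stacked.mean(axis=(0, 2))                # average layers and tokens
```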
Here is the code of how I did the manual version
```
output = bio_gpt(**model_inputs)
# Get the last state
last_state = output.last_hidden_state
cls_embeddings = last_state[:, -1, :]
# Print the last state
print(cls_embeddings)
# Assign cls_embeddings to "embeddings4" column in df2
df2["embeddings_manual"] = [cls_embeddings[i].cpu().detach().numpy() for i in range(len(df2))]
``` | https://github.com/huggingface/transformers/issues/24685 | closed | [] | 2023-07-06T08:45:08Z | 2023-08-14T15:02:35Z | null | Luke-4 |
huggingface/setfit | 393 | AttributeError: 'list' object has no attribute 'shuffle' | I am getting the "AttributeError: 'list' object has no attribute 'shuffle'" error when I try to use setfit.
The dataset has two columns: one is the text and the second is the label column. | https://github.com/huggingface/setfit/issues/393 | closed | [
"question"
] | 2023-07-05T16:47:17Z | 2023-12-05T14:41:13Z | null | gpirge |
huggingface/datasets | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | ### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available.
Somehow it worked a few times but mostly this makes the datasets library much more cumbersome to work with because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.
I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.
Let me know if you have ideas how to resolve it!
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
def gen():
for row in range(10000):
yield {"i": np.random.rand(512, 512, 3)}
Dataset.from_generator(gen)
# -> 90% of the time gets stuck around 1000 rows
```
### Expected behavior
Should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
| https://github.com/huggingface/datasets/issues/6008 | closed | [] | 2023-07-05T16:06:48Z | 2023-07-10T13:46:39Z | 3 | andreemic |
huggingface/dataset-viewer | 1,482 | diagnose why the mongo server uses so much CPU | we have many alerts on the use of CPU on the mongo server.
```
System: CPU (User) % has gone above 95
```
Why? | https://github.com/huggingface/dataset-viewer/issues/1482 | closed | [
"question",
"infra",
"improvement / optimization",
"P1"
] | 2023-07-04T16:04:06Z | 2024-02-06T14:49:20Z | null | severo |
huggingface/text-generation-inference | 536 | How to enable vllm | ### Feature request
How to enable vllm
### Motivation
How to enable vllm
### Your contribution
How to enable vllm | https://github.com/huggingface/text-generation-inference/issues/536 | closed | [] | 2023-07-04T05:20:21Z | 2023-07-04T10:56:29Z | null | lucasjinreal |
huggingface/transformers.js | 180 | [Question] Running transformers.js in a browser extension | Hello,
I'm trying to build a chrome extension that uses Transformers.js. When I try to import it in the background worker script, I first get an error that says process is not available, because apparently someone decided browser plugins shouldn't use process.env anymore. I found a solution that said to put
```
define: {
'process.env': {}
}
```
in my vite.config.js, which worked to get me past that, but the next error is:
```
Error: Dynamic require of "../bin/napi-v3/undefined/undefined/onnxruntime_binding.node" is not supported
```
Has anyone gotten this working in a browser environment yet? I saw a video about tensorflow.js in the browser, but I'd prefer to use transformers.js because you already provided me with an example of how to get it to behave like Sentence Transformers. :) | https://github.com/huggingface/transformers.js/issues/180 | closed | [
"question"
] | 2023-07-04T01:09:29Z | 2023-07-16T15:58:30Z | null | davidtbo |
huggingface/datasets | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | ### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:
- `tokenize()` runs fine
- `group_text()` runs fine
Everytime, on step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1, it runs
- training without interleave on dataset 2, it runs
- training without `.to_iterable_dataset()`, it hangs then crash
- training without group_text() and padding to max_length seemed to fix the issue, but who knows if this was just because it was an issue that would come much later in terms of steps.
I might have coded something wrong, but I can't see what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
dataset = load_dataset(
"text",
data_files={"train": [path]},
split=f"train[{percent}]"
)
dataset = dataset.map(
lambda examples: tokenize(examples["text"]),
batched=True,
num_proc=num_proc,
)
dataset = dataset.map(
group_texts,
batched=True,
num_proc=num_proc,
desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
remove_columns=["text"]
)
print(len(dataset))
return dataset.to_iterable_dataset()
```
I hardcoded group_text:
```py
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
# We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
total_length = (total_length // 512) * 512
# Split by chunks of max_len.
result = {
k: [t[i: i + 512] for i in range(0, total_length, 512)]
for k, t in concatenated_examples.items()
}
# result = {k: [el for el in elements if el] for k, elements in result.items()}
return result
```
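As a self-contained illustration (pure Python, no `datasets` dependency, chunk size hardcoded to 512 as in the function above): any batch whose concatenated length is below the chunk size comes back as empty lists, and such empty examples may be what later trips up the collator:

```python
from itertools import chain

CHUNK = 512

def chunk_batch(examples):
    # Standalone copy of the chunking logic above, hardcoded to 512.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    total_length = (total_length // CHUNK) * CHUNK
    return {
        k: [t[i: i + CHUNK] for i in range(0, total_length, CHUNK)]
        for k, t in concatenated.items()
    }

# A batch with fewer than 512 concatenated tokens is dropped entirely:
print(chunk_batch({"input_ids": [[1, 2, 3], [4, 5]]}))  # {'input_ids': []}

# 600 tokens yield exactly one full 512-token chunk:
chunks = chunk_batch({"input_ids": [list(range(600))]})["input_ids"]
print(len(chunks), len(chunks[0]))  # 1 512
```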
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
[train1, train2],
probabilities=[0.8, 0.2],
seed=42
)
eval_dataset = interleave_datasets(
[dev1, dev2],
probabilities=[0.8, 0.2],
seed=42
)
```
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly].
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04 | https://github.com/huggingface/datasets/issues/6003 | open | [] | 2023-07-03T17:15:31Z | 2023-07-03T17:15:31Z | 0 | PonteIneptique |
huggingface/dataset-viewer | 1,472 | How to show fan-in jobs' results in response ("pending" and "failed" keys) | In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key):
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
}
```
and for dataset-level it also has `pending` and `failed` keys:
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
"url": "https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet",
"filename": "duorc-test.parquet",
"size": 6136591
},
... # list of parquet files
],
"pending": [],
"failed": []
}
```
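For context, a client consuming this response has to interpret those two keys itself before trusting the file list. Roughly like this (hypothetical sketch; the dict stands in for the parsed JSON above):

```python
# Hypothetical client-side check: the parquet file list is only complete
# once no config is still pending and none has failed.
def is_complete(resp):
    return not resp.get("pending") and not resp.get("failed")

response = {
    "parquet_files": [{"dataset": "duorc", "config": "ParaphraseRC"}],
    "pending": [],
    "failed": [],
}
print(is_complete(response))  # True
print(is_complete({"parquet_files": [], "pending": [{"kind": "config-parquet"}], "failed": []}))  # False
```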
To me, undocumented `"pending"` and `"failed"` keys look a bit too technical and unclear.
What we can do:
* document what these keys mean
* don't document them, but for this kind of endpoint only show examples where all levels are specified (currently that's not the case), so that no example returns the `pending` and `failed` fields
* anything else? @huggingface/datasets-server | https://github.com/huggingface/dataset-viewer/issues/1472 | open | [
"question",
"api",
"P2"
] | 2023-07-03T16:49:10Z | 2023-08-11T15:26:24Z | null | polinaeterna |
huggingface/blog | 1,281 | How to push or share lora adapter to hugging face hub? | Hi, I trained a Falcon model and already set the `push_to_hub` parameter in the training arguments, but it is not working.
```
from transformers import TrainingArguments
output_dir = "chatb_f"
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
optim = "paged_adamw_32bit"
save_steps = 60
logging_steps = 10
learning_rate = 2e-4
max_grad_norm = 0.3
max_steps = 60
warmup_ratio = 0.03
lr_scheduler_type = "constant"
training_arguments = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=per_device_train_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
optim=optim,
save_steps=save_steps,
logging_steps=logging_steps,
learning_rate=learning_rate,
fp16=True,
max_grad_norm=max_grad_norm,
max_steps=max_steps,
warmup_ratio=warmup_ratio,
group_by_length=True,
lr_scheduler_type=lr_scheduler_type,
push_to_hub = True
)
from trl import SFTTrainer
max_seq_length = 512
trainer = SFTTrainer(
model=model,
train_dataset=dataset,
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=max_seq_length,
tokenizer=tokenizer,
args=training_arguments,
)
```
| https://github.com/huggingface/blog/issues/1281 | open | [] | 2023-07-01T13:56:47Z | 2023-07-01T13:57:40Z | null | imrankh46 |
huggingface/diffusers | 3,918 | How to control the position of an object in an image using text in a txt2img model? | How to control the position of an object in an image using text in a txt2img model? I know this is easy to achieve in an img2img model, but how can it be done in a txt2img model?
Or, how can a model be fine-tuned to achieve this effect? For example, specifying x=0, y=1, which corresponds to the top-left corner.
I have tried similar approaches, but they are not sensitive to the position. I suspect it may be due to insensitivity to the text input. I tried using compel to enhance the positional features, but still couldn't control the position. Do I need to retrain the text_encoder related part for this?
In my fine-tuning code, I commented out the no_grad parts for text_encoder and others. Is this correct, and will it automatically train the text_encoder?
Thank you! | https://github.com/huggingface/diffusers/issues/3918 | closed | [
"stale"
] | 2023-07-01T02:44:24Z | 2023-08-08T15:03:15Z | null | XiaoyuZhuang |
huggingface/dataset-viewer | 1,464 | Change the way we represent ResponseAlreadyComputedError in the cache | When a "parallel" step has already been computed, an error is stored in the cache with the `ResponseAlreadyComputedError` error code and HTTP status 500 (i.e. if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed).
But it makes it hard to monitor the "true" errors. If we follow the analogy with the HTTP status codes, it should be 3xx instead of 5xx, i.e. a redirection to another resource.
I don't know how we should change this though. Let's put ideas in the issue. | https://github.com/huggingface/dataset-viewer/issues/1464 | closed | [
"question",
"improvement / optimization",
"P2"
] | 2023-06-30T18:13:34Z | 2024-02-23T09:56:05Z | null | severo |
huggingface/transformers.js | 176 | [Question] Embeddings for the Entire Document | <!-- QUESTION GOES HERE -->
Hi Thanks for all the effort, I really appreciate it. I enjoy coding in JS and do all things in JS.
Is it a good idea to load the entire json document to get embeddings? What tokenizer should I choose? I have a tone of valuable information in my key and value pairs? or should I craft a sentence from the document?
```json
{
"id": 2053926,
"city": "New York",
"user_id": 3578165,
"price": 75,
"native_currency": "USD",
"price_native": 75,
"price_formatted": "$75",
"lat": 40.854397081884706,
"lng": -73.93876393071385,
"country": "United States",
"name": "air conditioned room w/ great view",
"smart_location": "New York, NY",
"has_double_blind_reviews": false,
"instant_bookable": false,
"bedrooms": 1,
"beds": 1,
"bathrooms": 1,
"market": "New York",
"min_nights": 1,
"neighborhood": "Washington Heights",
"person_capacity": 3,
"state": "NY",
"zipcode": "10033",
"user": {
"user": {
"id": 3578165,
"first_name": "Benjamin",
"has_profile_pic": true
}
},
"address": "Pinehurst Avenue, New York, NY 10033, United States",
"country_code": "US",
"cancellation_policy": "flexible",
"property_type": "Apartment",
"reviews_count": 14,
"room_type": "Private room",
"room_type_category": "private_room",
"picture_count": 18,
"_geoloc": {
"lat": 40.854397081884706,
"lng": -73.93876393071385
},
"objectID": "507205000"
}
``` | https://github.com/huggingface/transformers.js/issues/176 | closed | [
"question"
] | 2023-06-30T16:20:37Z | 2023-06-30T22:43:03Z | null | hadminh |
huggingface/sentence-transformers | 2,247 | how to tune hyperparameters using optuna or raytune | I want to finetune the MiniLM model and tune the hyperparameters of the same, but the model.fit function doesn't return any loss. Nor does it shows any performance metrics while training the model. What do you suggest in this case? | https://github.com/huggingface/sentence-transformers/issues/2247 | open | [] | 2023-06-30T13:16:04Z | 2023-06-30T13:16:04Z | null | nikshrimali |
huggingface/diffusers | 3,914 | how to fine-tuning the sd model in low resolutions | When fine-tuning the stable diffusion model, there is a parameter called 'resolution' which, if set to a value like 128 or 256 to reduce GPU memory usage, could potentially have negative effects on training performance and results.
Would setting the resolution to a value other than 512, such as 128 or 256, have any adverse impact on training effectiveness and the final results?
Is there a way to modify the pre-trained model's resolution to 128 or 256, or do I need to train a separate low-resolution version of the model?
I have experimented with different resolutions, and it seems that setting the resolution to 512 produces the best results. Training with lower resolutions tends to generate complex and messy outputs.
I couldn't find any similar issues on GitHub, as most discussions focus on super-resolution. Thank you for your response! | https://github.com/huggingface/diffusers/issues/3914 | closed | [
"stale"
] | 2023-06-30T12:42:12Z | 2023-08-08T15:03:16Z | null | XiaoyuZhuang |
huggingface/optimum | 1,148 | Falcon-40b-instruct on Runpod | ### System Info
```shell
2 x A100 80GB
32 vCPU 251 GB RAM
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "What does a raindrop feel when it hits the sea?:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Expected behavior
Expected it to run smoothly and give an output.
Error:
```
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
```
Setting `pad_token_id` to `eos_token_id`:11 for open-end generation. | https://github.com/huggingface/optimum/issues/1148 | closed | [
"bug"
] | 2023-06-29T18:48:05Z | 2023-06-30T15:39:29Z | 3 | Mrin7 |
huggingface/text-generation-inference | 509 | Question: How to estimate memory requirements for a certain batch size/ | I was just wondering how the GPU memory requirements vary depending on model size/batch size of request/max tokens. In doing some experiments where I needed the server to keep running for a long time, I found that it often ran out of memory and shut down - is there a way to estimate the memory footprint based on these variables? | https://github.com/huggingface/text-generation-inference/issues/509 | closed | [] | 2023-06-29T15:39:51Z | 2023-07-03T01:41:02Z | null | vaishakkrishna |
huggingface/transformers.js | 171 | [Doc request] Add an example guide of how to use it in Svelte (and deploy to HF Spaces) | Similar to the cool React guide, would be awesome to showcase how to use transformers.js from Svelte (and how to deploy the resulting app to Spaces)
No need to do a SvelteKit version IMO, Svelte would be sufficient
Maybe a good first issue for the community? | https://github.com/huggingface/transformers.js/issues/171 | open | [
"enhancement",
"help wanted",
"good first issue"
] | 2023-06-29T10:25:10Z | 2023-08-21T20:36:59Z | null | julien-c |
huggingface/optimum | 1,145 | How to use mean pooling with ONNX export with optimum-cli | ### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
### Who can help?
@michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The Model card of paraphrase-MiniLM-L3-v2 at HuggingFace mentions that
**Without [sentence-transformers](https://www.sbert.net/), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.**
How to do this using the ONNX model generated using the optimum-cli?
Can we do this while generating the ONNX model?
For example, the **txtai** library does this (see https://github.com/neuml/txtai/blob/master/examples/18_Export_and_run_models_with_ONNX.ipynb):
```
onnx = HFOnnx()
embeddings = onnx("sentence-transformers/paraphrase-MiniLM-L6-v2", "pooling", "embeddings.onnx", quantize=True)
```
Or does this need to be done after the ONNX model is generated (post-processing)?
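For reference, the pooling in question is just a masked mean over the token embeddings the exported ONNX model returns. A dependency-free sketch with toy numbers standing in for the model output (`token_embeddings` plays the role of `last_hidden_state`):

```python
# Masked mean pooling over token embeddings, in the spirit of what
# sentence-transformers does. `attention_mask` marks real tokens (1) vs
# padding (0); padded positions must not contribute to the average.
def mean_pooling(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            summed = [s + v for s, v in zip(summed, vec)]
            count += 1
    return [s / max(count, 1) for s in summed]

# Toy output: 3 tokens (the last one is padding), hidden size 2.
emb = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pooling(emb, mask))  # [2.0, 3.0]
```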
### Expected behavior
Support for pooling in optimum_cli | https://github.com/huggingface/optimum/issues/1145 | open | [
"bug"
] | 2023-06-29T05:57:35Z | 2023-06-29T05:57:35Z | null | aunwesha |
huggingface/chat-ui | 328 | Is there a way to see all of a user's history? | I want to see the chat history of all my users. | https://github.com/huggingface/chat-ui/issues/328 | closed | [
"question"
] | 2023-06-29T05:01:55Z | 2023-07-03T10:43:53Z | null | ildoonet |
huggingface/chat-ui | 327 | Tokens limits issue | Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 603 `inputs` tokens and 1024 `max_new_tokens`
When deployed, the UI works fine for 2 or 3 prompts; then on every prompt we try we get a red line on top with a pop-up showing this message. How can we remove this limitation in the code?
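For reference, the error is the sum constraint `inputs + max_new_tokens <= 1512`. One client-side workaround is to clamp `max_new_tokens` to whatever room is left instead of requesting a fixed 1024 (hypothetical sketch; the 1512 limit comes from the error message above):

```python
# Clamp max_new_tokens so input tokens + generated tokens fit the window.
# CONTEXT_WINDOW comes from the error message; `minimum` is an arbitrary
# floor so very long inputs still get some generation budget.
CONTEXT_WINDOW = 1512

def clamp_max_new_tokens(n_input_tokens, requested=1024, minimum=32):
    allowed = CONTEXT_WINDOW - n_input_tokens
    return max(min(requested, allowed), minimum)

print(clamp_max_new_tokens(603))   # 909  (603 + 909 == 1512, fits)
print(clamp_max_new_tokens(100))   # 1024 (plenty of room left)
```

Note that the `minimum` floor means an extremely long input can still exceed the window; at that point the input itself needs truncation.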
| https://github.com/huggingface/chat-ui/issues/327 | open | [
"question",
"back"
] | 2023-06-28T18:09:19Z | 2023-09-18T14:03:59Z | null | Billyroot |
huggingface/diffusers | 3,890 | How to apply the schedulers in diffusers to original SD | Hi! Thanks for this great work! Diffusers helps me a lot in many aspects!
Because of my recent work, I would like to know whether the schedulers in diffusers can be used directly in the original SD. If yes, what should I do?
Any response will be greatly appreciated! Again, thank you all for this convenient framework! | https://github.com/huggingface/diffusers/issues/3890 | closed | [
"stale"
] | 2023-06-28T11:02:41Z | 2023-08-05T15:04:00Z | null | volcverse |
huggingface/dataset-viewer | 1,446 | Add fields `viewer` and `preview` to /is-valid | For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid.
We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code and also in @lewtun's evaluator if I remember correctly. | https://github.com/huggingface/dataset-viewer/issues/1446 | closed | [
"question",
"api"
] | 2023-06-28T09:19:56Z | 2023-06-29T14:13:16Z | null | severo |
huggingface/dataset-viewer | 1,445 | Remove `.valid` from `/valid` endpoint? | We recently added two fields to `/valid`:
- `viewer`: all the datasets that have a valid dataset viewer
- `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview
And the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets.
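For clients that still need the old field after a removal, recomputing it is trivial, since it is just the union of the two new fields (a minimal sketch with made-up dataset names):

```python
# If the legacy `valid` field is removed, clients can rebuild it themselves:
# it is the union of the `viewer` and `preview` lists.
response = {
    "viewer": ["dataset-a", "dataset-b"],
    "preview": ["dataset-c"],
}

valid = sorted(set(response["viewer"]) | set(response["preview"]))
print(valid)  # ['dataset-a', 'dataset-b', 'dataset-c']
```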
Should we remove it, as it doubles the size of the response and increases the response time, with no benefit? cc @huggingface/datasets-server
Note that it's used in the notebooks (https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code), for example, so it is a breaking change.
I would vote in favor of removing it, and updating the notebooks (and the docs obviously). | https://github.com/huggingface/dataset-viewer/issues/1445 | closed | [
"question",
"api"
] | 2023-06-28T09:17:13Z | 2023-07-26T15:47:35Z | null | severo |
huggingface/diffusers | 3,882 | How to use models like chilloutmix to do inpainting task? | I tried what https://huggingface.co/docs/diffusers/api/diffusion_pipeline mentions:
```py
import time

import numpy as np
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

# RawSeger, GridPainter, draw_box and show_image are project-local helpers.

text2img = StableDiffusionPipeline.from_pretrained("/data/cx/ysp/aigc-smart-painter/models/chilloutmix_NiPrunedFp32Fix")
inpaint = StableDiffusionInpaintPipeline(**text2img.components)
seger = RawSeger()
REST_API_URL = 'http://localhost:9900/sd/inpaint'
painter = GridPainter()
img_path = "/data/cx/ysp/aigc-smart-painter/assets/cloth1.jpg"
image = Image.open(img_path)
box = [220, 20, 500, 320]
new_image = draw_box(np.array(image), cords=box, color=(255, 0, 0), thickness=2)
show_image(new_image)
mask = seger.prompt_with_box(image, box=box, reverse=False)
mask = Image.fromarray(mask)
show_image(mask)
end = time.time()
prompt = "best quality,symmetry realistic,real life,photography,masterpiece,8K,HDR,highres,1 gril, looking at viewer"
images = inpaint(prompt=prompt, image=image, mask_image=mask, num_images_per_prompt=1,
                 num_inference_steps=50, guidance_scale=7.5,)
painter.image_grid(images, rows=1, cols=len(images) // 1)
painter.image_show()
print("finished")
```
I got this error:
```
expects 4 but received `num_channels_latents`: 4 + `num_channels_mask`: 1 +
`num_channels_masked_image`: 4 = 9. Please verify the config of `pipeline.unet`
or your `mask_image` or `image` input.

Process finished with exit code 1
```
How can I convert a model like chilloutmix to do an inpainting task?
Thank you!
| https://github.com/huggingface/diffusers/issues/3882 | closed | [
"stale"
] | 2023-06-27T15:25:31Z | 2023-08-05T15:04:07Z | null | AdamMayor2018 |
huggingface/diffusers | 3,881 | How many images and how many epochs are required to fine tune LORA for stable diffusion on custom image dataset | I am trying to fine-tune a LoRA for Stable Diffusion on a movie dataset. I am using a custom dataset with 3-4 movie characters, and instead of the actors' real names we use the characters' in-movie names. How big would the dataset need to be, in terms of total number of images and images per character, and how many epochs would be required to fine-tune this LoRA model?
PS: I have already tried fine-tuning with 200 images of a single character for 100, 250 and 500 epochs, but the results are very bad. Can anyone please provide some suggestions? @patrickvonplaten @sayakpaul | https://github.com/huggingface/diffusers/issues/3881 | closed | [
"stale"
] | 2023-06-27T11:05:53Z | 2023-08-04T15:03:17Z | null | atharmzaalo2023 |
huggingface/peft | 636 | How to save full model weights and not just the adapters ? | ### System Info
peft==0.4.0.dev0
I'm not sure if this should be a bug report, so sorry if this is not convenient.
According to the `save_pretrained` method docstring, this saves the adapter model only and not the full model weights. Is there an option to save the full model weights? The use case is that we want to upload the full model to the HF Hub to be able to activate the Inference API; however, right now we only save the adapter weights.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
save_pretrained saves only adapters, maybe also add the option to save the full model
### Expected behavior
save_pretrained saves only adapters, maybe also add the option to save the full model | https://github.com/huggingface/peft/issues/636 | closed | [] | 2023-06-26T15:30:48Z | 2025-03-13T11:52:23Z | null | azayz |