repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/chat-ui | 594 | TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed | I use the latest main version, and I get an error when making a chat; the GUI shows "Sorry, something went wrong. Please try again."
TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
at new NodeError (node:internal/errors:405:5)
at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13)
at update (file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:480:22)
at file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:492:15
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.start (file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:585:9) {
code: 'ERR_INVALID_STATE'
Can anyone help me fix this problem? | https://github.com/huggingface/chat-ui/issues/594 | closed | [
"support"
] | 2023-11-29T04:28:27Z | 2024-06-17T12:48:45Z | 18 | AlexBlack2202 |
huggingface/chat-ui | 593 | Show image in chat box | Can I show an image via an HTTP link in the chat box? | https://github.com/huggingface/chat-ui/issues/593 | open | [
"support"
] | 2023-11-29T03:17:17Z | 2023-11-30T17:57:32Z | 3 | ntqnhanguyen |
huggingface/optimum | 1,554 | ORT Models Failing because of the latest fsdp changes on transformers Trainer. | ### System Info
```shell
optimum from source
transformers from source
```
### Who can help?
@JingyaHuang
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
When trying to run training using ORTModule, all models fail due to the latest changes in the transformers Trainer:
`fsdp` was removed as an attribute, among other changes.
I can work on the fix if you don't have the bandwidth.
@JingyaHuang
We have also been getting a lot of these types of errors. Can we work on a CI pipeline to spot such failures so we can fix them fast?
Thanks.
### Expected behavior
`AttributeError: 'ORTTrainer' object has no attribute 'fsdp'
`
| https://github.com/huggingface/optimum/issues/1554 | closed | [
"bug"
] | 2023-11-28T20:22:40Z | 2023-12-26T18:15:02Z | 6 | AdamLouly |
huggingface/chat-ui | 592 | Authentication Doc and Code may be out-of-date/not working | ## Description
Hello,
Following the doc in the `README`: https://github.com/huggingface/chat-ui#basic-and-bearer. The UI should support (if setup in the `.env.local` file) `Basic` and `Bearer` authentication, however, what I noticed since the requests have been moved to the `huggingface` module is that the authorization flow has changed.
In the module:
```js
#huggingface/inference/dist/index.mjs
[...]
const { accessToken, model: _model, ...otherArgs } = args;
let { model } = args;
const { forceTask: task, includeCredentials, taskHint, ...otherOptions } = options ?? {};
const headers = {};
if (accessToken) {
headers["Authorization"] = `Bearer ${accessToken}`;
}
[...]
```
If I define a custom chat endpoint in this way:
```
"endpoints": [{"url": "URL/generate_stream", "type" : "tgi", "accessToken": "<bearer-token-only>"}]
```
then the `accessToken` is properly propagated, but the suggested `"authorization": "Bearer/Basic <string>"` does not work.
If this is intended:
1. I would be happy to open a quick PR to change the README to something like:
```suggestion
#### Bearer
Custom endpoints may require authorization, depending on how you configure them. Chat-UI supports `Bearer` authentication.
You can use a token, which can be grabbed from [here](https://huggingface.co/settings/tokens).
You can then add the generated information and the `accessToken` parameter to your `.env.local`.
```env
"endpoints": [
{
"url": "https://HOST:PORT",
"accessToken": "<bearer-token>",
}
]
**NOTE**: currently, `Basic` authentication is not supported
```
Please let me know what you think, and whether I am missing something.
Thanks,
Guido
| https://github.com/huggingface/chat-ui/issues/592 | open | [
"bug",
"documentation",
"back"
] | 2023-11-28T18:50:15Z | 2023-11-29T13:29:22Z | 1 | muscionig |
huggingface/transformers.js | 421 | [Question] FeatureExtractionPipeline input length | @xenova : First of all thank you so much for your amazing work with this open source library. It opens up many possibilities.
One thing that caught my attention is that [FeatureExtractionPipeline](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline) can accept input of any length, regardless of the models' [sequence lengths](https://huggingface.co/spaces/mteb/leaderboard). Does it truncate or tokenize the data internally before applying it to the model? Is there documentation or an explanation of the implementation details? | https://github.com/huggingface/transformers.js/issues/421 | closed | [
"question"
] | 2023-11-28T17:28:28Z | 2023-12-02T11:20:52Z | null | devfacet |
huggingface/sentence-transformers | 2,361 | How to divide long texts into chunks using sentence-transformers? | Hello, I encounter the issue of my texts exceeding the maximum lengths allowed by pretrained models. So I intend to divide my texts into smaller chunks and then calculate the average embeddings over them.
However, I find this process is not as straightforward as I initially thought.
In order to properly chunk the texts, I need to obtain the tokenized version of each text to determine the exact number of tokens.
Unfortunately, it seems that the tokenizers in sentence-transformers are not exposed as standalone components, meaning they cannot easily be used on their own to tokenize long texts.
So what is the best way to solve this problem?
| https://github.com/huggingface/sentence-transformers/issues/2361 | closed | [] | 2023-11-28T16:35:44Z | 2023-12-25T12:38:42Z | null | srhouyu |
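For the chunking question above, one common workaround can be sketched as follows. This is not an official sentence-transformers feature, only an illustration built on attributes that do exist on a `SentenceTransformer` instance (`model.tokenizer`, `model.max_seq_length`, `model.encode`): tokenize with the model's underlying tokenizer, split the ids into overlapping windows, decode each window back to text, embed each piece, and average. Re-tokenization of decoded windows can shift boundaries slightly, so treat this as approximate.

```python
import numpy as np

def chunk_token_ids(ids, max_len, stride):
    """Split a token-id list into overlapping windows of at most max_len tokens."""
    chunks, step = [], max_len - stride
    for start in range(0, max(len(ids), 1), step):
        window = ids[start:start + max_len]
        if window:
            chunks.append(window)
        if start + max_len >= len(ids):  # last window already covers the tail
            break
    return chunks

def embed_long_text(model, text, stride=32):
    """Mean-pool window embeddings from a SentenceTransformer over a long text."""
    ids = model.tokenizer.encode(text, add_special_tokens=False)
    max_len = model.max_seq_length - 2          # leave room for [CLS]/[SEP]
    windows = chunk_token_ids(ids, max_len, stride)
    pieces = [model.tokenizer.decode(w) for w in windows]
    return np.mean(model.encode(pieces), axis=0)
```

An overlap (`stride`) keeps sentences that straddle a window boundary represented in both windows; set it to 0 for disjoint chunks.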
huggingface/alignment-handbook | 56 | Why does the alignment-handbook account for user & system inputs in loss calculation | I noticed that the alignment-handbook doesn't ignore the loss calculated from the user and system inputs. Based on my knowledge, many SFT pipelines choose to ignore these. I'm curious about the reasoning behind this difference. | https://github.com/huggingface/alignment-handbook/issues/56 | open | [] | 2023-11-28T06:03:53Z | 2024-05-30T07:45:29Z | 3 | xffxff |
huggingface/transformers | 27,737 | How to save the generated output of BarkModel to an npz file? | Hello there!
I'm using the BarkModel from Hugging Face Transformers and I'm wondering how to save the generated results to an npz file. I'd like to use these saved results as history prompts for the next generation.
In the [suno-ai/bark](https://github.com/suno-ai/bark) , when using the [`semantic_to_waveform`](https://github.com/suno-ai/bark/blob/main/bark/api.py#L35) method, I can pass `output_full = True`. This allows me to save the output to an npz file using `numpy.savez`.
However, as I transition to using the BarkModel within the transformers framework, I am uncertain about the equivalent process. Could you kindly provide guidance on how to save the generated results of the BarkModel to an npz file in the Transformers library?
Any assistance or code examples you could offer would be greatly appreciated.
Thank you for your time and support. | https://github.com/huggingface/transformers/issues/27737 | closed | [] | 2023-11-28T03:55:19Z | 2024-01-10T08:03:57Z | null | chet-chen |
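For the question above, whether transformers exposes the intermediate semantic/coarse/fine outputs (the part that makes a reusable history prompt) is exactly what the issue asks, and it is not answered here. What can be shown is the persistence half: suno-ai/bark stores these prompts as plain `.npz` files, so whatever arrays you do obtain can be saved and reloaded with `numpy`. A hedged sketch, with illustrative key names:

```python
import numpy as np

def save_generation(path, arrays):
    """Persist named numpy arrays to an .npz file -- the same container
    suno-ai/bark uses for voice/history prompts."""
    np.savez(path, **arrays)

def load_generation(path):
    """Load the arrays back as a plain dict, e.g. to reuse later."""
    with np.load(path) as data:
        return {k: data[k] for k in data.files}
```

For example, the waveform returned by `BarkModel.generate` could be stored as `save_generation("gen.npz", {"audio": audio.cpu().numpy()})`; the `"audio"` key is an assumption, not a Bark-defined name.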
huggingface/alignment-handbook | 55 | Running on single GPU(16GB) | Hi,
What is the best way to run this on my high-performance laptop?
Should this somehow work? Can I calculate how many days/weeks it will run?
Thanks in advance
Specs:
> OS: Win 11 (WSL2)
> CPU: Intel Core i7 12850HX
> Make: Lenovo Thinkpad P16 gen 1
> Memory: 128GB DDR5-4800 (2400MHz)
> GPU: Nvidia RTX A5500 16GB
It seems that this command works on my laptop:
`ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true --gradient_accumulation_steps=1024 --per_device_eval_batch_size=1 --per_device_train_batch_size=1`
I have now run it for roughly 1-2 hours:
> ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true --gradient_accumulation_steps=1024 --per_device_eval_batch_size=1 --per_device_train_batch_size=1
> INFO:root:Using nproc_per_node=1.
> 2023-11-27 15:41:33.914308: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
> 2023-11-27 15:41:33.941565: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
> To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
> 2023-11-27 15:41:34.582753: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
> [2023-11-27 15:41:35,164] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
> /usr/local/lib/python3.11/dist-packages/trl/trainer/ppo_config.py:141: UserWarning: The `optimize_cuda_cache` arguement will be deprecated soon, please use `optimize_device_cache` instead.
> warnings.warn(
> 2023-11-27 15:41:35 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1 distributed training: True, 16-bits training: False
> 2023-11-27 15:41:35 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='mistralai/Mistral-7B-v0.1', model_revision='main', model_code_revision=None, torch_dtype='auto', trust_remote_code=False, use_flash_attention_2=True, use_peft=True, lora_r=64, lora_alpha=16, lora_dropout=0.1, lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'], lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=True, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False)
> 2023-11-27 15:41:35 - INFO - __main__ - Data parameters DataArguments(chat_template=None, dataset_mixer={'HuggingFaceH4/ultrachat_200k': 1.0}, dataset_splits=['train_sft', 'test_sft'], max_train_samples=None, max_eval_samples=None, preprocessing_num_workers=12, truncation_side=None)
> 2023-11-27 15:41:35 - INFO - __main__ - Training/evaluation parameters SFTConfig(
> _n_gpu=1,
> adafactor=False,
> adam_beta1=0.9,
> adam_beta2=0.999,
> adam_epsilon=1e-08,
> auto_find_batch_size=False,
> bf16=True,
> bf16_full_eval=False,
> data_seed=None,
> dataloader_drop_last=False,
> dataloader_num_workers=0,
> dataloader_pin_memory=True,
> ddp_backend=None,
> ddp_broadcast_buffers=None,
> ddp_bucket_cap_mb=None,
> ddp_find_unused_parameters=None,
> ddp_timeout=1800,
> debug=[],
> deepspeed=None,
> disable_tqdm=False,
> dispatch_batches=None,
> do_eval=True,
> do_predict=False,
> do_train=False,
> eval_accumulation_steps=None,
> eval_delay=0,
> eval_steps=None,
> evaluation_strategy=IntervalStrategy.EPOCH,
> fp16=False,
> fp16_backend=auto,
> fp16_full_eval=False,
> fp16_opt_level=O1,
> fsdp=[],
> fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
> fsdp_min_num_params=0,
> fsdp_transformer_layer_cls_to_wrap=None,
> full_determinism=False,
> gradient_accumulation_steps=1024,
> gradient_checkpointing=True,
> gradient_checkpointing_kwargs={'use_reentrant': False},
> greater_is_better=None,
> group_by_length=False,
> half_precision_backend=auto,
> hub_always_push=False,
> hub_model_id=zephyr-7b-sft-lora,
> hub_private_repo=False,
> hub_strategy=HubStrategy.EVERY_SAVE,
> hub_token=<HUB_TOKEN>,
> ignore_data_skip=False,
> include_inputs_for_metrics=False,
> include_tokens_per_second=False,
> jit_mode_eval=False,
> label_names=None,
> label_smoothing_factor=0.0,
> learning_rate=2e-05,
> length_column_name=length,
> load_best_model_at_end=False,
> local_rank=0,
> log_level=info,
> log_level_replica=warning,
> log_on_each_node=True,
> logging_dir=data/zephyr-7b-sft-lora/runs/Nov27_15-41-35,
> logging_first_step=True,
> logging_nan_inf_filter=True,
> logging_steps=5,
> logg | https://github.com/huggingface/alignment-handbook/issues/55 | open | [] | 2023-11-27T19:50:12Z | 2023-12-13T14:58:31Z | 1 | patchie |
huggingface/chat-ui | 588 | Hallucinations when using web search | I have tried to run a Mistral model with the search API, but the web results don't seem to be making it to the model.
I'm hosting the model through text-gen-webui and encountering the exact same issue as #571.
I've given it a go with [openhermes-2.5-mistral-7b.Q5_K_M.gguf](https://imgur.com/a/HQV1lGD), [it seems to use the search tool just fine](https://imgur.com/a/GN9ycZY) but fails to incorporate the results into its answer.
Any idea how to fix this issue, or at least how I could help with debugging? | https://github.com/huggingface/chat-ui/issues/588 | open | [
"support",
"websearch"
] | 2023-11-27T17:12:22Z | 2023-12-27T21:25:42Z | 2 | NasonZ |
huggingface/chat-ui | 587 | How do I format the ChatPromptTemplate ? | I currently have a working setup with llamacpp+mistral 7b instruct with the following loca.env :
```
MODELS=`[
{
"name": "Mistral",
"chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s> {{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 4096,
"max_new_tokens": 4096,
"stop": ["</s>"]
},
"endpoints": [{
"url": "http://127.0.0.1:8080",
"type": "llamacpp"
}
]
}
]`
```
I am trying to set up the model "Neural Chat" by Intel, and the template is:
### System:
{system_message}
### User:
{prompt}
### Assistant:
How can I set the chatPromptTemplate to match it, so that the model summarizes and searches the web correctly?
I'm having some trouble understanding how to format it, and where to put `### User:` etc.
Thanks | https://github.com/huggingface/chat-ui/issues/587 | open | [
"support",
"models"
] | 2023-11-27T15:21:17Z | 2023-12-19T07:21:50Z | 5 | iChristGit |
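For the template question above, a hedged sketch of what a `chatPromptTemplate` for the `### System:` / `### User:` / `### Assistant:` format might look like, reusing the same Handlebars-style helpers (`{{#each messages}}`, `{{#ifUser}}`, `{{@root.preprompt}}`) as the Mistral example in that issue. This is untested, so treat it as a starting point only:

```
"chatPromptTemplate": "### System:\n{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}\n{{#each messages}}{{#ifUser}}### User:\n{{content}}\n\n{{/ifUser}}{{#ifAssistant}}### Assistant:\n{{content}}\n\n{{/ifAssistant}}{{/each}}### Assistant:\n"
```

The trailing `### Assistant:\n` cues the model to produce the next reply; the exact newline placement may need tuning against the model card.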
huggingface/candle | 1,379 | Help request: How to compile CUDA kernels with `cc-rs`? | Hello everybody,
In the process of adding PagedAttention to candle-vllm, I need to compile some CUDA kernels. I am currently trying to use `cc-rs` in a `build.rs` to automatically build the kernels. However, I am not making much progress as I have run into issues that seem to be tied to the build stage.
I would really appreciate some pointers on how to use either `nvcc` or `cc-rs` to build these CUDA kernels. I have opened an issue with vllm: vllm-project/vllm#1793.
Thanks,
Eric | https://github.com/huggingface/candle/issues/1379 | closed | [] | 2023-11-27T14:32:10Z | 2023-11-27T20:57:11Z | null | EricLBuehler |
huggingface/transformers | 27,726 | How to load PixArtAlphaPipeline in 8bit? | I know there is an example, but I couldn't make it work. I am trying to make an auto-installer and Gradio interface for the PixArt-Alpha pipeline so that ordinary users can install and use it on their Windows PCs.
Currently the code below works; I want to make it load in 8-bit. Is that possible?
```
if torch.cuda.is_available():
pipe = PixArtAlphaPipeline.from_pretrained(
"PixArt-alpha/PixArt-XL-2-1024-MS",
torch_dtype=torch.float16,
use_safetensors=True,
)
if ENABLE_CPU_OFFLOAD:
pipe.enable_model_cpu_offload()
else:
pipe.to(device)
print("Loaded on Device!")
# speed-up T5
pipe.text_encoder.to_bettertransformer()
if USE_TORCH_COMPILE:
pipe.transformer = torch.compile(pipe.transformer, mode="reduce-overhead", fullgraph=True)
print("Model Compiled!")
```
```
seed = int(randomize_seed_fn(seed, randomize_seed))
generator = torch.Generator().manual_seed(seed)
if schedule == 'DPM-Solver':
if not isinstance(pipe.scheduler, DPMSolverMultistepScheduler):
pipe.scheduler = DPMSolverMultistepScheduler()
num_inference_steps = dpms_inference_steps
guidance_scale = dpms_guidance_scale
elif schedule == "SA-Solver":
if not isinstance(pipe.scheduler, SASolverScheduler):
pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config, algorithm_type='data_prediction', tau_func=lambda t: 1 if 200 <= t <= 800 else 0, predictor_order=2, corrector_order=2)
num_inference_steps = sas_inference_steps
guidance_scale = sas_guidance_scale
else:
raise ValueError(f"Unknown schedule: {schedule}")
if not use_negative_prompt:
negative_prompt = None # type: ignore
prompt, negative_prompt = apply_style(style, prompt, negative_prompt)
images = pipe(
prompt=prompt,
width=width,
height=height,
guidance_scale=guidance_scale,
num_inference_steps=num_inference_steps,
generator=generator,
num_images_per_prompt=NUM_IMAGES_PER_PROMPT,
use_resolution_binning=use_resolution_binning,
output_type="pil",
).images
```
### Who can help?
@sayakpaul @Narsil @SunMarc @younesbelkada @gante
I tried the code below, but it broke the app:
```
text_encoder = T5EncoderModel.from_pretrained(
"PixArt-alpha/PixArt-XL-2-1024-MS",
subfolder="text_encoder",
load_in_8bit=True,
device_map="auto",
)
pipe = PixArtAlphaPipeline.from_pretrained(
"PixArt-alpha/PixArt-XL-2-1024-MS",
text_encoder=text_encoder,
transformer=None,
device_map="auto"
)
```
The error I am getting is shown below:
```
Downloading shards: 100%|██████████| 2/2 [00:00<?, ?it/s]
bin G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
Loading checkpoint shards: 100%|██████████| 2/2 [00:06<00:00, 3.09s/it]
Loading pipeline components...: 0%| | 0/4 [00:00<?, ?it/s]Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading pipeline components...: 100%|██████████| 4/4 [00:00<00:00, 9.50it/s]
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
batch_count 1
Traceback (most recent call last):
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\queueing.py", line 427, in call_prediction
output = await route_utils.call_process_api(
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1484, in process_api
result = await self.call_function(
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1106, in call_function
prediction = await anyio.to_thread.run_sync(
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "G:\pixArt installer\P | https://github.com/huggingface/transformers/issues/27726 | closed | [] | 2023-11-27T11:36:44Z | 2024-01-05T08:03:56Z | null | FurkanGozukara |
huggingface/diffusers | 5,942 | How to prepare dataset for text-guided image to image generation | As the title suggests, I want to use Stable Diffusion to fine-tune on my own dataset. How should I build it? I have tried:
--input_image
--xx.jpg
--xx.jpg
--output_image
--yy.jpg
--yy.jpg
metadata.csv
but it didn't work. Can anybody help? | https://github.com/huggingface/diffusers/issues/5942 | closed | [
"stale"
] | 2023-11-27T06:58:57Z | 2024-01-09T15:06:12Z | null | feelme0461 |
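For the dataset-layout question above: the `datasets` ImageFolder loader picks up a `metadata.jsonl` placed next to the images, keyed by a required `file_name` column. A hedged sketch for generating one; the `edited_image` and `text` column names are illustrative and must match whatever the training script actually expects:

```python
import json
from pathlib import Path

def write_metadata(root, pairs):
    """Write a metadata.jsonl for the datasets ImageFolder loader.

    `pairs` is a list of (input_image_name, output_image_name, caption).
    Only `file_name` is a loader-defined column; the other keys are
    placeholders that your training script must know about.
    """
    lines = [
        json.dumps({"file_name": inp, "edited_image": out, "text": caption})
        for inp, out, caption in pairs
    ]
    Path(root, "metadata.jsonl").write_text("\n".join(lines) + "\n")
```

An InstructPix2Pix-style script would then resolve the `edited_image` paths to target images itself; plain text-to-image fine-tuning only needs `file_name` plus a caption column.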
huggingface/alignment-handbook | 52 | What about the system prompt? | It seems that the system prompt is left to be `\n` or rather blank.
Inspecting UltraChat (https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k?row=5), seems that no system prompt is added to the dataset.
There must be something I missed with regard to adding system prompts to the dataset for training, especially since the officially deployed model is able to adhere to system-prompt intent (like 'You are a pirate', etc.) | https://github.com/huggingface/alignment-handbook/issues/52 | open | [] | 2023-11-27T02:55:38Z | 2023-11-27T02:55:38Z | 0 | timothylimyl |
huggingface/alignment-handbook | 50 | What is the expected "global batch size"? | In the recipes README there is this statement:
> If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant (and thus replicate our results).
Q: What is the expected "global batch size"?
For example, I'm trying to run this on 2x3090s and need to know what the expected global batch size is so I can adjust the accumulation steps and per device train batch size.
Thanks much! | https://github.com/huggingface/alignment-handbook/issues/50 | closed | [] | 2023-11-26T21:47:41Z | 2023-11-27T04:14:22Z | null | ohmeow |
huggingface/transformers.js | 417 | [Question] Any examples of processing video frames of a user uploaded video (specifically for depth estimation)? | Hi there, I'm wondering if there are any examples of processing video frames of a user uploaded video? I'm specifically looking to run depth estimation on each frame of a short video, but any similar example would be useful.
If not, does this approach seem correct?
* Use one of the approaches described [here](https://stackoverflow.com/questions/32699721/javascript-extract-video-frames-reliably) to draw each frame of the video to a canvas
* Call `HTMLCanvasElement.toBlob()` on the canvas to get a `Blob`
* Pass N (10?) of those Blobs to a worker at a time
* For each of those Blobs call `const image = await RawImage.fromBlob(blob)` to get a `RawImage`
* Run depth estimation on the list of images with `await classifier([rawImage1, rawImage2, etc.])`
Thanks for any help!
| https://github.com/huggingface/transformers.js/issues/417 | open | [
"question"
] | 2023-11-26T09:18:04Z | 2023-12-10T22:51:18Z | null | jparismorgan |
huggingface/chat-ui | 583 | Option to share the web interface locally/online ? | I wish we could make the ui available on phone/mac or even outside the local network.
For example in SillyTavern (https://github.com/SillyTavern/SillyTavern)
You can either open it up to all devices in the local network or open a cloudflare tunnel to access it through a link.
Is it possible to add that? | https://github.com/huggingface/chat-ui/issues/583 | open | [
"enhancement",
"back"
] | 2023-11-26T00:44:08Z | 2024-04-22T16:45:44Z | 2 | iChristGit |
huggingface/candle | 1,375 | Question: How to interface a C++ API `torch::Tensor` with `candle_core::Tensor`? | I was wondering if there is a way to use a C++ API that accepts a PyTorch `torch::Tensor` with a Candle `candle_core::Tensor`? For reference, I want to use [this](https://github.com/vllm-project/vllm/blob/main/csrc/ops.h) C++ API.
Can I convert between tensor types? @LaurentMazare, would it be possible to use [tch-rs](https://github.com/LaurentMazare/tch-rs) to make this conversion?
Thanks for any help! | https://github.com/huggingface/candle/issues/1375 | closed | [] | 2023-11-25T19:05:27Z | 2023-11-25T23:04:03Z | null | EricLBuehler |
huggingface/accelerate | 2,187 | how to collect outputs(not tensor dtype) on multi gpus | As the toy example below,
```
val_dataset = ['a', 'b', 'c', 'd', 'e']
val_dataloader = DataLoader(
val_dataset, batch_size=2
)
accelerator = Accelerator()
val_dataloader = accelerator.prepare(val_dataloader)
for step, batch in enumerate(val_dataloader):
print(batch, accelerator.device)
```
When I run this script with `CUDA_VISIBLE_DEVICES="0,1" accelerate launch --config_file="./configs/acc_mgpu_config.yaml" test_batch.py`, I get the results below. How can I get ['a', 'b', 'c', 'd', 'e'] in the main process after reducing batches across all processes?
```
['a', 'b'] cuda:0
['e', 'a'] cuda:0
['c', 'd'] cuda:1
['b', 'c'] cuda:1
```
I know that accelerate has `gather_for_metrics`, which gathers input and potentially **drops duplicates** in the last batch on a distributed system. But this function seems to work only for tensor-typed data; in this example my data is strings. Is there any way to achieve this?
(If I use `print(accelerator.gather_for_metrics(batch), accelerator.device)`, it raises an error like the one below:
```
TypeError: Unsupported types (<class 'str'>) passed to `_gpu_gather_one`. Only nested list/t
uple/dicts of objects that are valid for `is_torch_tensor` should be passed.
```
Thanks for any potential answers! | https://github.com/huggingface/accelerate/issues/2187 | closed | [] | 2023-11-25T02:51:21Z | 2023-11-27T06:07:19Z | null | shliu0 |
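For the question above, `accelerate.utils.gather_object` (available in recent accelerate releases) gathers arbitrary picklable objects such as strings, unlike the tensor-only gather path. It does not drop the samples that distributed samplers re-serve to pad the last batch, so those still have to be handled by hand. A sketch; note the dedupe helper would also drop legitimately repeated samples, so trimming to the known dataset length is an alternative:

```python
def dedupe_keep_order(items):
    """Drop repeated items (e.g. samples re-served to pad the last
    batch) while preserving first-seen order. Beware: this also drops
    genuine duplicates in the data."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def gather_strings(batch):
    """Gather a list of non-tensor objects from every process.

    Sketch only: gather_object must run under `accelerate launch`
    with a prepared Accelerator in scope."""
    from accelerate.utils import gather_object  # handles picklable objects
    return dedupe_keep_order(gather_object(batch))
```

In the toy example, collecting all per-step gathers into one list and applying `dedupe_keep_order` would recover `['a', 'b', 'c', 'd', 'e']` on the main process.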
huggingface/chat-ui | 581 | Trying to set up with TGI | I have installed TGI using Docker; I can see the API docs at http://127.0.0.1:8080/docs/
But I still cannot set up the `.env.local` file; I have tried to set it up following the example, but it always fails.

Can someone who set it up correctly give me a rough idea of how to write the file? I have tried a lot of combinations, and it always fails with either an internal error or the screenshot above.
| https://github.com/huggingface/chat-ui/issues/581 | open | [
"support"
] | 2023-11-24T19:20:27Z | 2023-12-19T06:02:25Z | 2 | iChristGit |
huggingface/transformers.js | 412 | [Question] Does any version support Node 14 | Hi,
I have tried downgrading the library to version 2, and even to 1, but that one was missing types.
Is there some way to use it with Node 14? I have seen that the issues are mostly with nullish coalescing operators, so I wanted to make sure whether there could be other issues that tie it to Node 18+, and also whether there have been any security or vulnerability issues since the last version that could work with Node 14.
Thanks
| https://github.com/huggingface/transformers.js/issues/412 | closed | [
"question"
] | 2023-11-24T16:01:54Z | 2023-12-04T13:16:26Z | null | Ncifra |
huggingface/hf_transfer | 20 | [Usage] How to enable the progress bar? | I've installed `hf_transfer-0.1.4`.
But when I use `huggingface-cli download`, the progress bar mentioned [here](https://huggingface.co/docs/huggingface_hub/guides/download#faster-downloads) seems to be disabled by default,
and I failed to figure out how to enable it.
Could anyone be kind enough to provide some guidance? | https://github.com/huggingface/hf_transfer/issues/20 | closed | [] | 2023-11-24T08:13:00Z | 2023-11-27T12:15:10Z | null | tongyx361 |
huggingface/gsplat.js | 39 | How to implement point clouds render? | Hi, great work! I see that this library is built upon [antimatter15/splat](https://github.com/antimatter15/splat), but it does not have the point-cloud-like render mode of that library. How can I implement this function on top of your gsplat library? By the way, is there any documentation about the config options, so I can set some render options? | https://github.com/huggingface/gsplat.js/issues/39 | open | [] | 2023-11-24T07:27:33Z | 2024-01-22T21:12:06Z | null | xinnai |
huggingface/alignment-handbook | 46 | Weird DPO loss | Hi, I would like to raise some attention to issue #38.
It seems that the DPO-LoRA training loss (red line) drops abruptly at the beginning of each epoch, which seems weird. (I tried a LoRA model, global batch size 64, multi_gpu acceleration, 8 GPUs, learning rate 1e-4, everything else as suggested.)
Meanwhile, full-parameter fine-tuning has no such problem (official settings).

I don't know if this is normal and **assume this is a bug associated with the LoRA model**. Is there any explanation? Has anyone encountered the same issue? If your rerun loss is normal, can you share your configs? | https://github.com/huggingface/alignment-handbook/issues/46 | open | [] | 2023-11-24T03:07:46Z | 2024-05-28T07:09:10Z | 1 | ChenDRAG |
huggingface/diffusers | 5,912 | How to set config in VaeImageProcessor? | I created a `StableDiffusionControlNetImg2ImgPipeline` and I want to manually set the `do_normalize` config of its `VaeImageProcessor`. How can I set it? I looked in `pipe.vae.config` and found nothing about it. | https://github.com/huggingface/diffusers/issues/5912 | closed | [
"stale"
] | 2023-11-23T12:54:22Z | 2023-12-26T21:29:17Z | null | youyuge34 |
huggingface/chat-ui | 576 | Cannot build using latest Chat UI Space template | Using the Dockerfile created from the ChatUI-Space template, but cloning it to a local machine and trying to build it fails at `npm run build`
> #18 [chatui-builder 12/12] RUN npm run build
#0 0.673
#0 0.673 > chat-ui@0.6.0 build
#0 0.673 > vite build
#0 0.673
#0 1.678 vite v4.3.9 building SSR bundle for production...
#0 1.678
#0 1.707 transforming...
#0 4.381 "BaseClient" and "TokenSet" are imported from external module "openid-client" but never used in "src/lib/server/auth.ts".
#0 4.381 ✓ 210 modules transformed.
#0 4.473 rendering chunks...
#0 5.665
#0 5.665 node:internal/event_target:1036
#0 5.665 process.nextTick(() => { throw err; });
#0 5.665 ^
#0 5.666 SyntaxError [Error]: Bad control character in string literal in JSON at position 157
#0 5.666 at JSON.parse (<anonymous>)
#0 5.666 at file:///app/chat-ui/.svelte-kit/output/server/chunks/models.js:512:51
#0 5.666 at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
#0 5.666 Emitted 'error' event on Worker instance at:
#0 5.666 at [kOnErrorMessage] (node:internal/worker:309:10)
#0 5.666 at [kOnMessage] (node:internal/worker:320:37)
#0 5.666 at MessagePort.<anonymous> (node:internal/worker:216:57)
#0 5.666 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:761:20)
#0 5.666 at exports.emitMessage (node:internal/per_context/messageport:23:28)
#0 5.666
#0 5.666 Node.js v19.9.0
#0 5.751 npm notice
#0 5.751 npm notice New major version of npm available! 9.6.3 -> 10.2.4
#0 5.751 npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.4>
#0 5.751 npm notice Run `npm install -g npm@10.2.4` to update!
#0 5.751 npm notice
#18 ERROR: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
#------
#> [chatui-builder 12/12] RUN npm run build:
#0 5.666 at MessagePort.<anonymous> (node:internal/worker:216:57)
#0 5.666 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:761:20)
#0 5.666 at exports.emitMessage (node:internal/per_context/messageport:23:28)
#0 5.666
#0 5.666 Node.js v19.9.0
#0 5.751 npm notice
#0 5.751 npm notice New major version of npm available! 9.6.3 -> 10.2.4
#0 5.751 npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.4>
#0 5.751 npm notice Run `npm install -g npm@10.2.4` to update!
#0 5.751 npm notice
#------
#Dockerfile:49
#--------------------
#47 | npm ci
#48 |
#49 | >>> RUN npm run build
#50 |
#51 | FROM ghcr.io/huggingface/text-generation-inference:latest
#--------------------
#ERROR: failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1 | https://github.com/huggingface/chat-ui/issues/576 | open | [
"support",
"spaces"
] | 2023-11-23T12:23:06Z | 2023-11-30T14:11:32Z | 1 | simon376 |
huggingface/transformers | 27,666 | how to remove punctuation marks. | ### System Info
I trained `t5-large` for translation.
The training results were good, but when I input a sentence, the output looks like "What are you doing now?.??....."
How can I remove the trailing run of punctuation marks (`?.??......`)?
I tried setting parameters such as `max_length`, but that did not solve the problem.
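In case it helps, two hedged directions: on the generation side, `generate` accepts `repetition_penalty` and `no_repeat_ngram_size`, which usually damp repeated trailing marks; failing that, the run can be stripped in post-processing. A minimal plain-Python sketch of the latter (the function name is my own):

```python
import re

def strip_trailing_punct(text: str) -> str:
    # Collapse a trailing run of mixed punctuation down to its first mark,
    # e.g. "...now?.??....." -> "...now?"
    return re.sub(r"([.?!])[.?!]+\s*$", r"\1", text.strip())

print(strip_trailing_punct("What are you doing now?.??....."))  # -> What are you doing now?
```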
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
c
### Expected behavior
cfdvf | https://github.com/huggingface/transformers/issues/27666 | closed | [] | 2023-11-23T07:21:33Z | 2023-12-31T08:03:43Z | null | chanyong-owl |
huggingface/blog | 1,655 | how to scale fine-tuning whisper in English? | I'm attempting to fine-tune whisper using the excellent hugging face tut: https://huggingface.co/blog/fine-tune-whisper. The delta between the tut's case and my case is that I am using English which has 1M more test cases (and also I'm using big GPUs so I am using `whisper-large-v3`).
No matter how much compute I throw at the core data preparation step (e.g. take a look at `num_proc`):
`common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=108)`
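For scale, the throughput I'm seeing (about 30 examples/s, figure below) makes a single preparation pass a nine-hour job regardless of parallelism, back-of-envelope:

```python
examples = 1_000_000
rate = 30  # observed examples/second, roughly flat across machine sizes
hours = examples / rate / 3600
print(f"~{hours:.1f} hours")  # ~9.3 hours for one full pass of prepare_dataset
```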
I still only prepare the data at about 30 examples / s. For 1M examples this doesn't scale. My last test was on an 8 GPU 112 vCPU instance and still there was no change. Indeed `htop` shows that all 112 of my vCPUs are engaged, but the actual prep speed remains flat across all compute types. The only thing I haven't tried is crazy fast storage like NVMe, which I'm going to do, but I have a feeling it has to do with either the `datasets` library configuration or something else. I've never had problems with GPUs or whisper previously so I'm a bit baffled as to what the issue could. I've followed the tutorial to a 't' except for changing the language to `en`, whisper to `whisper-large-v3` and `num_proc` to higher parallels. Any insight would be greatly appreciated! | https://github.com/huggingface/blog/issues/1655 | open | [] | 2023-11-22T22:45:29Z | 2024-03-10T06:55:47Z | null | jsteinberg-rbi |
huggingface/datasets | 6,446 | Speech Commands v2 dataset doesn't match AST-v2 config | ### Describe the bug
[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`.
### Steps to reproduce the bug
```
>>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2")
>>> model.config.id2label
{0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'}
>>> dataset = load_dataset("speech_commands", "v0.02", split="test")
>>> torch.unique(torch.Tensor(dataset['label']))
tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13.,
14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27.,
28., 29., 30., 31., 32., 33., 34., 35.])
```
If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`.
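One possible workaround (hedged, assuming the mismatch is a permutation of the same names plus one extra dataset class such as `_silence_`): `datasets` ships `Dataset.align_labels_with_mapping(label2id, "label")` for exactly this, and the underlying remap is a name-keyed lookup. A toy sketch with made-up three-class mappings:

```python
# Toy stand-ins: real values come from model.config.id2label and the
# dataset's ClassLabel feature (dataset.features["label"].names).
model_id2label = {0: "backward", 1: "follow", 2: "five"}
dataset_label_names = ["backward", "five", "follow", "_extra_"]  # index = dataset label id

model_label2id = {name: i for i, name in model_id2label.items()}
remap = {ds_id: model_label2id[name]
         for ds_id, name in enumerate(dataset_label_names)
         if name in model_label2id}  # classes unknown to the model are dropped
print(remap)  # {0: 0, 1: 2, 2: 1}
```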
### Expected behavior
The labels should match completely and there should be the same number of label classes between the model config and the dataset itself.
### Environment info
datasets = 2.14.6, transformers = 4.33.3 | https://github.com/huggingface/datasets/issues/6446 | closed | [] | 2023-11-22T20:46:36Z | 2023-11-28T14:46:08Z | 3 | vymao |
huggingface/alignment-handbook | 45 | Reproducing of Lora Model Result on MT-Bench | Recently, I attempted to fit the DPO on my own dataset.
Initially, I tried to reproduce the results of your LoRA model (7.43 on MT-Bench).
However, I encountered some issues.
Despite using all your parameters and data, here are my results on MT-Bench:
| Model | MT-Bench |
|--------|--------|
| Zephyr-SFT-Lora-Own | 6.37 |
| Zephyr-DPO-Lora-Own | 6.95 |
Then, I downloaded your models from [here](https://huggingface.co/alignment-handbook), and the results were nearly the same as mine.
| Model | MT-Bench |
|--------|--------|
| Zephyr-SFT-Lora| 6.4|
| Zephyr-DPO-Lora| 6.93 |
DPO does help improve performance on MT-Bench, but I can't reach a score of **7.43**. Is there any difference between the model described in your paper and the model available on your homepage?
Or could it be the difference between full fine-tuning and LoRA?
By the way, I truly love the "yaml style" argument parser; it's clear and elegant!
@edbeeching @lewtun
| https://github.com/huggingface/alignment-handbook/issues/45 | open | [] | 2023-11-22T03:42:32Z | 2023-12-11T17:09:32Z | 27 | wlhgtc |
huggingface/optimum | 1,551 | Running llama-2-13b resulted in `Killed` | ### System Info
```shell
This is my run.py code:
import torch
import transformers
import requests
print(torch.cuda.is_available())
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load model and adapter weights from local directory
model = transformers.AutoModelForCausalLM.from_pretrained("/home/maxloo/src/pastoring/llama/llama-2-13b")
model.to(device)
adapter = transformers.AutoModelForCausalLM.from_pretrained("/home/maxloo/src/pastoring/adapter", config=transformers.configuration.AdapterConfig.from_json_file("adapter_config.json"))
model.load_state_dict(adapter.state_dict())
adapter.load_state_dict(model.state_dict())
# Define prompt
prompt = "Hello, I am a chatbot."
# Perform inference
response = model.generate(prompt, max_length=50)
# Print response
print(response)
This is my adapter_config.json code:
{
"base_model_name_or_path": "../llama/llama-2-13b/",
"bias": "none",
"enable_lora": null,
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"lora_alpha": 16,
"lora_dropout": 0.05,
"merge_weights": false,
"modules_to_save": null,
"peft_type": "LORA",
"r": 16,
"target_modules": [
"q_proj",
"k_proj",
"v_proj",
"o_proj"
],
"task_type": "CAUSAL_LM",
"task": "question_answering",
"domain": "general"
}
These are my hardware specs:
Intel Core i7-13700HX, NVIDIA RTX 4060, 32GB DDR5, 1TB SSD
I'm using Windows 11 WSL2 Bash to run this command:
python3 run.py
I have set my .wslconfig file as follows:
[wsl2]
memory=24GB
processors=24
I expect a chat message to be displayed and a prompt for my chat input, but this is the actual output:
Killed
How do I resolve this? Should I be testing llama-13b first before llama-2-13b?
```
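For scale, a back-of-envelope on why the process is `Killed` (most likely the Linux OOM killer): `from_pretrained` without a `torch_dtype` materializes the weights in float32, and 13B float32 weights alone exceed the 24 GB cap set in `.wslconfig`:

```python
params = 13e9                 # approximate llama-2-13b parameter count
fp32_gb = params * 4 / 1e9    # 4 bytes per float32 weight
fp16_gb = params * 2 / 1e9
print(f"fp32 weights: ~{fp32_gb:.0f} GB, fp16: ~{fp16_gb:.0f} GB")
# both figures are above the 24 GB WSL memory limit, before any activations
```

If these numbers are right, the usual escape hatches would be half precision plus offloading (`torch_dtype=torch.float16`, `device_map="auto"`) or 4-bit loading; hedged, since exact flags depend on the installed `transformers`/`accelerate` versions.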
### Who can help?
@echarlaix,
@philschmid
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
python3 run.py

### Expected behavior
I expect a chat message to be displayed and a prompt for my chat input, but this is the actual output:
Killed
How do I resolve this? Should I be testing llama-13b first before llama-2-13b? | https://github.com/huggingface/optimum/issues/1551 | closed | [
"bug"
] | 2023-11-21T13:11:40Z | 2024-01-09T15:58:09Z | 1 | maxloopinmok |
huggingface/optimum-quanto | 32 | Are threre some exmples show how to export onnx model ? torch.onnx.export | https://github.com/huggingface/optimum-quanto/issues/32 | closed | [] | 2023-11-21T11:33:37Z | 2024-03-13T08:15:51Z | null | youkiwang | |
huggingface/transformers | 27,615 | How to get the number of trainable parameters for a hf model | ### Feature request
'
peft_parameters = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=8,
bias="none",
task_type="CAUSAL_LM"
)
train_params = TrainingArguments(
output_dir="./results_modified",
num_train_epochs=1,
per_device_train_batch_size=4,
gradient_accumulation_steps=1,
optim="paged_adamw_32bit",
save_steps=25,
logging_steps=25,
learning_rate=2e-4,
weight_decay=0.001,
fp16=False,
bf16=False,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant",
report_to="tensorboard"
)
fine_tuning = SFTTrainer(
model=base_model,
train_dataset=training_data,
peft_config=peft_parameters,
dataset_text_field="text",
tokenizer=llama_tokenizer,
args=train_params
)
fine_tuning.train()
I am using the above code for model training with LoRA. After applying LoRA, how can I check the number of trainable parameters of the model, before and after?
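As far as I know, the PEFT-wrapped model exposes `model.print_trainable_parameters()` after `get_peft_model`; for the base model before wrapping, the same count is a one-liner over `requires_grad`. A library-free sketch of that logic (the `P` class stands in for `torch.nn.Parameter`):

```python
class P:  # stand-in for a torch Parameter: just a size and a requires_grad flag
    def __init__(self, numel, trainable):
        self._numel, self.requires_grad = numel, trainable
    def numel(self):
        return self._numel

params = [P(1000, False), P(16, True), P(16, True)]  # frozen base + two tiny LoRA matrices
trainable = sum(p.numel() for p in params if p.requires_grad)
total = sum(p.numel() for p in params)
print(f"trainable params: {trainable} || all params: {total} || trainable%: {100 * trainable / total:.2f}")
# trainable params: 32 || all params: 1032 || trainable%: 3.10
```

On a real model, replace `params` with `model.parameters()` and the same two sums apply.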
### Motivation
Understand the training process well
### Your contribution
I'd love to | https://github.com/huggingface/transformers/issues/27615 | closed | [] | 2023-11-21T00:37:01Z | 2023-11-21T19:28:32Z | null | mathmax12 |
huggingface/chat-ui | 571 | trying to replicate the api search with the local search option | When I try searching for information on the site (huggingface.co/chat), it works fine and gives correct information, but when doing the same thing locally with the same model I get hallucinations.
I've tried all sorts of temperature settings and models.
This is the result locally:

This is with the site:

The sources look the same on both, but the actual response is never real information.
This is my current config:
```env
MONGODB_URL=mongodb://localhost:27017
PUBLIC_APP_NAME=PrivateGPT
MODELS=`[
  {
    "name": "text-generation-webui",
    "id": "text-generation-webui",
    "parameters": {
      "temperature": 0.1,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 12,
      "truncate": 1000,
      "max_new_tokens": 1024,
      "stop": []
    },
    "endpoints": [{
      "type" : "openai",
      "baseURL": "http://127.0.0.1:5000/v1/"
    }]
  }
]`
```
```
TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
    at new NodeError (node:internal/errors:405:5)
    at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13)
    at update (C:/ChatUI/src/routes/conversation/[id]/+server.ts:155:20)
    at Object.start (C:/ChatUI/src/routes/conversation/[id]/+server.ts:189:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 'ERR_INVALID_STATE'
}
TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed
    at new NodeError (node:internal/errors:405:5)
    at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13)
    at update (C:/ChatUI/src/routes/conversation/[id]/+server.ts:155:20)
    at Object.start (C:/ChatUI/src/routes/conversation/[id]/+server.ts:189:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
```
| https://github.com/huggingface/chat-ui/issues/571 | closed | [
"support"
] | 2023-11-20T20:57:23Z | 2023-12-05T15:19:49Z | 29 | iChristGit |
huggingface/trl | 1,014 | How to avoid training randomness? | I'm using the `trl.SFTTrainer` to fine-tune Vicuna, and I'm using the same data and parameters for fine-tuning. However, I've noticed that even after setting:
```
import random
import numpy
import torch
from transformers import TrainingArguments

def set_seed(seed=42):
    # set seed for all possible avenues of stochasticity
    numpy.random.seed(seed=seed)
    random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

training_args = TrainingArguments(
    report_to="none",
    output_dir=str(ckpt_path),
    do_eval=False,
    save_strategy="epoch",
    evaluation_strategy="no",
    num_train_epochs=training_epochs,
    seed=42,
)
```
the fine-tuned checkpoint's evaluation remains unstable. Every time I fine-tune with the same dataset, I get significantly different results. How can I ensure the stability of my fine-tuning?
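A note on why seeding alone may not be enough: identical draws require both the same seed and the same sequence of RNG calls in every run (data-loader workers, dropout, CUDA kernels can all desynchronize this). As far as I know, `transformers.set_seed(seed)` and, stricter, `transformers.enable_full_determinism(seed)` cover Python/NumPy/torch in one call. The principle in miniature:

```python
import random

def run(seed):
    random.seed(seed)
    return [random.randint(0, 100) for _ in range(3)]

print(run(42) == run(42))  # True: same seed + same call sequence -> same draws
print(run(42) == run(43))  # False (almost surely): different seed -> different draws
```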
I also tried this:
https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442
But it went wrong even with this code:
```
def model_init():
    return AutoModelForCausalLM.from_pretrained(
        "/data/ckpts/huggingface/models/models--lmsys--vicuna-7b-v1.5/snapshots/de56c35b1763eaae20f4d60efd64af0a9091ebe5",
        device_map="auto",
        torch_dtype=torch.bfloat16,
        use_flash_attention_2=True,
    )

training_args = TrainingArguments(
    report_to="none",
    output_dir=str(ckpt_path),
    do_eval=False,
    save_strategy="epoch",
    evaluation_strategy="no",
    num_train_epochs=training_epochs,
    seed=42,
)

trainer = SFTTrainer(
    model_init=model_init,
    args=training_args,
    train_dataset=mapped_dataset,
    dataset_text_field="text",
    data_collator=data_collator,
    max_seq_length=1500,
)
```
This would end in errors.
```
Traceback (most recent call last):
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
response.raise_for_status()
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/transformers/utils/hub.py", line 429, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1346, in hf_hub_download
raise head_call_error
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1232, in hf_hub_download
metadata = get_hf_file_metadata(
^^^^^^^^^^^^^^^^^^^^^
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1608, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 293, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-655b8b21-096243713e568c65194e1a69;8e4415fe-8069-43e1-8412-fdd028a8ebcd)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/cyzhao/main/test_scripts/main.py", line 402, in <module>
finetune_vicuna(
File "/home/cyzhao/main/test_scripts/main.py", line 207, in finetune_vicuna
trainer = SFTTrainer(
^^^^^^^^^^^
File "/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/trl/trainer/sft_trainer.py", line 162, in __init__
model = AutoModelForCausalLM.from_pretrained(model)
^^^^^^^
```
| https://github.com/huggingface/trl/issues/1014 | closed | [] | 2023-11-20T16:47:28Z | 2024-01-03T15:05:11Z | null | zhaochenyang20 |
huggingface/candle | 1,349 | How to pass bounding box instead of points in the segment-anything example? | Is it possible to pass a bounding box instead of points when using the segment-anything model? Is this just 4 points? | https://github.com/huggingface/candle/issues/1349 | open | [] | 2023-11-20T15:44:22Z | 2023-11-20T15:44:22Z | null | svelterust |
huggingface/alignment-handbook | 43 | Did you use RMSprop or AdamW as the optimizer? | Hi to whoever is reading this π€
## Question
After reading the Zephyr preprint https://arxiv.org/pdf/2310.16944.pdf and going through the configuration files here, I saw a mismatch between the optimizer set in https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/dpo/config_full.yaml and the one reported in the paper, AdamW.
So the question is, did you use RMSprop to run the full DPO fine-tuning or AdamW with no weight decay as stated in the paper?
Thanks in advance! | https://github.com/huggingface/alignment-handbook/issues/43 | closed | [] | 2023-11-20T15:23:03Z | 2024-03-07T06:55:07Z | 3 | alvarobartt |
huggingface/sentence-transformers | 2,359 | How to evaluate the result of dataset that does not have any labels | Hi,
I was looking at the different evaluation metrics provided by SentenceTransformers. I have a column of text in my dataset that I compare against a query, and I get the top-k matches using cosine similarity. I do not know if there is any method to evaluate the result. Should I consider the cosine similarity score as my evaluation metric as well? By evaluation I mean: how can I show that the result I got is good and reasonable?
```python
from sentence_transformers import SentenceTransformer, util
import pandas as pd

# Load a pre-trained model
model = SentenceTransformer('msmarco-distilbert-cos-v5')

# Example query
query = "Semantic search example query"

# Example corpus
corpus = ["Example sentence 1", "Example sentence 2", "Example sentence 3", ...]  # Add more sentences to your corpus

# Encode the query and corpus into embeddings
query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Compute cosine similarities
cosine_similarities = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]

# Get indices of the 3 nearest neighbors
indices_nearest_neighbors = pd.Series(cosine_similarities).nlargest(3).index

# Retrieve the 3 nearest neighbors
nearest_neighbors = [corpus[i] for i in indices_nearest_neighbors]

# Print the results
print(f"Query: {query}")
print("3 Nearest Neighbors:")
for neighbor in nearest_neighbors:
    print("-", neighbor)
```
| https://github.com/huggingface/sentence-transformers/issues/2359 | open | [] | 2023-11-20T14:52:21Z | 2023-11-20T14:52:21Z | null | Yarmohamadshr |
huggingface/alignment-handbook | 42 | How to QLoRA training with ZeRO-3 on two or more GPUs? | I added 4-bit loading to the "LoRA training with ZeRO-3 on two or more GPUs" command to combine QLoRA with ZeRO-3, but the program hit the following error:
```
RuntimeError: expected there to be only one unique element in <generator object Init._convert_to_deepspeed_param.<locals>.all_gather_coalesced.<locals>.<genexpr> at 0x7f2ec8daf900>
```
The command is:
```
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --num_processes=2 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true
```
| https://github.com/huggingface/alignment-handbook/issues/42 | open | [] | 2023-11-20T14:13:36Z | 2024-05-17T00:27:27Z | null | Di-Zayn |
huggingface/transformers | 27,600 | How to get input sentence embedding from Llama or Llama2? | I'm trying to get an embedding for the sentence I input. I checked some common practices for doing it, but I'm not sure I'm doing it right. Who might be able to help? @gante Thanks in advance. My code is below:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained(
    args.pretrained_name_or_path,
    torch_dtype=torch.float16,
    device_map=device,
)
tokenizer = LlamaTokenizer.from_pretrained(args.pretrained_name_or_path, fast_tokenizer=True)
model.to(device)
model.eval()
tokenizer.pad_token_id = 0
tokenizer.padding_side = "left"

embeddings = []  # was missing in the original snippet; needed before the loop
for i in range(0, len(sentences), batch_size):
    batch_sentences = sentences[i: i + batch_size]
    inputs = tokenizer(batch_sentences, padding=True, truncation=False, return_tensors='pt')
    inputs = inputs.to(device)
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    hidden_states = outputs.hidden_states[-1]
    sentence_embeddings = hidden_states[:, -1, :]  # the **last token's** last-layer hidden state as the sentence embedding,
    # or sentence_embeddings = outputs.hidden_states[-1].mean(dim=1)  # average over positions instead.
    # I'm not sure which one is better.
    embeddings.append(sentence_embeddings.cpu())
embeddings = torch.cat(embeddings, dim=0)
```
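For what it's worth: with `padding_side = "left"`, taking `hidden_states[:, -1, :]` is sound (the last position is a real token), but the averaged variant `.mean(dim=1)` also averages the pad positions. The usual fix is a masked mean over `attention_mask`, i.e. `(h * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)`; a pure-Python sketch of the idea:

```python
def masked_mean(hidden, mask):
    # hidden: seq_len x dim, mask: seq_len of 0/1 (one row of attention_mask)
    dim = len(hidden[0])
    total, n = [0.0] * dim, 0
    for vec, m in zip(hidden, mask):
        if m:
            n += 1
            for j in range(dim):
                total[j] += vec[j]
    return [t / n for t in total]

# the pad position (the 100s) is excluded from the average
print(masked_mean([[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]], [1, 1, 0]))  # [2.0, 3.0]
```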
| https://github.com/huggingface/transformers/issues/27600 | closed | [] | 2023-11-20T13:18:08Z | 2023-11-22T14:32:26Z | null | waterluck |
huggingface/transformers | 27,592 | How to always use initial prompt in Whisper? | I checked this PR (#22496), but I still can't figure out how to always use the initial prompt. Is it possible to provide a use case?
huggingface/pytorch-image-models | 2,038 | how to run the efficientmit.py | 
| https://github.com/huggingface/pytorch-image-models/issues/2038 | closed | [
"enhancement"
] | 2023-11-19T02:50:59Z | 2023-11-19T17:16:48Z | null | 1377534928 |
huggingface/chat-ui | 566 | Is Chat-UI gonna support the new Assistant API? | They store the threads, and there's also multi-modal support | https://github.com/huggingface/chat-ui/issues/566 | open | [
"enhancement",
"models"
] | 2023-11-19T02:06:44Z | 2023-11-20T08:42:49Z | 1 | wayliums |
huggingface/alignment-handbook | 40 | How do I get the training scrips to utilize all my GPUs? | Hello there,
I'm running this script:
```
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml
```
... but on my machine with 2x3090s ... only GPU 0 is being utilized.
What do I need to change to utilize both of my 3090s for the training run?
Thanks | https://github.com/huggingface/alignment-handbook/issues/40 | closed | [] | 2023-11-19T00:11:24Z | 2023-11-19T01:20:21Z | null | ohmeow |
huggingface/transformers.js | 401 | [Question | Bug] What am I doing wrong while using the `question-answering` model? | ## The Problem
I'm trying to use the `question-answering` model to answer simple questions in a given context, but I always get a TypeError about floats. I guess it's an internal issue, since at the top level of my code I am not using floating-point numbers; but maybe I am doing something wrong.
By the way, I'm using TypeScript and I was following the [docs for this model](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.QuestionAnsweringPipeline).
## Code
```ts
/** THIS CODE IS WRAPPED BY AN ASYNC FUNCTION */
const { pipeline } = await import("@xenova/transformers");
const answerer = await pipeline(
"question-answering",
"Xenova/distilbert-base-uncased-distilled-squad"
);
const results = await answerer(
"Who is Dominic Toretto?",
"Dominic Toretto is part of the family."
);
```
## Error
```
TypeError: A float32 tensor's data must be type of function Float32Array()
```


| https://github.com/huggingface/transformers.js/issues/401 | closed | [
"question"
] | 2023-11-18T12:58:50Z | 2023-11-19T12:44:00Z | null | AyresMonteiro |
huggingface/transformers.js | 399 | [Question] Is it possible to encode and decode with `AutoTokenizer.from_pretrained` and keep spaces? | I'm trying to build a pure JS online tokenizer, visually similar to https://github.com/1rgs/tokenwiz (but without the Python backend)
I'm doing something like:
```js
const model = await AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')
const textInput = `[INST] <<SYS>>
You are a friendly Llama.
<</SYS>>
Do you spit at people? [/INST]`
const tokens = model.encode(textInput)
const tokenizedText = model.batch_decode(
tokens.map((token) => [token]),
{ clean_up_tokenization_spaces: false }
)
console.log(tokenizedText)
```
And get:
```js
0: "<s>"
1: "["
2: "INST"
3: "]"
4: "<<"
5: "SYS"
6: ">>"
7: "\n"
8: "You"
9: "are"
10: "a"
11: "friendly"
12: "L"
13: "l"
14: "ama"
15: "."
16: "\n"
17: "<"
18: "</"
19: "SYS"
20: ">>"
21: "\n"
22: "\n"
23: "Do"
24: "you"
25: "sp"
26: "it"
27: "at"
28: "people"
29: "?"
30: "["
31: "/"
32: "INST"
33: "]"
```
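A note on what I think is happening: the Llama/Mistral tokenizers are SentencePiece-based, so a leading space is baked into the token itself as U+2581 ("▁"), and decoding tokens one at a time strips it. If the raw token strings can be accessed, substituting the marker back keeps the word boundaries visible; a sketch of the idea in Python (the logic ports directly to JS):

```python
# SentencePiece marks a word-initial space inside the token as U+2581 ("▁");
# per-token decode drops it, which is why the spaces vanish.
tokens = ["\u2581Do", "\u2581you", "\u2581sp", "it", "\u2581at", "\u2581people", "?"]
visual = [t.replace("\u2581", " ") for t in tokens]
print("".join(visual))  # " Do you spit at people?"
```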
So while newlines are there, all the spaces are gone. Is there any way to get the original text back but with token boundaries for visualisation? | https://github.com/huggingface/transformers.js/issues/399 | closed | [
"question"
] | 2023-11-17T18:46:05Z | 2023-11-17T20:18:02Z | null | daaain |
huggingface/alignment-handbook | 39 | Why zephyr-7b-dpo-lora is finetuned from mistralai/Mistral-7B-v0.1 instead of zepher-7b-sft model? | There is a misalignment between zephyr-7b-dpo-lora and zephyr-7b-dpo-full.
The former is finetuned from mistralai/Mistral-7B-v0.1.
The latter is finetuned from zephyr-7b-sft-full.
I wonder what causes this misalignment?
Also, have you benchmarked the performance improvement of the LoRA finetuning script? In my experiments, LoRA finetuning does not seem to provide any improvement over the base model on MT-Bench. I think maybe some parameters are incorrect.
huggingface/optimum | 1,545 | Add support to export facebook encodec models to ONNX | ### Feature request
When I try to use optimum-cli to export the facebook/encodec_32khz model I get this error:
```
% optimum-cli export onnx --model facebook/encodec_32khz encodec.onnx
Framework not specified. Using pt to export to ONNX.
/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
Traceback (most recent call last):
File "/Users/micchig/micromamba/envs/music-representation/bin/optimum-cli", line 10, in <module>
sys.exit(main())
^^^^^^
File "/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/commands/optimum_cli.py", line 163, in main
service.run()
File "/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/commands/export/onnx.py", line 246, in run
main_export(
File "/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/exporters/onnx/__main__.py", line 408, in main_export
raise ValueError(
ValueError: Trying to export a encodec model, that is a custom or unsupported architecture for the task feature-extraction, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type encodec to be supported natively in the ONNX export.
```
I am following the advice in the message and opening an issue here. :)
### Motivation
I want to use the encodec model for inference, and I'd much rather use ONNX than import the pretrained model from transformers every time and run it in PyTorch, as ONNX is much faster.
### Your contribution
I'm afraid I can't contribute to this personally | https://github.com/huggingface/optimum/issues/1545 | open | [
"feature-request",
"onnx"
] | 2023-11-17T11:16:01Z | 2025-12-12T06:23:33Z | 6 | giamic |
huggingface/peft | 1,142 | How to do Gradient Checkpoint + LoRA | ### System Info
<img width="570" alt="image" src="https://github.com/huggingface/peft/assets/18441985/9b3ae040-d78a-477b-a9ec-6ab26b687a68">
### Who can help?
I need help with using LoRA + gradient checkpointing.
Using the reentrant option appears to be the solution, but it slows down training a lot; for Llama-7b it's more than 2x the training time of a full fine-tune on the same hardware (A100).
<img width="817" alt="image" src="https://github.com/huggingface/peft/assets/18441985/6c58b8b2-eb3c-472a-8643-dcec6193dfe6">
We should be able to just use vanilla gradient checkpoint.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# model_id, vocab = 'meta-llama/Llama-2-7b-hf', 32000
model_id, vocab = "stas/tiny-random-llama-2", 3000
seq_len = 1024
bs = 8
use_lora = True

model_config = dict(
    pretrained_model_name_or_path=model_id,
    device_map=0,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    torch_dtype=torch.bfloat16,
    use_cache=False,
)
model = AutoModelForCausalLM.from_pretrained(**model_config)

# Just freeze embeddings for small memory decrease
model.model.embed_tokens.weight.requires_grad_(False)

if use_lora:
    lora_config = LoraConfig(
        r=2,  # the rank of the LoRA matrices
        lora_alpha=16,  # the weight
        lora_dropout=0.1,  # dropout to add to the LoRA layers
        bias="none",  # add bias to the nn.Linear layers?
        task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # the name of the layers to add LoRA
    )
    model = get_peft_model(model, lora_config)

example = {"input_ids": torch.randint(0, vocab, size=(bs, seq_len), device="cuda:0"),
           "labels": torch.randint(0, vocab, size=(bs, seq_len), device="cuda:0")}

import torch, peft, accelerate, transformers
for lib in [torch, peft, accelerate, transformers]:
    print(f"{lib.__name__}: {lib.__version__}")

model.train()

def call_forward():
    with torch.amp.autocast("cuda", dtype=torch.bfloat16):
        out = model(**example)
        loss = out.loss
    return loss

%timeit loss=call_forward()
loss = call_forward()
loss.requires_grad
# 5.48 ms ± 31.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# True

model.gradient_checkpointing_enable()
%timeit loss=call_forward()
loss = call_forward()
loss.requires_grad
# 5.13 ms ± 33.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# False

model.gradient_checkpointing_enable(dict(use_reentrant=False))
%timeit loss=call_forward()
loss = call_forward()
loss.requires_grad
# 7.23 ms ± 40.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# True
```
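For what it's worth, the `requires_grad == False` with plain (reentrant) checkpointing can be reproduced with `torch.utils.checkpoint` alone: a reentrant checkpoint only propagates grad through its tensor inputs, and with the base weights frozen (as under LoRA) no input requires grad. A minimal sketch; the transformers-side workaround, as I understand it, is calling `model.enable_input_require_grads()` before enabling checkpointing:

```python
import torch
from torch.utils.checkpoint import checkpoint

lin = torch.nn.Linear(4, 4)
for p in lin.parameters():
    p.requires_grad_(False)          # frozen, like the base model under LoRA

x = torch.randn(2, 4)                # activations that do not require grad
y_frozen = checkpoint(lin, x, use_reentrant=True)

x2 = torch.randn(2, 4, requires_grad=True)  # what enable_input_require_grads simulates
y_grad = checkpoint(lin, x2, use_reentrant=True)

print(y_frozen.requires_grad, y_grad.requires_grad)  # False True
```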
### Expected behavior
Nothing to add here. | https://github.com/huggingface/peft/issues/1142 | closed | [] | 2023-11-17T09:34:16Z | 2025-10-06T10:22:58Z | null | tcapelle |
huggingface/accelerate | 2,164 | how to get same timestamp in different subprocesses while using accelerate launch | I would like to get a unique timestamp to name my result folder like below
```
import datetime

def get_time_string() -> str:
    x = datetime.datetime.now()
    return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}"
```
However, it sometimes gets a different timestamp in different subprocesses. Is there any way to get a single shared timestamp?
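As far as I can tell, each rank calls `datetime.now()` independently, so close-but-unequal times are expected. The usual fix is to create the name once on the main process and share it, e.g. with `accelerate.utils.broadcast_object_list([name], from_process=0)` (hedged; check the signature for your accelerate version). A dependency-free sketch of the same pattern using a shared file:

```python
import datetime, os, tempfile

def get_run_name(rank, path):
    # rank 0 decides the timestamp once; every other rank reads the same value
    if rank == 0:
        name = datetime.datetime.now().strftime("%y%m%d-%H%M%S")
        with open(path, "w") as f:
            f.write(name)
        return name
    # real multi-process code would barrier/wait here until rank 0 has written
    with open(path) as f:
        return f.read()

path = os.path.join(tempfile.mkdtemp(), "run_name")
names = [get_run_name(rank, path) for rank in range(3)]  # simulate 3 ranks in-process
print(len(set(names)) == 1)  # True: all ranks share one timestamp
```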
Thanks very much for your time! | https://github.com/huggingface/accelerate/issues/2164 | closed | [] | 2023-11-17T06:36:00Z | 2023-11-29T07:30:04Z | null | shliu0 |
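One stdlib-only workaround (a sketch, not from the thread; the `RUN_TIMESTAMP` variable name is my own): compute the timestamp once in the launching shell and export it, so every subprocess reads the identical value. Broadcasting from the main process with `accelerate.utils.broadcast_object_list` is another option.

```python
import datetime
import os

def get_time_string() -> str:
    # All ranks see the same value if the launcher exported it, e.g.
    #   RUN_TIMESTAMP=$(date +%y%m%d-%H%M%S) accelerate launch train.py
    if "RUN_TIMESTAMP" in os.environ:
        return os.environ["RUN_TIMESTAMP"]
    # Fallback: computed locally, so it may differ across processes.
    x = datetime.datetime.now()
    return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}"
```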
huggingface/open_asr_leaderboard | 14 | How to run calc_rtf.py? Cannot reproduce rtf results. | There is no guide on how to execute calc_rtf.py. For example, this one https://github.com/huggingface/open_asr_leaderboard/blob/main/transformers/calc_rtf.py references 4469669.mp3. But there is no such file in the repo from what I see.
So the results are not reproducible.
Same for https://github.com/huggingface/open_asr_leaderboard/blob/main/nemo_asr/calc_rtf.py What is /disk3/datasets/speech-datasets/earnings22/media/4469669.wav?
BTW, I don't recommend simply copying the same sample multiple times for an evaluation. It can cause performance that looks too good compared to running in production. While the data won't be cached, the same chunks of external language models will get hit multiple times, giving better-than-reality results, as one example. What that means is that, for example, the whisper models are never diverging across elements in the batch in the sequence they are producing. This can cause the embedding lookup to be better than it really should be.
I got my RTFx results in https://arxiv.org/abs/2311.04996 by caching the entire dataset in memory https://github.com/nvidia-riva/riva-asrlib-decoder/blob/8282368816552a7ee22c9340dce7b9c3c8d1f193/src/riva/asrlib/decoder/test_graph_construction.py#L77-L89 This is what we do at MLPerf Inference benchmarks as well, which is the gold standard for benchmarking.
huggingface/transformers.js | 397 | [Question] Tokenizing a base64 string is very slow? | Hi! I happened to be encoding some files using transformers.js, and one of the files happened to have some base64 in it. What I noticed is that base64 takes an enormously long time relative to the number of tokens produced. Tokenizing a string of English text into the same number of tokens is far quicker.
For example:
```javascript
const testBase64 =
"VGhlIFNwYW5pc2ggQ2l2aWwgV2FyIChTcGFuaXNoOiBHdWVycmEgQ2l2aWwgRXNwYcOxb2xhKVtub3RlIDJdIHdhcyBmb3VnaHQgZnJvbSAxOTM2IHRvIDE5MzkgYmV0d2VlbiB0aGUgUmVwdWJsaWNhbnMgYW5kIHRoZSBOYXRpb25hbGlzdHMuIFJlcHVibGljYW5zIHdlcmUgbG95YWwgdG8gdGhlIGxlZnQtbGVhbmluZyBQb3B1bGFyIEZyb250IGdvdmVybm1lbnQgb2YgdGhlIFNlY29uZCBTcGFuaXNoIFJlcHVibGljLCBhbmQgY29uc2lzdGVkIG9mIHZhcmlvdXMgc29jaWFsaXN0LCBjb21tdW5pc3QsIHNlcGFyYXRpc3QsIGFuYXJjaGlzdCwgYW5kIHJlcHVibGljYW4gcGFydGllcywgc29tZSBvZiB3aGljaCBoYWQgb3Bwb3NlZCB0aGUgZ292ZXJubWVudCBpbiB0aGUgcHJlLXdhciBwZXJpb2QuWzEyXSBUaGUgb3Bwb3NpbmcgTmF0aW9uYWxpc3RzIHdlcmUgYW4gYWxsaWFuY2Ugb2YgRmFsYW5naXN0cywgbW9uYXJjaGlzdHMsIGNvbnNlcnZhdGl2ZXMsIGFuZCB0cmFkaXRpb25hbGlzdHMgbGVkIGJ5IGEgbWlsaXRhcnkganVudGEgYW1vbmcgd2hvbSBHZW5lcmFsIEZyYW5jaXNjbyBGcmFuY28gcXVpY2tseSBhY2hpZXZlZCBhIHByZXBvbmRlcmFudCByb2xlLiBEdWUgdG8gdGhlIGludGVybmF0aW9uYWwgcG9saXRpY2FsIGNsaW1hdGUgYXQgdGhlIHRpbWUsIHRoZSB3YXIgaGFkIG1hbnkgZmFjZXRzIGFuZCB3YXMgdmFyaW91c2x5IHZpZXdlZCBhcyBjbGFzcyBzdHJ1Z2dsZSwgYSByZWxpZ2lvdXMgc3RydWdnbGUsIGEgc3RydWdnbGUgYmV0d2VlbiBkaWN0YXRvcnNoaXAgYW5kIHJlcHVibGljYW4gZGVtb2NyYWN5LCBiZXR3ZWVuIHJldm9sdXRpb24gYW5kIGNvdW50ZXJyZXZvbHV0aW9uLCBhbmQgYmV0d2VlbiBmYXNjaXNtIGFuZCBjb21tdW5pc20uWzEzXSBBY2NvcmRpbmcgdG8gQ2xhdWRlIEJvd2VycywgVS5TLiBhbWJhc3NhZG9yIHRvIFNwYWluIGR1cmluZyB0aGUgd2FyLCBpdCB3YXMgdGhlICJkcmVzcyByZWhlYXJzYWwiIGZvciBXb3JsZCBXYXIgSUkuWzE0XSBUaGUgTmF0aW9uYWxpc3RzIHdvbiB0aGUgd2FyLCB3aGljaCBlbmRlZCBpbiBlYXJseSAxOTM5LCBhbmQgcnVsZWQgU3BhaW4gdW50aWwgRnJhbmNvJ3MgZGVhdGggaW4gTm92ZW1iZXIgMTk3NS4KClRoZSB3YXIgYmVnYW4gYWZ0ZXIgdGhlIHBhcnRpYWwgZmFpbHVyZSBvZiB0aGUgY291cCBkJ8OpdGF0IG9mIEp1bHkgMTkzNiBhZ2FpbnN0IHRoZSBSZXB1YmxpY2FuIGdvdmVybm1lbnQgYnkgYSBncm91cCBvZiBnZW5lcmFscyBvZiB0aGUgU3BhbmlzaCBSZXB1YmxpY2FuIEFybWVkIEZvcmNlcywgd2l0aCBHZW5lcmFsIEVtaWxpbyBNb2xhIGFzIHRoZSBwcmltYXJ5IHBsYW5uZXIgYW5kIGxlYWRlciBhbmQgaGF2aW5nIEdlbmVyYWwgSm9zw6kgU2FuanVyam8gYXMgYSBmaWd1cmVoZWFkLiBUaGUgZ292ZXJubWVudCBhdCB0aGUgdGltZSB3YXMgYSBjb2FsaXRpb24gb2YgUmVwdWJsaWNhbnMsIHN1cHBvcnRlZCBpbiB0aGUgQ29ydGVzIGJ5IGNvbW11bmlzdCB
hbmQgc29jaWFsaXN0IHBhcnRpZXMsIHVuZGVyIHRoZSBsZWFkZXJzaGlwIG9mIGNlbnRyZS1sZWZ0IFByZXNpZGVudCBNYW51ZWwgQXphw7FhLlsxNV1bMTZdIFRoZSBOYXRpb25hbGlzdCBmYWN0aW9uIHdhcyBzdXBwb3J0ZWQgYnkgYSBudW1iZXIgb2YgY29uc2VydmF0aXZlIGdyb3VwcywgaW5jbHVkaW5nIENFREEsIG1vbmFyY2hpc3RzLCBpbmNsdWRpbmcgYm90aCB0aGUgb3Bwb3NpbmcgQWxmb25zaXN0cyBhbmQgdGhlIHJlbGlnaW91cyBjb25zZXJ2YXRpdmUgQ2FybGlzdHMsIGFuZCB0aGUgRmFsYW5nZSBFc3Bhw7FvbGEgZGUgbGFzIEpPTlMsIGEgZmFzY2lzdCBwb2xpdGljYWwgcGFydHkuWzE3XSBBZnRlciB0aGUgZGVhdGhzIG9mIFNhbmp1cmpvLCBFbWlsaW8gTW9sYSBhbmQgTWFudWVsIEdvZGVkIExsb3BpcywgRnJhbmNvIGVtZXJnZWQgYXMgdGhlIHJlbWFpbmluZyBsZWFkZXIgb2YgdGhlIE5hdGlvbmFsaXN0IHNpZGUuCgpUaGUgY291cCB3YXMgc3VwcG9ydGVkIGJ5IG1pbGl0YXJ5IHVuaXRzIGluIE1vcm9jY28sIFBhbXBsb25hLCBCdXJnb3MsIFphcmFnb3phLCBWYWxsYWRvbGlkLCBDw6FkaXosIEPDs3Jkb2JhLCBhbmQgU2V2aWxsZS4gSG93ZXZlciwgcmViZWxsaW5nIHVuaXRzIGluIGFsbW9zdCBhbGwgaW1wb3J0YW50IGNpdGllc+KAlHN1Y2ggYXMgTWFkcmlkLCBCYXJjZWxvbmEsIFZhbGVuY2lhLCBCaWxiYW8sIGFuZCBNw6FsYWdh4oCUZGlkIG5vdCBnYWluIGNvbnRyb2wsIGFuZCB0aG9zZSBjaXRpZXMgcmVtYWluZWQgdW5kZXIgdGhlIGNvbnRyb2wgb2YgdGhlIGdvdmVybm1lbnQuIFRoaXMgbGVmdCBTcGFpbiBtaWxpdGFyaWx5IGFuZCBwb2xpdGljYWxseSBkaXZpZGVkLiBUaGUgTmF0aW9uYWxpc3RzIGFuZCB0aGUgUmVwdWJsaWNhbiBnb3Zlcm5tZW50IGZvdWdodCBmb3IgY29udHJvbCBvZiB0aGUgY291bnRyeS4gVGhlIE5hdGlvbmFsaXN0IGZvcmNlcyByZWNlaXZlZCBtdW5pdGlvbnMsIHNvbGRpZXJzLCBhbmQgYWlyIHN1cHBvcnQgZnJvbSBGYXNjaXN0IEl0YWx5LCBOYXppIEdlcm1hbnkgYW5kIFBvcnR1Z2FsLCB3aGlsZSB0aGUgUmVwdWJsaWNhbiBzaWRlIHJlY2VpdmVkIHN1cHBvcnQgZnJvbSB0aGUgU292aWV0IFVuaW9uIGFuZCBNZXhpY28uIE90aGVyIGNvdW50cmllcywgc3VjaCBhcyB0aGUgVW5pdGVkIEtpbmdkb20sIEZyYW5jZSwgYW5kIHRoZSBVbml0ZWQgU3RhdGVzLCBjb250aW51ZWQgdG8gcmVjb2duaXNlIHRoZSBSZXB1YmxpY2FuIGdvdmVybm1lbnQgYnV0IGZvbGxvd2VkIGFuIG9mZmljaWFsIHBvbGljeSBvZiBub24taW50ZXJ2ZW50aW9uLiBEZXNwaXRlIHRoaXMgcG9saWN5LCB0ZW5zIG9mIHRob3VzYW5kcyBvZiBjaXRpemVucyBmcm9tIG5vbi1pbnRlcnZlbnRpb25pc3QgY291bnRyaWVzIGRpcmVjdGx5IHBhcnRpY2lwYXRlZCBpbiB0aGUgY29uZmxpY3QuIFRoZXkgZm91Z2h0IG1vc3RseSBpbiB0aGUgcHJvLVJlcHVibGljYW4gSW50ZXJuYXRpb25hbCBCcmlnYWRlcywgd2h
pY2ggYWxzbyBpbmNsdWRlZCBzZXZlcmFsIHRob3VzYW5kIGV4aWxlcyBmcm9tIHByby1OYXRpb25hbGlzdCByZWdpbWVzLg==";
const { AutoTokenizer } = await import("@xenova/transformers");
const tokenizer = await AutoTokenizer.from_pretrained(
"Xenova/all-MiniLM-L6-v2"
);
const startTime = Date.now();
const tokenized = tokenizer.encode(testBase64);
const endTime = Date.now();
console.log("It took ", endTime - startTime, "ms to tokenize");
const decoded = tokenizer.decode(tokenized);
console.log("Decoded: ", decoded);
```
Takes 56 seconds to tokenize and when decoded returns the same input string.
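A plausible explanation (my guess, not confirmed in the thread) is that WordPiece-style tokenization degrades badly on one huge unbroken "word": base64 contains no whitespace, so the pre-tokenizer hands the whole blob to the subword algorithm at once. A workaround sketch that bounds the length of any single run before tokenizing (`max_run` is an arbitrary illustrative value):

```python
def break_long_runs(text: str, max_run: int = 100) -> str:
    """Insert a space into any run of non-whitespace characters longer than
    max_run, so the pre-tokenizer never sees a giant single 'word'."""
    out = []
    run = 0
    for ch in text:
        if ch.isspace():
            run = 0
        else:
            run += 1
            if run > max_run:
                out.append(" ")
                run = 1
        out.append(ch)
    return "".join(out)
```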
Interestingly, similar logic | https://github.com/huggingface/transformers.js/issues/397 | closed | [
"question"
] | 2023-11-16T20:27:51Z | 2023-11-17T19:48:57Z | null | samlhuillier |
huggingface/transformers.js | 396 | [Question] How to use transformers.js in langchain | Hi all, I'm writing a custom LLM to use transformers.js with langchain. Does a structure like this make sense? Any advice for optimizing it or best practices to apply?
Any suggestions or feedback would be greatly appreciated!
```
import { pipeline } from "@xenova/transformers";
import { LLM } from "langchain/llms/base";
class MyHF extends LLM {
static instance = null;
constructor(modelTask = "text2text-generation", modelName = "Xenova/LaMini-Flan-T5-783M") {
super({ maxConcurrency: 1 });
this.modelTask = modelTask;
this.modelName = modelName;
this.llmModel = MyHF.getInstance(this.modelTask, this.modelName);
}
static async getInstance(modelTask, modelName, progress_callback = null) {
if (this.instance === null) {
this.instance = pipeline(modelTask, modelName, { progress_callback });
}
return this.instance;
}
_llmType() {
return "hf";
}
  async _call(prompt, options = { top_k: 1 }) {
    const executor = await MyHF.getInstance(this.modelTask, this.modelName);
    // pipelines return an array of results; take the first one
    const output = await executor(prompt, options);
    const { generated_text } = Array.isArray(output) ? output[0] : output;
    return generated_text;
  }
}
export default MyHF;
``` | https://github.com/huggingface/transformers.js/issues/396 | open | [
"question"
] | 2023-11-16T17:27:52Z | 2023-12-21T16:27:28Z | null | mrddter |
huggingface/autotrain-advanced | 349 | How to reload the checkpoints for LLM finetuning? | May I ask how to resume from the latest checkpoint using `autotrain llm` if it crashed. I only found one from the `dreambooth` trainers, but I cannot find the `resume_from_checkpoint` anywhere else.
I was wondering whether this feature is not fully supported yet, or whether I am missing something. It would be super helpful if anyone could kindly point out how to do this with autotrain.
Many thanks! | https://github.com/huggingface/autotrain-advanced/issues/349 | closed | [
"stale"
] | 2023-11-16T11:51:25Z | 2024-02-02T08:58:47Z | null | xihajun |
huggingface/trl | 1,004 | Guidance on how to fix the scheduler and ConstantLengthDataset | Hello,
I want to fix the issue related to the `ConstantLengthDataset` not knowing the dataset's length in advance.
Besides having a broken progressbar and a wrong epoch count, the only problem I see is related to the scheduler, as most of us are training using cosine with warmup; if we want a complete cycle, the scheduler needs the total number of steps to adjust the ratios accordingly.
One solution would be to "guess" how many batches/iterations of packed data we will see by grabbing some samples and estimating the total length. A function tries to do something like this by computing a char/tok ratio.
Do you have any advice so I can draft a PR?
Ohh I just saw that @lvwerra has a [PR](https://github.com/huggingface/trl/pull/979) in the works, but only for "finite" datasets.
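A rough sketch of the estimation idea described above (all names are hypothetical, not from trl): sample a few examples, measure a chars-per-token ratio, and extrapolate how many packed sequences the corpus will yield, which gives the scheduler a total step count.

```python
def estimate_packed_steps(texts, count_tokens, seq_length, batch_size, sample_size=256):
    """Rough optimizer-step estimate for a packed (ConstantLength-style) dataset.

    texts: list of training strings
    count_tokens: callable returning the token count of a string
    """
    sample = texts[:sample_size]
    chars_per_tok = sum(len(t) for t in sample) / max(sum(count_tokens(t) for t in sample), 1)
    # Extrapolate the token count of the full corpus from its character count.
    est_tokens = sum(len(t) for t in texts) / chars_per_tok
    est_sequences = est_tokens / seq_length
    return max(1, int(est_sequences // batch_size))

# Example with a fake 4-chars-per-token tokenizer:
steps = estimate_packed_steps(["x" * 40] * 100, lambda t: len(t) // 4, seq_length=10, batch_size=4)
```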
| https://github.com/huggingface/trl/issues/1004 | closed | [] | 2023-11-16T10:58:30Z | 2024-01-05T15:05:18Z | null | tcapelle |
huggingface/diffusers | 5,816 | low attention to prompt in SDXL | Hi,
One of the differences between DALL-E 3 and SDXL is that SDXL pays less attention to the prompt.
Is there a way to solve this problem? For example, could changing the text encoder to another one help?
Thanks
| https://github.com/huggingface/diffusers/issues/5816 | closed | [
"question",
"stale"
] | 2023-11-16T07:24:15Z | 2024-01-09T15:06:55Z | null | saeedkhanehgir |
huggingface/transformers | 27,526 | How to preupgrade transformer cache and build the upgraded into docker image? | ### System Info
Linux ubuntu 22.04
Docker 24.05
I am not sure if this is the right place for this issue. Apologies if it isn't; please direct me to the right place.
I have been using transformers in Docker images that are deployed at RunPod/Replicate. The containers of the images can go cold and be relaunched again and again. Each time, the container wastes 20 to 40 seconds on the below cache upgrade.
```
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
```
It would take around 20 to 40 seconds, which is a significant waste of our GPU time and container startup time.
I have tried to find out how to pre-upgrade the cache and build the upgraded cache into the Docker image by googling, but I couldn't find a way to do it.
Please advise how to pre-upgrade the cache and build the upgraded cache into the Docker image.
Many thanks.
### Expected behavior
The cache for model files is pre-upgraded and built into the container image to avoid the upgrade each time a container is launched.
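One possible fix (a sketch I have not tested in a real image): run the one-time migration the warning itself mentions, `transformers.utils.move_cache()`, during `docker build` instead of at container start, e.g. via a `RUN python warm_cache.py` step after the model files are downloaded. A hypothetical `warm_cache.py`:

```python
# warm_cache.py -- run once at image build time (RUN python warm_cache.py)
# so the migrated cache is baked into an image layer instead of being
# rebuilt on every container start.
from transformers.utils import move_cache

def migrate_cache() -> None:
    # One-time migration of the pre-v4.22 cache layout.
    move_cache()

if __name__ == "__main__":
    migrate_cache()
```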
| https://github.com/huggingface/transformers/issues/27526 | closed | [] | 2023-11-16T02:53:54Z | 2023-12-24T08:03:44Z | null | lanyusan |
huggingface/optimum | 1,538 | Does Optimum support AMD GPUs? | ### Feature request
ONNX Runtime supports AMD ROCm!
How can this be enabled when building Optimum?
### Motivation
Our company is currently testing AMD GPUs and has learned that Optimum can accelerate inference on CUDA. We are not sure whether it will support ROCm in the future.
### Your contribution
none | https://github.com/huggingface/optimum/issues/1538 | closed | [] | 2023-11-15T04:15:21Z | 2024-01-09T16:10:39Z | 1 | taikai-zz |
huggingface/tokenizers | 1,391 | How to split special token in encode? | i have converted a slow tokenizer into PreTrainedTokenizerFast, and get a tokenizer.json file.But i found that this tokenizer did not split special tokens.Here is my add_tokens in tokenizer.json:
` tokenizer.add_special_tokens(
[
AddedToken("[gMASK]", normalized=True, single_word=False),
AddedToken("sop", normalized=True, single_word=False),
]
)
`
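A guess at the cause (not confirmed in the thread): with `normalized=True`, the special-token string is passed through the normalizer before matching, so the literal `[gMASK]` may never match the raw input. A minimal sketch with a toy tokenizer, using `normalized=False` so the special token is extracted before pre-tokenization:

```python
from tokenizers import AddedToken, Tokenizer, models, pre_tokenizers

# Toy word-level tokenizer, just to demonstrate special-token splitting.
tok = Tokenizer(models.WordLevel({"hello": 0, "[UNK]": 1}, unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()
tok.add_special_tokens([AddedToken("[gMASK]", normalized=False, single_word=False)])

enc = tok.encode("[gMASK]hello")
# The special token should now be split out from the surrounding text.
```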
| https://github.com/huggingface/tokenizers/issues/1391 | closed | [] | 2023-11-15T03:41:22Z | 2024-01-04T06:26:38Z | null | leizhao1234 |
huggingface/diffusers | 5,786 | How to load a precomputed dataset in the cache folder on a different machine? | **Is your feature request related to a problem? Please describe.**
Some Slurm clusters have a limit on time allocation, so I'd like to precompute the dataset on my local machine and then move it to a location on the cluster to reuse it directly.
**Describe the solution you'd like**
I saw that `load_dataset` automatically creates Arrow files inside `~/.cache/imagefolder`, and the dataset folder path is translated into a hash. So I hope I can copy the dataset from there and pass it to `--dataset_name` when training the SDXL UNet, or perhaps there is some way I'm not aware of to reuse the precomputed cached dataset on a different machine.
**Describe alternatives you've considered**
please see above.
**Additional context**
please see above | https://github.com/huggingface/diffusers/issues/5786 | closed | [
"question",
"stale"
] | 2023-11-14T02:26:00Z | 2024-01-09T15:07:14Z | null | linnanwang |
huggingface/alignment-handbook | 22 | How to perform full parameter finetuning without A100 GPUs | Hi, thank you for your great work! I'd like to reproduce full parameter fine-tuning of dpo training. However I only have 10 * Nvidia A40 GPUs (46 Gbs memory each).
I tried the command
`CUDA_VISIBLE_DEVICES=2,3,4,5,6,7,8,9 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --main_process_port 6000 scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_full.yaml`
and it reported an OOM error, even if I set the batch size to 1.
I don't mind if the program runs a bit slower (e.g., using a smaller batch size and more gradient accumulation steps). However, I don't know if there is a way to successfully deploy the full-DPO code.
Can you help me, please?
Also, I'm wondering how large the performance gap is between LoRA and full-parameter fine-tuning. | https://github.com/huggingface/alignment-handbook/issues/22 | open | [] | 2023-11-14T01:33:41Z | 2024-02-14T13:47:16Z | null | ChenDRAG
huggingface/controlnet_aux | 83 | How to get keypoints output .json file like original OpenPose ? | https://github.com/huggingface/controlnet_aux/issues/83 | open | [] | 2023-11-13T21:55:35Z | 2023-11-17T21:04:49Z | null | mayank64ce | |
huggingface/chat-ui | 550 | Can this ui be run on a colab? | I am wondering if this ui can be used inside a colab. | https://github.com/huggingface/chat-ui/issues/550 | closed | [
"question"
] | 2023-11-13T16:58:35Z | 2023-11-15T16:17:10Z | null | amida47 |
huggingface/text-generation-inference | 1,258 | How to deal with bias=True Model | ### Feature request
How to deploy a model with bias=True. Example: vinai/PhoGPT-7B5-Instruct
### Motivation
.
### Your contribution
. | https://github.com/huggingface/text-generation-inference/issues/1258 | closed | [
"Stale"
] | 2023-11-13T09:20:08Z | 2024-01-20T01:46:38Z | null | anhnh2002 |
huggingface/trl | 985 | how to setup epoch number in SFTTrainer? | Here is my example code:
from datasets import load_dataset
from trl import SFTTrainer
dataset = load_dataset("imdb", split="train")
trainer = SFTTrainer(
"sshleifer/tiny-gpt2",
train_dataset=dataset,
dataset_text_field="text",
max_seq_length=512,
)
trainer.train() | https://github.com/huggingface/trl/issues/985 | closed | [] | 2023-11-12T20:02:31Z | 2023-11-14T18:29:53Z | null | KlausikPL |
huggingface/diffusers | 5,774 | How to fine tune Stable Diffusion on custom dataset {caption, image}? | I need to fine-tune SD on a custom dataset of {caption, image} pairs with a custom size. Could you please give me a tutorial for this task?
"stale"
] | 2023-11-12T14:52:23Z | 2024-01-09T15:07:21Z | null | npk7264 |
huggingface/diffusers | 5,772 | Is webdataset faster than the default huggingface datasets? | ### Describe the bug
Hi, I see there is a large-scale training example https://github.com/huggingface/diffusers/blob/controlnet_webdatasets/examples/controlnet/train_controlnet_webdatasets.py using webdataset, which suggests that webdataset may have better data-loading performance than huggingface datasets, which is organized with Apache Arrow.
Then, I'm wondering whether or not webdataset is a good choice for me. I have an image dataset with 350k images of size 768 x 768, and I use a batch size of 64 or 192. Is webdataset for me? Any help would be appreciated!
### Reproduction
.
### Logs
_No response_
### System Info
.
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/5772 | closed | [
"question",
"stale"
] | 2023-11-12T08:40:22Z | 2024-01-09T15:07:23Z | null | Luciennnnnnn |
huggingface/chat-ui | 549 | How can I use this offline with local models? | I really like the web_search feature. Can I somehow use it with local models? I tried, but I don't see any .bat files to launch it. | https://github.com/huggingface/chat-ui/issues/549 | closed | [
"support"
] | 2023-11-11T23:59:09Z | 2023-11-20T21:38:27Z | 9 | iChristGit |
huggingface/diffusers | 5,766 | Image+Image+Text to Image | Maybe a dumb question but I can't seem to find good ways to have multiple images to image modeling. I looked into Multi-ControlNet but I can't tell how to use it. I'm trying to train a model that takes in 2 images and a prompt:
1. a template base image (e.g. a photo of a room in someone's house with a painting on the wall)
2. a photo of a painting someone made (e.g. not a famous one like a Van Gogh, just someone's painting)
3. an optional text prompt describing the 2nd image...may not be necessary but curious what people here say
And I want to place image2 in image1 to replace the painting on the wall with the new one. Is this the right forum / model to use? I thought maybe creating a custom dataset and then simply feeding 2 image controls in would do the job but really could use some experts' guidance here. | https://github.com/huggingface/diffusers/issues/5766 | closed | [
"question",
"stale"
] | 2023-11-11T20:15:27Z | 2024-01-09T15:07:25Z | null | tval2 |
huggingface/optimum | 1,531 | Pytorch + TensorRT support | ### Feature request
Is it possible to start supporting Pytorch and TensorRT inference optimizations? There are a lot of use cases where it could be useful, and optimum seems to already have a lot of good tooling to enable this.
### Motivation
Using Pytorch or TensorRT in production is painful today, and requires a lot of custom optimizations.
### Your contribution
I could help with a PR. | https://github.com/huggingface/optimum/issues/1531 | closed | [
"feature-request",
"Stale"
] | 2023-11-11T17:27:47Z | 2025-02-27T02:04:37Z | 2 | youssefadr |
huggingface/optimum | 1,530 | AnimateDiff support? | ### Feature request
Hi!
Can you guys please support AnimateDiff for ONNX in the future? It would be great for both GPU (DirectML) and CPU.
Kind regards
### Motivation
Not a bug, just a feature that I would really like to see for us DirectML and CPU users of ONNX.
### Your contribution
I would, but I don't know anything about coding; I'm just a casual user.
"feature-request",
"Stale"
] | 2023-11-11T14:21:25Z | 2025-03-01T02:08:38Z | 1 | Amin456789 |
huggingface/autotrain-advanced | 338 | How to | I successfully trained the Mistral 7B sharded model on Google Colab using AutoTrain.
Now, how can I do inference? I am unable to merge the adapter with the base model. Can someone please share the inference code with me? Please help.
"stale"
] | 2023-11-11T12:58:24Z | 2024-05-06T13:35:52Z | null | eviIgenius |
huggingface/diffusers | 5,761 | The cost of consistency decoder | ### Describe the bug
I replaced the original VAE decoder of a Stable Diffusion model with the Consistency Decoder, and then a CUDA out-of-memory error occurred. My question is: how large is the Consistency Decoder compared to the original VAE decoder?
- `diffusers` version: 0.23.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Huggingface_hub version: 0.17.3
- Transformers version: 4.34.0
- Accelerate version: 0.23.0
- xFormers version: 0.0.18
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Reproduction
Decode a large latent
### Logs
_No response_
### System Info
..
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/5761 | closed | [
"question",
"stale"
] | 2023-11-11T03:54:20Z | 2024-01-09T15:07:30Z | null | Luciennnnnnn |
huggingface/candle | 1,319 | Question: How to edit specific indices of a tensor? | Hello everybody,
While developing beam search for candle-sampling, I have run into a small issue where it appears there is no way to edit specific indices of a tensor after creation. For example, in Python the following works for lists (and something very similar works for PyTorch tensors):
```python
values = [[1,2,3],[4,5,6]]
values[0][0] = 0
print(values) #[[0,2,3],[4,5,6]]
```
Is there an equivalent in `Candle` which I can use to edit specific indices of a tensor without creating a new tensor? | https://github.com/huggingface/candle/issues/1319 | closed | [] | 2023-11-11T01:10:42Z | 2023-11-26T15:53:19Z | null | EricLBuehler |
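Candle tensors are immutable, so there is no direct analog of `values[0][0] = 0`; the usual answer is a functional update that builds a new tensor, for example with a boolean mask and a where-style select (candle has `Tensor::where_cond` for this). A NumPy sketch of the same pattern:

```python
import numpy as np

def with_value_at(t: np.ndarray, row: int, col: int, value) -> np.ndarray:
    """Return a copy of t with t[row, col] replaced by value, without
    mutating the input (mirrors what immutable-tensor libraries require)."""
    mask = np.zeros(t.shape, dtype=bool)
    mask[row, col] = True
    return np.where(mask, value, t)

values = np.array([[1, 2, 3], [4, 5, 6]])
updated = with_value_at(values, 0, 0, 0)
```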
huggingface/datasets | 6,400 | Safely load datasets by disabling execution of dataset loading script | ### Feature request
Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code execution.
### Your contribution
n/a | https://github.com/huggingface/datasets/issues/6400 | closed | [
"enhancement"
] | 2023-11-10T23:48:29Z | 2024-06-13T15:56:13Z | 4 | irenedea |
huggingface/diffusers | 5,758 | how to run huggingface model in replicate | ### Describe the bug
I am trying to run the code from https://medium.com/ai-artistry/streamlining-ai-agent-development-with-autogen-and-llava-b84fb0d25262 with https://huggingface.co/LLaVA-VL/llava_plus_v0_7b instead of the Replicate code.
My question is: what are the challenges in running the Hugging Face model in place of Replicate?
Something like this:
```
response = replicate.run(
"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591",
input={"image": img, "prompt": prompt.replace("<image>", " ")}
)
```
I tried
```
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/LLaVA-VL/llava_plus_v0_7b", additional_tools={"prompt": "Show me a tree"})
agent.run(return_code=True)
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[15], line 4
1 from transformers import HfAgent
2 agent = HfAgent("https://api-inference.huggingface.co/models/LLaVA-VL/llava_plus_v0_7b", additional_tools={"prompt": "Show me a tree"})
----> 4 agent.run( return_code=True)
TypeError: Agent.run() missing 1 required positional argument: 'task'
```
### Reproduction
Challenges running the Hugging Face model in place of Replicate;
something like this:
```
response = replicate.run(
"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591",
input={"image": img, "prompt": prompt.replace("<image>", " ")}
)
```
### Logs
_No response_
### System Info
RTX 3090
### Who can help?
@patrickvonplaten @sayakpaul @williamberman | https://github.com/huggingface/diffusers/issues/5758 | closed | [
"bug"
] | 2023-11-10T20:31:04Z | 2023-11-11T03:33:51Z | null | andysingal |
huggingface/diffusers | 5,756 | How do we generate an LCM LoRA for an existing model? | I generated a DreamBooth model from SDXL base 1.0.
To get the speed boost of LCM, I need to generate an LCM LoRA from this model.
How do we do it? I don't see documentation.
"stale"
] | 2023-11-10T15:44:52Z | 2023-12-27T13:28:38Z | null | FurkanGozukara |
huggingface/chat-ui | 548 | MaxListenersExceededWarning: Possible EventEmitter memory leak detected. | Running dev, and no errors until i try to write into the chat interface on the website locally hosted in WSL2 (win11).
Worked before i updated to version v.0.6.0
error message in web ui:

Error message in terminal:
> root@xxxxxxxxx:/mnt/c/WSL/HuggingChat test/AI# npm run dev-chat-ui
>
> > ai@1.0.0 dev-chat-ui
> > cd ../chat-ui && npm run dev -- --host 0.0.0.0
>
>
> > chat-ui@0.6.0 dev
> > vite dev --host 0.0.0.0
>
>
>
> VITE v4.3.9 ready in 15775 ms
>
> ➜ Local: http://localhost:5173/
> ➜ Network: http://172.xx.142.227:5173/
> ➜ press h to show help
> (node:80446) **MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit**
> (Use `node --trace-warnings ...` to show where the warning was created)
> 2:44:12 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:
> |- TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Promise.all (index 0)
> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
>
> 2:44:12 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.ts: failed to import "/src/lib/server/websearch/sentenceSimilarity.ts"
> |- TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Promise.all (index 0)
> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
>
> 2:44:12 PM [vite] Error when evaluating SSR module /mnt/c/WSL/HuggingChat test/chat-ui/src/routes/conversation/[id]/+server.ts: failed to import "/src/lib/server/websearch/runWebSearch.ts"
> |- TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Promise.all (index 0)
> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)
> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)
>
> TypeError: fetch failed
> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)
> at processTicksAndRejections (node:internal/process/task_queues:95:5)
> at runNextTicks (node:internal/process/task_queues:64:3)
> at listOnTimeout (node:internal/timers:540:9)
> at process.processTimers (node:internal/timers:514:7)
> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)
> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)
> at async Pr | https://github.com/huggingface/chat-ui/issues/548 | closed | [
"support"
] | 2023-11-10T13:56:03Z | 2023-11-16T20:02:07Z | 7 | patchie |
huggingface/sentence-transformers | 2,355 | How to Finetune a Clip Model with Custom Data | I want to do my custom data training to get high accuracy embeddings of my image data.
Are there any scripts or documentation that would be helpful?
thank you. | https://github.com/huggingface/sentence-transformers/issues/2355 | closed | [] | 2023-11-10T07:27:23Z | 2023-12-25T03:23:20Z | null | unmo |
huggingface/diffusers | 5,742 | where is the Parameter Description? | https://github.com/huggingface/diffusers/issues/5742 | closed | [] | 2023-11-10T07:07:03Z | 2023-11-13T18:01:56Z | null | MRG-DOT | |
huggingface/setfit | 436 | [Question] Could you tell me the latest embedding model usable by SetFit? | Hi!
This is not a bug report but a question.
From my understanding, when we use SetFit, we have to choose an embedding model from Sentence Transformers.
But now I feel those models are kind of old, and I would like to know the latest embedding model which can be used by SetFit.
Thank you in adv | https://github.com/huggingface/setfit/issues/436 | closed | [
"question"
] | 2023-11-10T02:10:01Z | 2023-11-12T01:02:24Z | null | Yongtae723 |
huggingface/datasets | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | ### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.
If not using the format it is possible to directly use torchvision transforms but any non-transformed value will not be a tensor.
Is there a reason for this choice?
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([512, 512, 4])
```
### Expected behavior
```python
from datasets import Dataset, Features, Audio, Image
images = ["path/to/image.png"] * 10
features = Features({"image": Image()})
ds = Dataset.from_dict({"image": images}, features=features)
ds = ds.with_format("torch")
ds[0]["image"].shape
```
```python
torch.Size([4, 512, 512])
```
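As a stopgap, the (H, W, C) output can be permuted by hand; with torch this is just `img.permute(2, 0, 1)`. The axis shuffle itself, sketched here in plain Python on nested lists to stay dependency-free, is:

```python
def hwc_to_chw(img):
    """Convert a nested-list image from (H, W, C) to (C, H, W) layout."""
    h, w, c = len(img), len(img[0]), len(img[0][0])
    return [[[img[y][x][ch] for x in range(w)] for y in range(h)] for ch in range(c)]

# A 2x2 image with 3 channels, laid out (H, W, C).
hwc = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
chw = hwc_to_chw(hwc)  # 3 channel planes, each 2x2
```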
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.18.0
- PyArrow version: 14.0.1
- Pandas version: 2.1.2 | https://github.com/huggingface/datasets/issues/6394 | closed | [] | 2023-11-09T16:02:15Z | 2024-04-11T12:40:16Z | 9 | Modexus |
huggingface/transformers.js | 386 | [Question] Any plan to rewrite js in typescript ? | I'm doing it for my own usage, although I'm losing the benefit of upgrades.
Typings are useful, you know :)
While doing it I found this,
in models.js, line 1027:
```javascript
let sampledTokens = sampler(logits);
```
should be
```javascript
let sampledTokens = sampler.sample(logits);
``` | https://github.com/huggingface/transformers.js/issues/386 | closed | [
"question"
] | 2023-11-09T13:41:10Z | 2023-11-15T18:18:39Z | null | pnocera |
huggingface/candle | 1,304 | How to repeat_interleave on Tensor? | There is a [repeat_interleave](https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html) function in PyTorch, but I can't find an analog in candle.
I need to convert `tensor([[6110, 1]])` to `tensor([[6110, 1], [6110, 1], [6110, 1]])`.
I found some examples [like](https://github.com/huggingface/candle/blob/f772213e844fdfcc8dbaf662fc11819f4028dc78/candle-transformers/src/models/segment_anything/mask_decoder.rs#L234) this and [this](https://github.com/huggingface/candle/blob/73d02f4f57c788c43f3e11991635bc15701c25c0/candle-transformers/src/models/mpt.rs#L137). But in my case the result is `tensor([6110, 6110, 6110, 1, 1, 1])`.
Looks like I'm doing something wrong :-D I expect the same result as the Python code at https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L3090C31-L3090C31
How can I reproduce the Python example in the current candle version? | https://github.com/huggingface/candle/issues/1304 | closed | [] | 2023-11-09T06:31:04Z | 2023-11-09T08:16:19Z | null | bragovo |
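The distinction the report above runs into — a row-wise `repeat_interleave` versus a flattened element-wise repeat — can be sketched in plain Python (candle-agnostic; in candle one would typically build the row-wise repeat via `unsqueeze`/`broadcast_as`/`reshape`, but that usage is an assumption here, not taken from the report):

```python
def repeat_rows_interleave(rows, k):
    """Repeat each row k times, keeping the 2-D row structure."""
    return [row[:] for row in rows for _ in range(k)]

def repeat_flat(rows, k):
    """Flatten first, then repeat every element: the shape the report observed."""
    flat = [v for row in rows for v in row]
    return [v for v in flat for _ in range(k)]

rows = [[6110, 1]]
wanted = repeat_rows_interleave(rows, 3)   # [[6110, 1], [6110, 1], [6110, 1]]
observed = repeat_flat(rows, 3)            # [6110, 6110, 6110, 1, 1, 1]
```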
huggingface/diffusers | 5,709 | How to run stable diffusion pipeline using multithreading in fastapi ? | Hi, I have created a Stable Diffusion API using FastAPI and it works perfectly fine when sequential requests are made. I have tried to implement multithreading in the API to run multiple requests concurrently, but the problem is that every request's generation time depends on the total number of requests made. For example, if one request takes 5 seconds to run and 5 requests are made simultaneously, then it takes 5*5 = 25 seconds for every request to get an output. After researching this problem, I learned that the GIL (Global Interpreter Lock) in Python allows only one thread to execute per process, so with multithreading we get the same throughput as a single thread for this purpose. I have also tried multiprocessing to overcome this issue, but it loads a separate instance of the same model for each process, and it becomes very hard to fit all the models in 16 GB of RAM.
Do you know how to get the output at the same time for every request that is made? If 5 requests are made concurrently, then every request should get its output in 5 seconds only. Also, does GPU configuration matter for getting results quickly based on the number of requests?
GPU Configuration:
Nvidia 3050 8GB RAM
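For what it's worth, one common serving pattern for this situation is dynamic batching: queue concurrent requests and run them through the model as a single batch, so N simultaneous prompts cost roughly one batched forward pass instead of N sequential ones. The sketch below uses a dummy stand-in for the model, not the actual diffusers API:

```python
import asyncio

async def batch_worker(queue, run_batch, max_batch=8):
    """Drain queued (prompt, future) pairs and serve them as one batch."""
    while True:
        prompt, fut = await queue.get()
        batch = [(prompt, fut)]
        while len(batch) < max_batch and not queue.empty():
            batch.append(queue.get_nowait())
        results = run_batch([p for p, _ in batch])  # one model call for the whole batch
        for (_, f), r in zip(batch, results):
            f.set_result(r)

async def generate(queue, prompt):
    fut = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    # Dummy stand-in for a batched pipeline call such as pipe(prompts).
    run_batch = lambda prompts: [f"image for {p}" for p in prompts]
    worker = asyncio.create_task(batch_worker(queue, run_batch))
    outs = await asyncio.gather(*(generate(queue, f"p{i}") for i in range(5)))
    worker.cancel()
    return outs

results = asyncio.run(main())
```

In a real FastAPI app the worker would run for the lifetime of the process and `run_batch` would call the pipeline once per drained batch.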
@sayakpaul @patrickvonplaten | https://github.com/huggingface/diffusers/issues/5709 | closed | [
"stale"
] | 2023-11-08T16:19:45Z | 2024-01-09T15:07:46Z | null | minkvirparia |
huggingface/gsplat.js | 23 | How do you set up initial camera position? | When loading a splat file, I'd like to set the initial camera position to a specific location. How can this be achieved? | https://github.com/huggingface/gsplat.js/issues/23 | closed | [
"enhancement",
"question"
] | 2023-11-08T16:04:04Z | 2023-11-11T16:35:57Z | null | reconlabs-chris |
huggingface/safetensors | 381 | Would a CLI to perform convert operation be useful? | ### Feature request
Could it be possible to add to this repo a CLI tool that would use the library to convert files stored in different format and convert them to safetensors.
It would be useful to have also from the command line a way to introspect a model and find some property about it (layers, metadata, ...)
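A possible command-line surface for this request, purely as a hypothetical design sketch (none of these subcommands exist in safetensors today), could look like:

```python
import argparse

def build_parser():
    """Hypothetical `safetensors` CLI: convert checkpoints, inspect tensors."""
    parser = argparse.ArgumentParser(prog="safetensors")
    sub = parser.add_subparsers(dest="command", required=True)
    convert = sub.add_parser("convert", help="convert a checkpoint to .safetensors")
    convert.add_argument("src")
    convert.add_argument("--out", default=None)
    inspect = sub.add_parser("inspect", help="list tensors and metadata")
    inspect.add_argument("path")
    return parser

args = build_parser().parse_args(["convert", "model.bin", "--out", "model.safetensors"])
```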
### Motivation
I'm frustrated when I got a lot of example models on my disk that I'm not too sure about and I would like to have a quick and easy way from the command line to inspect them, convert them, compress them and do all the tasks I need to perform straight from the command line with completion support.
### Your contribution
I could contribute design suggestions about the interface but I have no particular knowledge of Rust and I'm learning transformers and ML in general. | https://github.com/huggingface/safetensors/issues/381 | closed | [
"Stale"
] | 2023-11-08T15:39:02Z | 2024-01-02T01:48:28Z | 2 | remyleone |
huggingface/transformers | 27,361 | Add how to preprocess mask for finetuning with SAM | ### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size expected as input for the SAM model.
For inference, this works fine as only the images need resizing but for fine-tuning as per [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb), you need to resize both your images and your masks as the SAM model produces `pred_masks` with size 256x256. If I don't resize my masks I get `ground truth has different shape (torch.Size([2, 1, 768, 1024])) from input (torch.Size([2, 1, 256, 256]))` when trying to calculate loss.
To fix this, I've currently written a resize and pad function into my code:
```
import numpy as np
from PIL import Image
def resize_mask(image):
longest_edge = 256
# get new size
w, h = image.size
scale = longest_edge * 1.0 / max(h, w)
new_h, new_w = h * scale, w * scale
new_h = int(new_h + 0.5)
new_w = int(new_w + 0.5)
resized_image = image.resize((new_w, new_h), resample=Image.Resampling.BILINEAR)
return resized_image
def pad_mask(image):
pad_height = 256 - image.height
pad_width = 256 - image.width
padding = ((0, pad_height), (0, pad_width))
padded_image = np.pad(image, padding, mode="constant")
return padded_image
def process_mask(image):
resized_mask = resize_mask(image)
padded_mask = pad_mask(resized_mask)
return padded_mask
```
and then have added this to my definition of SAMDataset:
```
class SAMDataset(Dataset):
def __init__(self, dataset, processor, transform = None):
self.dataset = dataset
self.processor = processor
self.transform = transform
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
item = self.dataset[idx]
if self.transform:
image = self.transform(item["pixel_values"])
else:
image = item["pixel_values"]
# get bounding box prompt
padded_mask = process_mask(item["label"])
prompt = get_bounding_box(padded_mask)
# prepare image and prompt for the model
inputs = self.processor(image, input_boxes=[[prompt]], return_tensors="pt")
# remove batch dimension which the processor adds by default
inputs = {k:v.squeeze(0) for k,v in inputs.items()}
# add ground truth segmentation
inputs["ground_truth_mask"] = padded_mask
return inputs
```
This seems to work fine.
What I think would be good is to allow input of masks in the SAM image processor. For example, the [Segformer image processor](https://github.com/huggingface/transformers/blob/v4.35.0/src/transformers/models/segformer/image_processing_segformer.py#L305) takes images and masks as inputs and resizes both to the size expected by the Segformer model.
I have also seen there is a 'post_process_mask' method in the SAM image processor but I am unsure how to implement this in the tutorial I'm following. If you think this is a better way vs. what I am suggesting then please could you explain where I would add this in the code from the tutorial notebook.
### Motivation
Easier fine tuning of SAM model.
### Your contribution
I could try write a PR for this and/or make a PR to update the [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) instead . | https://github.com/huggingface/transformers/issues/27361 | closed | [
"Feature request",
"Vision"
] | 2023-11-08T11:53:31Z | 2024-01-08T16:40:38Z | null | rwood-97 |
huggingface/chat-ui | 546 | Custom Theme | I want to change the UI layout yet still be able to update the code in order to enjoy the new features as they are released.
Is there a way to add my changes in a way that would be similar to a theme? or an outside addon?
| https://github.com/huggingface/chat-ui/issues/546 | closed | [] | 2023-11-08T08:26:43Z | 2023-11-15T09:32:22Z | 2 | kaplanyaniv |
huggingface/datasets | 6,388 | How to create 3d medical image dataset? | ### Feature request
I am new to Hugging Face. After looking through the `datasets` docs, I can't find how to create a dataset that contains 3D medical images (files ending with '.mhd', '.dcm', '.nii').
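One route worth noting (hedged: `Array3D` is a real `datasets` feature type, but the commented loading code below is an untested sketch that additionally assumes `nibabel` for the `.nii` files and a fixed example volume shape):

```python
from pathlib import Path

# Collect the volume files (extensions from the report); "scans" is a placeholder directory.
volume_paths = sorted(
    str(p) for p in Path("scans").glob("*") if p.suffix in {".mhd", ".dcm", ".nii"}
)

# Untested sketch of the datasets side (requires `datasets` and `nibabel`):
# from datasets import Dataset, Features, Array3D
# import nibabel as nib
# features = Features({"volume": Array3D(shape=(128, 128, 64), dtype="float32")})
# ds = Dataset.from_dict(
#     {"volume": [nib.load(p).get_fdata() for p in volume_paths]},
#     features=features,
# )
```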
### Motivation
Help us upload 3D medical datasets to Hugging Face!
### Your contribution
I'll submit a PR if I find a way to add this feature | https://github.com/huggingface/datasets/issues/6388 | open | [
"enhancement"
] | 2023-11-07T11:27:36Z | 2023-11-07T11:28:53Z | null | QingYunA |
huggingface/datasets | 6,387 | How to load existing downloaded dataset ? | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
-data
|-data_name
|-test-00000-of-00001-bf4c733542e35fcb.parquet
|-train-00000-of-00001-2a1df75c6bce91ab.parquet
```
Then I use SCP to clone this dataset into another machine, and then try:
```
from datasets import load_dataset
dataset = load_dataset('data/data_name') # load from local path
```
This leads to re-generating the training and validation splits every time, and the disk usage is duplicated.
How can I just load the dataset without generating and saving these splits again?
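For readers landing here, one workaround (assuming the cached shards are plain parquet files, as the listing above shows) is to point the parquet builder straight at the copied files; the mapping below uses the shard filenames from the report:

```python
from pathlib import Path

# Map the copied parquet shards to splits (filenames taken from the listing above).
data_dir = Path("data/data_name")
data_files = {
    "train": str(data_dir / "train-00000-of-00001-2a1df75c6bce91ab.parquet"),
    "test": str(data_dir / "test-00000-of-00001-bf4c733542e35fcb.parquet"),
}

# With the mapping in hand, the parquet loader reads the shards directly
# (no split re-generation); uncomment with `datasets` installed:
# from datasets import load_dataset
# dataset = load_dataset("parquet", data_files=data_files)
```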
### Motivation
I do not want to download the same dataset on two machines; SCP is much faster and better than the Hugging Face API. I hope we can directly load the downloaded datasets (.parquet).
### Your contribution
Please refer to the feature | https://github.com/huggingface/datasets/issues/6387 | closed | [
"enhancement"
] | 2023-11-06T22:51:44Z | 2023-11-16T18:07:01Z | null | liming-ai |
huggingface/gsplat.js | 15 | Does it work with polycam models? | Hello! Thank you for your work, it looks very promising. Got it working with the README file... Just tried it with a .ply object out of polycam and got error
```
Uncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4
at new Float32Array (<anonymous>)
at R.setData (Scene.ts:43:25)
at W.LoadAsync (Loader.ts:31:15)
at async main (main.ts:11:5)
```
with what file type is it compatible? Thanks! | https://github.com/huggingface/gsplat.js/issues/15 | closed | [
"question"
] | 2023-11-06T21:15:51Z | 2023-11-10T18:26:55Z | null | karen-pal |
huggingface/chat-ui | 545 | Chat-UI throws an 403 forbidden when access settings | When viewing the settings page after first setup the settings page fives the error: ```Failed to load resource: the server responded with a status of 403 (Forbidden) settings:1``` in the console. Without any explanation of what and why.
Setup:
```yaml
services:
# Chat ui webserver
chat-ui:
container_name: chat
build:
context: ./
dockerfile: Dockerfile
ports:
- 8080:3000
networks:
default:
ipv4_address: 172.25.0.2
# Mongo database
database:
container_name: mongo-chatui
image: "mongo:latest"
ports:
- 27017:27017
restart: always
environment:
- MONGO_INITDB_DATABASE=chat-ui
networks:
default:
ipv4_address: 172.25.0.3
networks:
default:
driver: bridge
ipam:
driver: default
config:
- subnet: 172.25.0.0/28
gateway: 172.25.0.1
```
And my .env.local:
```
MONGODB_URL=mongodb://172.25.0.3:27017
PUBLIC_ORIGIN=http://localhost:3030
HF_ACCESS_TOKEN=redacted
MODELS=redacted
```
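One mismatch worth checking (an observation, not a confirmed fix): the compose file publishes the UI on port 8080, while `PUBLIC_ORIGIN` points at port 3030. chat-ui checks the request origin against its configured origin, and a mismatch is a plausible source of a 403 on the settings POST. The origin would then need to match the published port:

```
PUBLIC_ORIGIN=http://localhost:8080
```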
What are the steps to take here?
The database connection is accepted according to the MongoDB instance | https://github.com/huggingface/chat-ui/issues/545 | closed | [
"support"
] | 2023-11-06T15:09:33Z | 2024-02-15T21:03:04Z | 5 | IT-Guy007 |
huggingface/alignment-handbook | 9 | How to finetune or lora on custom dataset | How to finetune or lora on custom dataset | https://github.com/huggingface/alignment-handbook/issues/9 | open | [] | 2023-11-05T02:38:33Z | 2024-11-11T07:52:57Z | null | universewill |
huggingface/peft | 1,080 | Add docs on how to merge adapters after 4bit QLoRA with PEFT 0.6 | ### Feature request
There has been some controversy on how to correctly **merge the adapters with the base model after 4bit LoRA** training.
To me it seems there are two ways to merge and save:
- ChrisHayduk https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930
- TheBloke https://github.com/TheBlokeAI/AIScripts/blob/main/merge_peft_adapters.py
What is the correct way to merge the adapters now (with PEFT 0.6 and [PR 851](https://github.com/huggingface/peft/pull/851) merged) after training a 4-bit quantized model ?
### Motivation
No docs, at least I haven't found any.
### Your contribution
example:
**quantize and train**
```
modelpath="models/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
load_in_4bit=True,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
),
torch_dtype=torch.bfloat16,
)
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r=64,
lora_alpha=16,
target_modules =
['q_proj',
'k_proj',
'down_proj',
'v_proj',
'gate_proj',
'o_proj',
'up_proj'],
lora_dropout=0.1,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
train ...
```
**merge and save**
```
base_model = AutoModelForCausalLM.from_pretrained(
"models/Mistral-7B-v0.1",
return_dict=True,
torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base_model, "some-checkpoint")
model = model.merge_and_unload()
model.save_pretrained(args.out, safe_serialization=True)
```
Is this the proper way to do it? If yes or no, it would be nice to have this documented somewhere! 🤗
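For intuition on what `merge_and_unload` computes — separate from the quantization question above — the LoRA merge itself is W' = W + (alpha/r)·B·A, shown here on toy matrices in plain Python:

```python
def matmul(a, b):
    """Plain nested-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(w, a, b, lora_alpha, r):
    """Fold a LoRA update into the base weight: W' = W + (alpha / r) * B @ A."""
    scale = lora_alpha / r
    delta = matmul(b, a)
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

w = [[1.0, 0.0], [0.0, 1.0]]  # base weight (2x2)
b = [[1.0], [0.0]]            # B: 2x1 (rank r = 1)
a = [[0.0, 2.0]]              # A: 1x2
merged = merge_lora(w, a, b, lora_alpha=16, r=1)
```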
| https://github.com/huggingface/peft/issues/1080 | closed | [] | 2023-11-04T10:07:16Z | 2023-11-17T22:22:06Z | null | geronimi73 |
huggingface/huggingface_hub | 1,801 | Entire operation gets cancelled when 1 file fails when using api.upload_folder - how to make it iterative | I am using the code below. I uploaded around 80 GB of files and the entire operation failed just because 1 PNG failed to upload for some reason.
I see the uploaded repo has 0 changes.
How can I make it iterative, so that after each file upload it is committed to the repo?
I don't need commit or file history. Just upload newer files and overwrite if newer.
```
from huggingface_hub import HfApi
api = HfApi()
# Upload all the content from the local folder to your remote Space.
# By default, files are uploaded at the root of the repo
api.upload_folder(
folder_path="/workspace/path",
repo_id="username/repo",
repo_type="model",
)
```
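A per-file loop makes the failure unit a single file instead of the whole folder. The helper below is a hypothetical sketch: the injected `do_upload` callable stands in for something like `api.upload_file(path_or_fileobj=..., path_in_repo=..., repo_id=..., repo_type=...)`, which commits each file independently:

```python
import time

def upload_files_one_by_one(files, do_upload, retries=3, delay=0.0):
    """Upload each file in its own commit; retry per file, never abort the run."""
    failed = []
    for path in files:
        for attempt in range(retries):
            try:
                do_upload(path)
                break
            except Exception:
                time.sleep(delay)
        else:
            failed.append(path)  # give up on this file, keep going with the rest
    return failed

# Fake uploader for demonstration: "bad.png" fails on every attempt.
def fake_upload(path):
    if path == "bad.png":
        raise RuntimeError("upload failed")

failed = upload_files_one_by_one(["a.bin", "bad.png", "b.bin"], fake_upload)
```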
### Reproduction
_No response_
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.16.4
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: ME
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1+cu118
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.5.0
- hf_transfer: N/A
- gradio: 3.41.2
- tensorboard: N/A
- numpy: 1.23.5
- pydantic: 1.10.12
- aiohttp: 3.8.5
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
| https://github.com/huggingface/huggingface_hub/issues/1801 | closed | [
"bug"
] | 2023-11-04T00:20:00Z | 2023-11-26T09:09:35Z | null | FurkanGozukara |
huggingface/transformers.js | 378 | Security issue - content security policy - script unsafe-eval | Context:
I use the @xenova/transformers 2.6.2 npm package from a web application to do image classification. Here is the gist of my setup:
```js
const modelPath = 'own-domain/models-and-wasm/'
env.localModelPath = "/";
env.useBrowserCache = true;
env.backends.onnx.wasm.wasmPaths = modelPath;
const classifier = await pipeline("image-classification", modelPath, { quantized: true });
const output = await classifier(imagePath, { topk: 5 });
```
Everything works code-wise but when I remove unsafe-inline in CSP, it fails with this warning in the browser console:
```js
Failed to asynchronously prepare wasm:
CompileError: WebAssembly.instantiate(): Refused to compile or instantiate WebAssembly module because 'unsafe-eval' is not an allowed source of script in the following Content Security Policy directive
```
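Not part of the original report, but one avenue worth checking for the WebAssembly compile error above: CSP Level 3 defines a `'wasm-unsafe-eval'` source expression that permits WebAssembly compilation without enabling JavaScript `eval`. In browsers that support it, a policy along these lines lets the WASM backend compile while keeping `'unsafe-eval'` out (browser support varies, so verify against the corporate target browsers):

```
Content-Security-Policy: script-src 'self' 'wasm-unsafe-eval'
```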
I **cannot** allow script-src: unsafe-eval in my web application (corporate rules). Do I have any alternatives? | https://github.com/huggingface/transformers.js/issues/378 | open | [
"question"
] | 2023-11-03T13:50:30Z | 2023-11-06T13:44:57Z | null | stiano |
huggingface/diffusers | 5,643 | How to use the ip adapter controlnet? | Hi, I can't use this specific controlnet because it's from here: https://huggingface.co/lllyasviel/sd_control_collection/tree/main
and the format doesn't allow from_pretrained. When I use from_single_file, I get:
```
stable_diffusion/convert_from_ckpt.py", line 422, in convert_ldm_unet_checkpoint
new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
```
I used this to get the error:
`ControlNetModel.from_single_file("./ip-adapter_sd15_plus.pth", torch_dtype=torch.float32,local_files_only=True).to('cuda')`
a similar error was raised and the response was: "just don't use from_single_file" https://github.com/huggingface/diffusers/issues/5577 | https://github.com/huggingface/diffusers/issues/5643 | closed | [] | 2023-11-03T13:34:44Z | 2023-11-13T15:12:29Z | null | alexblattner |
huggingface/dataset-viewer | 2,050 | Should we support video datasets? | Like https://huggingface.co/datasets/commaai/commavq
There was a previous intent in datasets: https://github.com/huggingface/datasets/pull/5339 | https://github.com/huggingface/dataset-viewer/issues/2050 | closed | [
"question",
"feature request"
] | 2023-11-03T13:33:00Z | 2023-12-11T15:04:08Z | null | severo |