repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 ⌀ | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 8,700 | [PAG] add `StableDiffusionXLControlNetPAGImg2ImgPipeline` | We recently integrated PAG into diffusers! See the PR here: https://github.com/huggingface/diffusers/pull/7944
Does anyone want to add a `StableDiffusionXLControlNetPAGImg2ImgPipeline`?
1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
2. you can use the implementation of [`StableDiffusionXLControlNetPAGPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py) and [`StableDiffusionXLPAGImg2ImgPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py) as reference
3. you need to add AutoPipeline so that you can use this API to create it
```python
AutoPipelineForImage2Image.from_pretrained(repo_id, controlnet=controlnet, enable_pag=True ...)
```
4. tests and docs
| https://github.com/huggingface/diffusers/issues/8700 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-25T18:52:18Z | 2024-08-21T17:24:23Z | 6 | yiyixuxu |
huggingface/sentence-transformers | 2,779 | what is the default tokenizer when "No sentence-transformers model found with name"? | I'm trying to use the sentence-transformer dangvantuan/sentence-camembert-large model and I'm getting a "no model found" error. This error is probably because some Sentence-Transformers-specific files are missing in their Huggingface (modules.json and config_sentence_transformers.json).
But then, Sentence Transformer warns it will create a new model with mean pooling, and this model performs really well on my data (!).
So, I would like to know what the tokeniser's model is when the model name hasn't been found? | https://github.com/huggingface/sentence-transformers/issues/2779 | closed | [] | 2024-06-25T15:17:58Z | 2024-07-05T10:42:27Z | null | Hortatori |
huggingface/accelerate | 2,891 | How to set a custom Config in python code using Accelerate? | Hello everyone!
Could you please advise how to replace the console command for setting a config
```
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2}
```
with code in the Python file script_name.py?
I am expecting something like the following functionality:
```
from accelerate import Accelerator
accelerator = Accelerator()
accelerator.set_config_file('path/to/config/my_config_file.yaml')
```
I would like to run the script through Python and use all the benefits of launching with the Accelerate launch command with config file:
```
python script_name.py
```
| https://github.com/huggingface/accelerate/issues/2891 | closed | [] | 2024-06-25T11:56:10Z | 2024-10-07T15:08:01Z | null | konstantinator |
huggingface/diffusers | 8,693 | SD3 + SDXL refine fix lying on grass. How to do in diffusers colab workflow? | this is comfy workflow

how can i do in diffusers colab workflow? | https://github.com/huggingface/diffusers/issues/8693 | closed | [
"stale"
] | 2024-06-25T07:30:55Z | 2024-09-23T11:37:25Z | null | s9anus98a |
huggingface/text-generation-inference | 2,113 | how to launch a service using downloaded model weights? | ### System Info
I have downloaded model weights of bge-models, and I want to launch a model service using TGI, the command is :
```
model=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
revision=refs/pr/5
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all \
-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \
--model-id $model --port 3001 --revision $revision
```
but I got the follwing error:
```
2024-06-25T03:13:34.201754Z INFO text_embeddings_router: router/src/main.rs:140: Args { model_id: "BAA*/***-*****-**-v1.5", revision: Some("refs/pr/5"), tokenization_workers: None, dtype: None, pooling: None, max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, hf_api_token: None, hostname: "54903bb17567", port: 3001, uds_path: "/tmp/text-embeddings-inference-server", huggingface_hub_cache: Some("/data"), payload_limit: 2000000, api_key: None, json_output: false, otlp_endpoint: None, cors_allow_origin: None }
2024-06-25T03:13:34.201950Z INFO hf_hub: /root/.cargo/git/checkouts/hf-hub-1aadb4c6e2cbe1ba/b167f69/src/lib.rs:55: Token file not found "/root/.cache/huggingface/token"
2024-06-25T03:13:36.546198Z INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:20: Starting download
Error: Could not download model artifacts
Caused by:
0: request error: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)
1: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)
2: error trying to connect: Connection reset by peer (os error 104)
3: Connection reset by peer (os error 104)
4: Connection reset by peer (os error 104)
```
It seems to download model from huggingface but I want to use my private model weight.
my privatre weight:
```
>> ls /storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
1_Pooling model.safetensors README.md tokenizer_config.json
config.json modules.json sentence_bert_config.json tokenizer.json
config_sentence_transformers.json pytorch_model.bin special_tokens_map.json vocab.txt
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
docker run --gpus all \
-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \
--model-id $model --port 3001 --revision $revision
### Expected behavior
luanch the service successfully | https://github.com/huggingface/text-generation-inference/issues/2113 | closed | [] | 2024-06-25T03:18:14Z | 2024-06-28T03:50:10Z | null | chenchunhui97 |
huggingface/chat-ui | 1,302 | Assistant feature: Send user query as part of template variable GET request | Trying to integrate RAG as an assistant. Thinking of using a template variable that makes a GET request (with the prompt as the request body), to get the relevant documents as context. Is this possible (i.e. there is a special variable in the system prompt page for the user query), or is there a better way of doing this? | https://github.com/huggingface/chat-ui/issues/1302 | closed | [] | 2024-06-24T22:27:02Z | 2025-01-02T12:09:23Z | 2 | ethayu |
huggingface/diffusers | 8,683 | Why do Diffusers schedulers produce lower quality outputs compared to ComfyUI? | ### Discussed in https://github.com/huggingface/diffusers/discussions/8682
<sup>Originally posted by **nducthang** June 24, 2024</sup>
Hi,
I'm encountering an issue when comparing the quality of ComfyUI and Diffusers. I've noticed that the output of Diffusers is consistently lower than ComfyUI in many cases, despite using the same settings and seed. For the base Diffusers, I've utilized: https://github.com/huggingface/diffusers/blob/main/examples/community/lpw_stable_diffusion_xl.py.
Upon closer inspection, I've identified differences in the scheduler/ksampler between the two base codes. I've also observed variations in CLIP Embedding between the two base codes, but in my experiments, this hasn't significantly impacted the output. The main issue seems to lie with the KSampler.
Has anyone else encountered this issue or have any ideas on improving the Scheduler algorithm of Diffusers?
Here are some prompts I've experimented:
Model: RVXL - Size: (896, 1152)
Positive prompt:
```
female, attractive woman, pretty middle-aged woman, thick hair, (((Caucasian, European, Scandinavian female))), ((hazel eyes, HazelEyed)). (Brunette (Light-Brown-Hair)), ((((long rectangular face, elongated face, oblong face shape, angular chiseled face)), ((wide jaw, big strong chin)))). (((1980s magazine advertisement. Living room. CRT Televesion. 1980s aesthetic. 1980s interior design.))) [object Object] . high quality, dim lighting, soft lighting, sharp focus, f5.6, dslr, High Detail, detailed, ((wide shot))
```
Negative prompt:
```
(((male))), (small chin, receding-chin, puffy face), (((Asian, Chinese, Korean, Japanese, Indian, Pakistani, Black, African, Persian, Arab, Middle Eastern, Hispanic, Latino))), (small chin, receding-chin, puffy face), (blurry), (BadDream:1.2), (UnrealisticDream:1.2), ((bad-hands-5)), (strabismus, cross-eyed:1.2), (signature, watermark, name), (worst quality, poor quality, low quality), ((deformed)), (extra limbs), (extra arms), (extra legs), disfigured, malformed, (nude:1.4), (naked:1.4), (nsfw:1.4), (bikini:1.4), (lingerie:1.4), (underwear:1.4), (teen:1.4), (tween:1.4), (teenage:1.4), (kid:1.6), (child:1.6), (topless, shirtless:1.4), (((greyscale))), (cleavage:1.2), (nipples:1.4)
``` | https://github.com/huggingface/diffusers/issues/8683 | closed | [] | 2024-06-24T14:37:19Z | 2024-06-25T06:06:12Z | 20 | nducthang |
huggingface/alignment-handbook | 174 | Question about torch_dtype when runnging run_orpo.py | I have been using `run_orpo.py` with my personal data successfully. However, as I use it, I have a question.
When I look at the code for `run_orpo.py`, I see that there is a code to match torch_dtype to the dtype of the pretrained model. However, when I actually train and save the model, even if the pretrained model's dtype was `bf16`, it gets changed to `fp32`. Why is this happening? | https://github.com/huggingface/alignment-handbook/issues/174 | closed | [] | 2024-06-23T08:28:02Z | 2024-07-30T05:05:03Z | 6 | sylee96 |
huggingface/diffusers | 8,666 | Attention api changes no documentation ? | how can i see ur previous changes on attention ?
u have rename`` _slice_size , _sliced_attention and _attention`` attribute from attention
need to know what are alternative using of its ? | https://github.com/huggingface/diffusers/issues/8666 | closed | [] | 2024-06-23T07:08:58Z | 2024-06-23T11:31:47Z | 4 | xalteropsx |
huggingface/transformers.js | 819 | Blog on walkthrough with transformers js | ### Question
Hey, So I am writing this blog part of sharing knowledge in a blog series called Running AI/ML in the client. I am using transformer js example walkthrough in this part to validate some concepts. Can I get some feedback before it goes live? How do we connect? | https://github.com/huggingface/transformers.js/issues/819 | closed | [
"question"
] | 2024-06-23T06:06:42Z | 2024-06-27T19:10:05Z | null | ArijitCloud |
huggingface/trl | 1,763 | What is the difference between PPOv2Trainer and PPOTrainer? | What is the difference between PPOv2Trainer and PPOTrainer? And in trl\examples\scripts\ppo\ppo.py and trl\examples\scripts\ppo.py , there are two dpo.py files, can you tell me what is different between them? | https://github.com/huggingface/trl/issues/1763 | closed | [] | 2024-06-22T14:48:38Z | 2024-08-24T09:25:52Z | null | mst272 |
huggingface/diffusers | 8,649 | SD3 - num_images_per_prompt no longer honoured (throws error) | ### Describe the bug
With models prior to SD3, the parameter num_images_per_prompt is honoured, enabling generation of several images per prompt. With sd3-medium an error is generated.
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
Note: I have insufficient VRAM to run tests without clearing text_encoder_3 and tokenizer_3 and am not sure how to use the
sd3_medium_incl_clips_t5xxlfp8.safetensors variant in a normal diffusers workflow. It is always possible that clearing the T5-xxl has a side-effect of breaking num_images_per_prompt.
### Reproduction
```
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained(
"stabilityai/stable-diffusion-3-medium-diffusers",
text_encoder_3=None,
tokenizer_3=None,
torch_dtype=torch.float16
)
pipe.to("cuda")
image = pipe(
"A cat holding a sign that says hello world",
negative_prompt="",
num_inference_steps=28,
num_images_per_prompt=2,
guidance_scale=7.0,
).images[0]
image.save("sd3_hello_world-no-T5.png")
```
### Logs
```shell
Traceback (most recent call last):
File "/home/developer/src/hug_test_txt2img_sd3.py", line 12, in <module>
image = pipe(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 778, in __call__
) = self.encode_prompt(
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 413, in encode_prompt
prompt_embeds = torch.cat([clip_prompt_embeds, t5_prompt_embed], dim=-2)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
```
### System Info
- 🤗 Diffusers version: 0.29.0
- Platform: Linux-6.8.0-35-generic-x86_64-with-glibc2.35
- Running on a notebook?: No
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.23.4
- Transformers version: 4.41.2
- Accelerate version: 0.31.0
- PEFT version: 0.11.1
- Bitsandbytes version: not installed
- Safetensors version: 0.4.3
- xFormers version: 0.0.27+133d7f1.d20240619
- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB VRAM
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/8649 | closed | [
"bug"
] | 2024-06-20T11:28:22Z | 2024-06-29T13:05:28Z | 4 | zagglez |
huggingface/transformers.js | 814 | Consultation on the use of the library with chatbot models | ### Question
Hello, Greetings Vladimir, programmer in a web environment with PHP, JS, AJAX, first I apologize for my English, my native language is Latin Spanish, I am not very good at writing it, I have used a translator, I wanted to consult, how can I use this interesting and useful tool, to be able to create a chatbot that can respond with personalized information from PDFs, the query is more like using the library, how to use the models both from Hugging Face and downloaded from the script that you share in the documentation and which models would be the most useful for this task considering that you will have to speak in Spanish, I remain attentive | https://github.com/huggingface/transformers.js/issues/814 | open | [
"question"
] | 2024-06-20T03:24:34Z | 2024-07-29T10:47:24Z | null | mate07 |
huggingface/optimum | 1,912 | Could you provide the official onnx model of Qwen-VL-Chat(-Int4)? | ### Feature request
Qwen-VL-Chat(-Int4) is useful to image-to-text model.
### Motivation
The image-to-text LMM model just like Qwen-VL-Chat(-Int4) is very useful.
### Your contribution
Not yet. | https://github.com/huggingface/optimum/issues/1912 | open | [
"feature-request",
"quantization"
] | 2024-06-19T08:43:58Z | 2024-10-09T07:52:54Z | 0 | yzq1990 |
huggingface/diffusers | 8,626 | More thorough guidance for multiple IP adapter images/masks and a single IP Adapter | ### Describe the bug
I'm trying to use a single IP adapter with multiple IP adapter images and masks. This section of the docs gives an example of how I could do that: https://huggingface.co/docs/diffusers/v0.29.0/en/using-diffusers/ip_adapter#ip-adapter-masking
The docs provide the following code:
```python
from diffusers.image_processor import IPAdapterMaskProcessor
mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")
output_height = 1024
output_width = 1024
processor = IPAdapterMaskProcessor()
masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"])
pipeline.set_ip_adapter_scale([[0.7, 0.7]]) # one scale for each image-mask pair
face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png")
ip_images = [[face_image1, face_image2]]
masks = [masks.reshape(1, masks.shape[0], masks.shape[2], masks.shape[3])]
generator = torch.Generator(device="cpu").manual_seed(0)
num_images = 1
image = pipeline(
prompt="2 girls",
ip_adapter_image=ip_images,
negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
num_inference_steps=20,
num_images_per_prompt=num_images,
generator=generator,
cross_attention_kwargs={"ip_adapter_masks": masks}
).images[0]
```
One important point that should be highlighted is that images/scales/masks must be _lists of lists_ , otherwise we get the following error: `Cannot assign 2 scale_configs to 1 IP-Adapter`.
That error message is intuitive enough, however this gets confusing in other sections of the documentation, such as the `set_ip_adapter_scale()` function:
```python
# To use original IP-Adapter
scale = 1.0
pipeline.set_ip_adapter_scale(scale)
# To use style block only
scale = {
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
# To use style+layout blocks
scale = {
"down": {"block_2": [0.0, 1.0]},
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
# To use style and layout from 2 reference images
scales = [{"down": {"block_2": [0.0, 1.0]}}, {"up": {"block_0": [0.0, 1.0, 0.0]}}]
pipeline.set_ip_adapter_scale(scales)
```
Is it possible to use the style and layout from 2 reference images _with a single IP Adapter_?
I tried doing something like the following, which _builds on the knowledge of needing to use a list of lists_:
```python
# List of lists to support multiple images/scales/masks with a single IP Adapter
scales = [[{"down": {"block_2": [0.0, 1.0]}}, {"up": {"block_0": [0.0, 1.0, 0.0]}}]]
pipeline.set_ip_adapter_scale(scales)
# OR
# Use layout and style from InstantStyle for one image, but also use a numerical scale value for the other
scale = {
"down": {"block_2": [0.0, 1.0]},
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale([[0.5, scale]])
```
but I get the following error:
```
TypeError: unsupported operand type(s) for *: 'dict' and 'Tensor'\n
At:
/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py(2725): __call__
/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py(549): forward
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py(366): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/transformers/transformer_2d.py(440): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/unets/unet_2d_blocks.py(1288): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\n /usr/local/lib/python3.10/dist-packages/diffusers/models/unets/unet_2d_condition.py(1220): forward\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\n /usr/local/lib/python3.10/dist-packages/torch/nn/mod | https://github.com/huggingface/diffusers/issues/8626 | closed | [
"bug",
"stale"
] | 2024-06-18T18:06:37Z | 2024-09-23T11:36:10Z | 11 | chrismaltais |
huggingface/datasets | 6,979 | How can I load partial parquet files only? | I have a HUGE dataset about 14TB, I unable to download all parquet all. I just take about 100 from it.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I just using 000 - 100 from a 00314 from all partially?
I search whole net didn't found a solution, **this is stupid if they didn't support it, and I swear I wont using stupid parquet any more**
| https://github.com/huggingface/datasets/issues/6979 | closed | [] | 2024-06-18T15:44:16Z | 2024-06-21T17:09:32Z | 12 | lucasjinreal |
huggingface/pytorch-image-models | 2,211 | How to Replicate Official Model Accuracy | Based on the accuracy provided by the official source, how can one replicate and train these models?
For example, for mobilenetv4_hybrid_large.e600_r384_in1k with a top-1 accuracy of 84.266
where can one find the training hyperparameters such as epochs, scheduler, warmup epochs, learning rate, batch size, and other parameters to replicate the model's performance? | https://github.com/huggingface/pytorch-image-models/issues/2211 | closed | [
"enhancement"
] | 2024-06-18T05:30:59Z | 2024-06-24T23:36:45Z | null | usergxx |
huggingface/chat-ui | 1,290 | ERROR: Exception in ASGI application | Hello everyone, I have the following problem when using Huggingface ChatUI with FastChat. How can I change the configuration? Use npm to start development mode.
Thanks
```
MODELS=`[
{
"name": "Infinirc-7b-Llama2",
"id": "Infinirc-7b-Llama2",
"model": "Infinirc-7b-Llama2",
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024,
"stop": []
},
"endpoints": [{
"type" : "openai",
"baseURL": "http://69.30.85.183:22152/v1",
"accessToken": "x"
}]
}
]`
```
FastChat:
```
`2024-06-18 01:07:42 | INFO | stdout | INFO: 59.125.15.126:60166 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
2024-06-18 01:07:42 | ERROR | stderr | ERROR: Exception in ASGI application
2024-06-18 01:07:42 | ERROR | stderr | Traceback (most recent call last):
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
2024-06-18 01:07:42 | ERROR | stderr | result = await app( # type: ignore[func-returns-value]
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
2024-06-18 01:07:42 | ERROR | stderr | return await self.app(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await super().__call__(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
2024-06-18 01:07:42 | ERROR | stderr | raise exc
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.app(scope, receive, _send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 85, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.app(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 65, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | raise exc
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | await app(scope, receive, sender)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 756, in __call__
2024-06-18 01:07:42 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 776, in app
2024-06-18 01:07:42 | ERROR | stderr | await route.handle(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 297, in handle
2024-06-18 01:07:42 | ERROR | stderr | await self.app(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 77, in app
2024-06-18 01:07:42 | ERROR | stderr | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | raise exc
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-06-18 01:07:42 | ERROR | stderr | await app(scope, receive, sender)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 72, in app
2024-06-18 01:07:42 | ERROR | stderr | response = await func(request)
2024-06-18 01:07:42 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
2024-06-18 01:07:42 | ERROR | stderr | raw_response = await run_endpoint_function(
2024-06-18 01:07:42 | ERRO | https://github.com/huggingface/chat-ui/issues/1290 | open | [
"support"
] | 2024-06-18T02:07:50Z | 2024-06-23T13:26:59Z | 1 | rickychen-infinirc |
huggingface/autotrain-advanced | 684 | Where is the fine-tuned model output? | I’m new to using AutoTrain on Hugging Face and I encountered an issue during my first attempt at fine-tuning a model. I have a free account, because I want to see whether I can get something to work before I start paying for training. Here’s a summary of what I did and the problem I’m facing:
Training Configuration:
I trained using Mistral-7B-Instruct-v0.2 and also openai-community/gpt2.
Dataset: I uploaded a tiny JSONL file (24 records) with a single “text” field for training.
Training Parameters: I set the training to run for one epoch.
Training Process:
The training ran for a couple of seconds.
I received a message that the space was paused, which I assumed meant the training had completed.
Issue:
After the training supposedly completed, I can’t find any output files or trained models.
I checked all available tabs and sections in the AutoTrain interface but didn’t see anything labeled “Models,” “Artifacts,” “Results,” or similar.
I reviewed the logs but didn’t find any clear indications of where the output is stored.
I checked my Hugging Face profile under the “Models” heading, but it says “None yet.”
Questions:
Where should I look in the AutoTrain interface to find the trained model and output files?
Are there any additional steps I need to take to ensure the trained model is saved and accessible?
With a free account, I don’t have any GPUs assigned. But is that a problem with only 24 short training samples and one epoch?
Any guidance or tips would be greatly appreciated!
| https://github.com/huggingface/autotrain-advanced/issues/684 | closed | [] | 2024-06-17T23:01:53Z | 2024-06-22T03:49:27Z | null | RonPisaturo |
huggingface/transformers | 31,453 | How to build and evaluate a vanilla transformer? | ### Model description
"Attention Is All You Need" is a landmark 2017 research paper authored by eight scientists working at Google, responsible for expanding 2014 attention mechanisms proposed by Bahdanau et al. into a new deep learning architecture known as the transformer with an encoder, cross-attention, and a decoder.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
EncoderDecoderModels are supported via the huggingface API. Though it isn't possible to evaluate them properly: https://github.com/huggingface/transformers/issues/28721
How is it possible to build and evaluate a vanilla transformer with an encoder, cross-attention, and a decoder in huggingface? | https://github.com/huggingface/transformers/issues/31453 | closed | [] | 2024-06-17T17:17:11Z | 2024-11-04T13:56:06Z | null | Bachstelze |
huggingface/parler-tts | 74 | How to do with flan-t5 when i want to finetune based on Mini v0.1 but not from scratch? Flan t5 can not deal my language. | https://github.com/huggingface/parler-tts/issues/74 | open | [] | 2024-06-17T06:39:24Z | 2024-06-17T06:39:24Z | null | lyt719 | |
huggingface/candle | 2,269 | How to select which GPU to use | We are working with the stable diffusion example. How do we select which GPU device on our system to use for the rendering?
thanks. | https://github.com/huggingface/candle/issues/2269 | open | [] | 2024-06-16T19:53:18Z | 2024-06-21T19:29:31Z | null | donkey-donkey |
huggingface/chat-ui | 1,283 | SELF_SIGNED_CERT_IN_CHAIN | I am experiencing this error. I'm on a corporate VPN and I tried turning it off and still the same error. The TLS reject is set to false as well.
SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error errno SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error request to https://registry.npmjs.org/failed, reason: self-signed certificate in certificate chain | https://github.com/huggingface/chat-ui/issues/1283 | open | [
"support"
] | 2024-06-14T04:03:48Z | 2024-06-17T06:50:29Z | 2 | solanki-aman |
huggingface/diffusers | 8,527 | how to add controlnet in sd3! | I currently use inpainting controlnet in sdxl because it uses unet to easily support controlnet. And I am curious about how to add controlnet in sd3 with transforms model structure. | https://github.com/huggingface/diffusers/issues/8527 | closed | [] | 2024-06-13T10:14:38Z | 2024-08-24T04:20:28Z | null | appleyang123 |
huggingface/lerobot | 266 | Question - how to handle additional sensory input | Hi guys, sorry to bother you again :wink:
and thanks for your work, I'm very excited by Lerobot!
I'm currently collecting some teleop data where the robot has tactile sensors on the fingertips, as well as a FT sensor on the wrist and I was wondering how I would integrate this best into a Lerobot Dataset.
One way would be to concatenate them into the `observation.state`, as this is the hardcoded location for non-image observations. But I want to train both with and without the tactile sensors and FT sensors as inputs to quantify the benefits of the other sensors, so I would then have to make separate datasets for each sensor combination which feels cumbersome.
Are there any plans in the near future to support 'dynamic configuration' of the state inputs for the policies? Or is my best option to just create different datasets for each combination?
| https://github.com/huggingface/lerobot/issues/266 | closed | [
"question",
"dataset",
"stale"
] | 2024-06-13T08:39:26Z | 2025-10-23T02:29:29Z | null | tlpss |
huggingface/nanotron | 196 | how to run benchmark tests | Hi,
I can build this project with your commands, but there is no "pyaottriton" when ran the benchmark test like: benchmark_forward.py or benchmark_backward.py.
anything I missed?
Thanks | https://github.com/huggingface/nanotron/issues/196 | closed | [] | 2024-06-13T08:31:06Z | 2024-06-13T08:38:24Z | null | jinsong-mao |
huggingface/chat-ui | 1,277 | Difficulties with chat-ui promp to text-generation-webui openai api endpoint | Hello,
I'm trying my best to get the huggingface ```chat-ui``` working with the API endpoint of ```text-generation-webui```.
I would be really happy if I could get a hint what I am doing wrong.
Here is a reverse proxied test instance: https://chat-ui-test.pischem.com/
I can't get my prompt that I input into the chat-ui to pass to the text-generation-webui. Every prompt will be ignored and a random answer is returned.
Here is the command I start ```text-generation-webui```:
<details>
```./start_linux.sh --listen --listen-port 8000 --api --api-port 8001 --verbose --model NTQAI_Nxcode-CQ-7B-orpo```
</details>
Here is my current ```.local.env``` of the ```chat-ui``` and the command I run it with:
<details>
```npm run dev -- --host```
```
MODELS=`[
{
"name": "text-generation-webui",
"id": "text-generation-webui",
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"max_new_tokens": 1024,
"stop": []
},
"endpoints": [{
"type" : "openai",
"baseURL": "http://172.16.0.169:8001/v1",
"extraBody": {
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000
}
}]
}
]`
MONGODB_URL=`mongodb://localhost:27017`
DEBUG=`true`
```
</details>
Here are the logs what happen when I write a prompt:
```chatui```:
<details>
```
> chat-ui@0.9.1 dev
> vite dev --host
VITE v4.5.3 ready in 777 ms
➜ Local: http://localhost:5173/
➜ Network: http://172.16.0.135:5173/
➜ Network: http://172.17.0.1:5173/
➜ press h to show help
(node:6250) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
[13:58:52.476] INFO (6250): [MIGRATIONS] Begin check...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Update search assistants" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Update deprecated models in assistants with the default model" should not be applied for this run. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Add empty 'tools' record in settings" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Convert message updates to the new schema" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Convert message files to the new schema" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] "Trim message updates to reduce stored size" already applied. Skipping...
[13:58:52.478] INFO (6250): [MIGRATIONS] All migrations applied. Releasing lock
[13:58:52.498] INFO (6250): Metrics server listening on port 5565
Browserslist: caniuse-lite is outdated. Please run:
npx update-browserslist-db@latest
Why you should do it regularly: https://github.com/browserslist/update-db#readme
(node:6250) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(node:6250) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
Source path: /opt/chat-ui/src/lib/components/chat/FileDropzone.svelte?svelte&type=style&lang.css
Setting up new context...
Source path: /opt/chat-ui/src/lib/components/chat/ChatInput.svelte?svelte&type=style&lang.css
Source path: /opt/chat-ui/src/lib/components/ToolsMenu.svelte?svelte&type=style&lang.css
Source path: /opt/chat-ui/src/lib/components/chat/ChatMessage.svelte?svelte&type=style&lang.css
JIT TOTAL: 265.317ms
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
Source path: /opt/chat-ui/src/lib/components/OpenWebSearchResults.svelte?svelte&type=style&lang.css
Source path: /opt/chat-ui/src/lib/components/chat/ToolUpdate.svelte?svelte&type=style&lang.css
JIT TOTAL: 1.355ms
(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()
(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()
Source path: /opt/chat-ui/src/styles/main.css
Setting up new context...
Finding changed files: 8.775ms
Reading changed files: 158.906ms
Sorting candidates: 7.72ms
Generate rules: 397.398ms
Build stylesheet: 11.899ms
Potential classes: 8755
Active contexts: 2
JIT TOTAL: 767.815ms
Source path: /opt/chat-ui/src/styles/main.css?inline=
Setting up new context...
Finding changed files: 3.466ms
Reading changed files: 119.942ms
Sorting candidates: 7.852ms
Generate rules: 339.343ms
Build stylesheet: 6.497ms
Potential classes: 8755
Active contexts: 3
JIT TOTAL: 635.226ms | https://github.com/huggingface/chat-ui/issues/1277 | closed | [
"support"
] | 2024-06-12T14:18:12Z | 2025-01-30T18:46:22Z | 7 | Monviech |
huggingface/chat-ui | 1,275 | Feature Request - support for session sharing, archiving, and collaboration | AFAIK, HuggingChat (HC) currently has no support for session sharing, archiving, and collaboration. At least, neither the HC server nor my GitHub (GH) searching found anything like this. So, if this doesn't exist, please consider how it could be implemented. For example, if I wanted to publish an HC session, maybe I could ask HC to send me a transcript in a form suitable for sharing (e.g., as a GH repo). To reduce friction, perhaps I could simply ask HC to create (or update) a repo.
Making it easy for HC users (and researchers) to examine and/or collaborate on sessions seems to me to be a Good Thing... | https://github.com/huggingface/chat-ui/issues/1275 | open | [
"question"
] | 2024-06-12T11:35:31Z | 2024-06-14T05:24:08Z | null | RichMorin |
huggingface/lerobot | 263 | Seeking advice on how to choose between ACT and DP algorithms | Hello,
Thank you very much for the work you have done in bringing together the current excellent imitation learning collections for convenient use. Regarding the ACT algorithm and DP algorithm, besides the basic differences in the algorithms themselves, how should one choose between them for different tasks? Do they have specific types of tasks they are particularly suited for? I have just started using your project and am unsure how to select the appropriate algorithm. I would greatly appreciate any advice you can provide.
Thank you! | https://github.com/huggingface/lerobot/issues/263 | closed | [
"question"
] | 2024-06-12T07:45:39Z | 2024-06-19T14:02:43Z | null | le-wei |
huggingface/dataset-viewer | 2,899 | Standardize access to metrics and healthcheck | In some apps, the metrics and healthcheck are public:
- https://datasets-server.huggingface.co/admin/metrics
- https://datasets-server.huggingface.co/sse/metrics
- https://datasets-server.huggingface.co/sse/healthcheck
- https://datasets-server.huggingface.co/healthcheck
- On others, it’s forbidden or not found:
- https://datasets-server.huggingface.co/metrics
- https://datasets-server.huggingface.co/filter/metrics
As @severo suggests, it should be coherent among all the services. (Do we want the metrics to be public, or not?)
| https://github.com/huggingface/dataset-viewer/issues/2899 | open | [
"question",
"infra",
"P2"
] | 2024-06-11T14:39:10Z | 2024-07-11T15:38:17Z | null | AndreaFrancis |
huggingface/lerobot | 261 | Which low cost robot with teleoperation to test the library ? | Firstly, thank you for all the work. At my company we would like to obtain results on real robots from this repository. However, the original setups are either quite expensive (around ~30k for Aloha) or require reconstruction for the UMI interface from Colombia via 3D printing, which would be time-consuming considering we don't have direct experience in the subject.
**Do you have any recommendations for one or more robots with a low-cost teleoperation setup on which we could test and iterate quickly on these algorithms?** I have seen some people doing things with low-cost robots on LinkedIn, and I will reach out to them, but apparently, they do not seem to be selling them.
Thanks, | https://github.com/huggingface/lerobot/issues/261 | closed | [
"question"
] | 2024-06-11T13:21:32Z | 2024-07-23T07:55:15Z | null | RochMollero |
huggingface/diarizers | 11 | How can I save the model locally before pushing it to the Hub ?! | https://github.com/huggingface/diarizers/issues/11 | closed | [] | 2024-06-11T06:37:45Z | 2024-06-13T16:24:19Z | null | ma-mohsen | |
huggingface/parler-tts | 68 | How to predict after finetune? There is no config.json in checkpoint dir. | https://github.com/huggingface/parler-tts/issues/68 | open | [] | 2024-06-11T03:30:04Z | 2024-06-17T01:57:04Z | null | lyt719 | |
huggingface/transformers.js | 802 | Long running transcription using webgpu-whisper | ### Question
Noob question - the [webgpu-whisper](https://github.com/xenova/transformers.js/tree/v3/examples/webgpu-whisper) demo does real time transcription, however it doesn't build out a full transcript from the start ie. 2 mins into transcription, the first few transcribed lines disappear.
Transcript at time x 👇
```
Cool, let's test this out. We'll see how this works. So turns out that the transcription when I try to access it is actually just empty. And so the only thing that actually comes through is. So yeah, so the output that's getting cut is basically coming from the
```
Transcript at time x+1 👇
```
this out, we'll see how this works. So turns out that the transcription when I try to access it is actually just empty. And so the only thing that actually comes through is. So yeah, so the output that's getting cut is basically coming from the work
```
Note how the "Cool, let's test" is missing from the start of the second transcript.
I'm wondering what it would take to keep building the transcript for a long running meeting without losing any of the previously transcribed stuff?
I tried a naive appending approach and that just results in a transcript full of repetition.
So I'm very curious about what it would take to build out a streaming transcription similar to what something like [Deepgram](https://developers.deepgram.com/docs/node-sdk-streaming-transcription) would offer. Would that require a change to the pipeline? Are there models that can take an appended transcript with lots of repetition and trim it down to a clean transcript?
Please let me know if my questions are unclear. Just looking for some direction so that I can potentially put up a PR for this (if needed).
| https://github.com/huggingface/transformers.js/issues/802 | open | [
"question"
] | 2024-06-10T16:44:01Z | 2025-05-30T05:52:37Z | null | iamhitarth |
huggingface/sentence-transformers | 2,738 | How is `max_length` taken into account compared to models setting | What happens under the hood, if I set max_length > than model's max_length?
it seems to work, but are inputs truncated or doi you apply RoPE-Extension? | https://github.com/huggingface/sentence-transformers/issues/2738 | open | [] | 2024-06-09T15:59:09Z | 2024-06-10T06:45:49Z | null | l4b4r4b4b4 |
huggingface/datasets | 6,961 | Manual downloads should count as downloads | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
This would ensure that downloads are accurately reported to end users.
### Your contribution
N/A | https://github.com/huggingface/datasets/issues/6961 | open | [
"enhancement"
] | 2024-06-09T04:52:06Z | 2024-06-13T16:05:00Z | 1 | umarbutler |
huggingface/diffusers | 8,439 | How to use EDM2 model with diffusers? | model safetensors: https://huggingface.co/RedRocket/Fluffyrock-Unbound/blob/main/Fluffyrock-Unbound-v1-1.safetensors
yaml: https://huggingface.co/RedRocket/Fluffyrock-Unbound/raw/main/Fluffyrock-Unbound-v1-1.yaml
colab demo:
https://colab.research.google.com/drive/1LSGvjWXNVjs6Tthcpf0F5VwuTFJ_d-oB
results:

| https://github.com/huggingface/diffusers/issues/8439 | open | [
"stale"
] | 2024-06-09T03:39:05Z | 2024-09-14T15:10:19Z | null | s9anus98a |
huggingface/transformers | 31,323 | Language modeling examples do not show how to do multi-gpu training / fine-tuning | ### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@muellerz @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
n/a
### Expected behavior
The `run_clm.py` and other related scripts in:
`https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling`
notionally support training / fine-tuning of models whose gradients are too large to fit on a single GPU, if you believe their CLI. However there is no example showing how to actually do that.
For instance, `accelerate estimate-memory` says training the Mistral-7B family with Adam takes roughly 55 GB with float16, which is more memory than a single 40GB A100 has. So I'd need to use more than one GPU.
Would it be possible to modify the language_modeling documentation to explain how to do that?
| https://github.com/huggingface/transformers/issues/31323 | closed | [
"Documentation"
] | 2024-06-07T18:49:35Z | 2024-12-02T08:11:31Z | null | csiefer2 |
huggingface/candle | 2,258 | How to Implement New Operators Using CUDA Host Functions Along with Thrust and CUB Libraries | As stated, the CUDA code in the candle-kernels repository seems to only contain kernel functions. When I want to implement new operators (such as nonzero), it seems I'm only able to use Rust for higher-level functionality, which means I cannot utilize the device_vector from Thrust or the flagged APIs from CUB. This poses a significant challenge for implementing my algorithms. For example, to implement nonzero, it seems I would have to reimplement algorithms like exclusive_scan and scatter using the current approach?
I am hoping for a better way to utilize the CUDA ecosystem!
Specifically, I'm interested in how to:
1. Incorporate host functions in CUDA code to facilitate the use of libraries like Thrust and CUB.
2. Effectively leverage these libraries to implement algorithms and operators that are not natively supported in the current codebase.
Any guidance or best practices for achieving this would be greatly appreciated.
(Translate from Chinese using LLM, Might be a little bit.. formal^_^) | https://github.com/huggingface/candle/issues/2258 | open | [] | 2024-06-07T16:52:44Z | 2024-06-09T15:56:36Z | null | chenwanqq |
huggingface/text-generation-inference | 2,035 | What is TGI's graceful shutdown behavior? | When SIGKILL arrives,
- does TGI process all pending inputs?
- does TGI blocks incoming inputs?
I saw a PR that adds graceful shutdown but it did not specify the exact program behavior. | https://github.com/huggingface/text-generation-inference/issues/2035 | closed | [] | 2024-06-07T06:24:00Z | 2024-06-07T08:08:51Z | null | seongminp |
huggingface/tokenizers | 1,549 | How to use `TokenizerBuilder`? | I expected `TokenizerBuilder` to produce a `Tokenizer` from the `build()` result, but instead `Tokenizer` wraps `TokenizerImpl`.
No problem, I see that it impl `From<TokenizerImpl> for Tokenizer`, but it's attempting to do quite a bit more for some reason? Meanwhile I cannot use `Tokenizer(unwrapped_build_result_here)` as the struct is private 🤔 (_while the `Tokenizer::new()` method won't take this in either_)
---
```rs
let mut tokenizer = Tokenizer::from(TokenizerBuilder::new()
.with_model(unigram)
.with_decoder(Some(decoder))
.with_normalizer(Some(normalizer))
.build()
.map_err(anyhow::Error::msg)?
);
```
```rs
error[E0283]: type annotations needed
--> mistralrs-core/src/pipeline/gguf_tokenizer.rs:139:41
|
139 | let mut tokenizer = Tokenizer::from(TokenizerBuilder::new()
| ^^^^^^^^^^^^^^^^^^^^^ cannot infer type of the type parameter `PT` declared on the struct `TokenizerBuilder`
|
= note: cannot satisfy `_: tokenizers::PreTokenizer`
= help: the following types implement trait `tokenizers::PreTokenizer`:
tokenizers::pre_tokenizers::bert::BertPreTokenizer
tokenizers::decoders::byte_level::ByteLevel
tokenizers::pre_tokenizers::delimiter::CharDelimiterSplit
tokenizers::pre_tokenizers::digits::Digits
tokenizers::decoders::metaspace::Metaspace
tokenizers::pre_tokenizers::punctuation::Punctuation
tokenizers::pre_tokenizers::sequence::Sequence
tokenizers::pre_tokenizers::split::Split
and 4 others
note: required by a bound in `tokenizers::TokenizerBuilder::<M, N, PT, PP, D>::new`
--> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.19.1/src/tokenizer/mod.rs:314:9
|
314 | PT: PreTokenizer,
| ^^^^^^^^^^^^ required by this bound in `TokenizerBuilder::<M, N, PT, PP, D>::new`
...
319 | pub fn new() -> Self {
| --- required by a bound in this associated function
help: consider specifying the generic arguments
|
139 | let mut tokenizer = Tokenizer::from(TokenizerBuilder::<tokenizers::models::unigram::Unigram, tokenizers::NormalizerWrapper, PT, PP, tokenizers::DecoderWrapper>::new()
| +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
```
Why is this an issue? Isn't the point of the builder so that you don't have to specify the optional types not explicitly set?
> ```
> cannot infer type of the type parameter `PT` declared on the struct `TokenizerBuilder`
> ```
I had a glance over the source on github but didn't see an example or test for using this API and the docs don't really cover it either.
---
Meanwhile with `Tokenizer` instead of `TokenizerBuilder` this works:
```rs
let mut tokenizer = Tokenizer::new(tokenizers::ModelWrapper::Unigram(unigram));
tokenizer.with_decoder(decoder);
tokenizer.with_normalizer(normalizer);
```
| https://github.com/huggingface/tokenizers/issues/1549 | closed | [
"Stale"
] | 2024-06-07T01:18:07Z | 2024-07-20T01:52:03Z | null | polarathene |
huggingface/transformers.js | 796 | No performance gain on using WebGPU | ### Question
I want to use the model: https://huggingface.co/Xenova/clip-vit-large-patch14 with WebGPU for quick inference in the browser. I ran the WebGPU benchmark to observe the performance increase and indeed it showed a ~7x improvement in speed on my device.
But when I run the clip model linked above, there's barely any difference between performance with and without WebGPU. | https://github.com/huggingface/transformers.js/issues/796 | closed | [
"question"
] | 2024-06-06T20:16:07Z | 2024-06-09T01:44:17Z | null | mr-sarthakgupta |
huggingface/optimum | 1,895 | Lift upper version limit of transformers for habana | ### Feature request
optimium currently limits transformers to `>= 4.38.0, < 4.39.0`. @regisss bumped the upper version limit in PR #1851 a month ago. Is there any technical reason to limit the upper version to `< 4.39`? Other dependencies allow for more recent versions. For example neuronx allows `< 4.42.0`, see #1881.
### Motivation
We would like to use newer versions of transformers and tokenizers in InstructLab. The upper version limit for optimum makes this harder on us. We need optimum-habana for Intel Gaudi support.
### Your contribution
I can create a PR. It's a trivial one line change.
Testing is less trivial. I have access to an 8-way Gaudi 2 system, but the system is currently busy. I can do some testing in about two weeks from now after I have updated the system from 1.15.1 to 1.16.0. | https://github.com/huggingface/optimum/issues/1895 | closed | [] | 2024-06-06T07:52:41Z | 2024-06-24T08:53:27Z | 4 | tiran |
huggingface/peft | 1,829 | How to change to PEFT model dynamically? | python==3.7.12
PEFT==0.3.0
@BenjaminBossan
I fine-tune the eleventh transformer of Bert as below:
```bash
target_modules = []
target_modules.append("11.attention.self.query")
target_modules.append("11.attention.self.value")
lora_config = LoraConfig(
r = self.args.lora_rank,
lora_alpha = self.args.lora_alpha,
target_modules = target_modules,
lora_dropout = 0.05,
bias = "none"
)
```
After training for a few epochs, I also want to fine-tune the first transformer. How to achieve this?
| https://github.com/huggingface/peft/issues/1829 | closed | [] | 2024-06-05T13:24:40Z | 2024-06-06T00:37:06Z | null | whr819987540 |
huggingface/transformers.js | 792 | Feature request: YOLO-World/Grounding DINO (Zero shot object detection) | ### Question
Hi!
I'm trying out some of the zero shot capabilities and I've been working with the owlv2 but I was wondering, is support for yolo-world and grounding Dino coming? They seem to be faster than owlv2.
Thanks! | https://github.com/huggingface/transformers.js/issues/792 | open | [
"question"
] | 2024-06-04T21:39:18Z | 2024-06-24T07:04:27Z | null | rogueturnip |
huggingface/transformers.js | 791 | env.allowLocalModels and env.allowRemoteModels | ### Question
When I set env.allowLocalModels = true and look at the env object I see both
env.allowLocalModels and env.allowRemoteModels set to true. Does this mean that it will look for models locally first and then if not found go to the remoteHost? | https://github.com/huggingface/transformers.js/issues/791 | open | [
"question"
] | 2024-06-04T17:07:38Z | 2024-09-15T14:00:48Z | null | mram0509 |
huggingface/diffusers | 8,400 | how can we load model to lora from singlefile ? | pipe.load_lora_weights("lora/aesthetic_anime_v1s.safetensors")
File "Z:\software\python11\Lib\site-packages\diffusers\loaders\lora.py", line 1230, in load_lora_weights
raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
pipe.load_lora_weights("lora/aesthetic_anime_v1s.safetensors")
how can i use this model https://civitai.com/models/295100?modelVersionId=331598
| https://github.com/huggingface/diffusers/issues/8400 | closed | [] | 2024-06-04T13:54:56Z | 2024-06-04T15:53:32Z | null | xalteropsx |
huggingface/datasets | 6,953 | Remove canonical datasets from docs | Remove canonical datasets from docs, now that we no longer have canonical datasets. | https://github.com/huggingface/datasets/issues/6953 | closed | [
"documentation"
] | 2024-06-04T12:09:03Z | 2024-07-01T11:31:25Z | 1 | albertvillanova |
huggingface/datasets | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>()
1 from datasets import load_dataset
----> 2 dataset = load_dataset("m-a-p/COIG-CQIA")
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)
582 if not config_kwargs:
583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')"
--> 584 raise ValueError(
585 "Config name is missing."
586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}"
ValueError: Config name is missing.
Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu']
Example of usage:
`load_dataset('coig-cqia', 'chinese_traditional')`
```
This means a dataset cannot contain all the subsets at the same time. I guess one workaround is to manually specify the subset files like in [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy.
### Motivation
Ideally, if not subset is specified, the API should just try to load all subsets. This makes it much easier to handle datasets w/ subsets.
### Your contribution
Not sure since I'm not familiar w/ the lib src. | https://github.com/huggingface/datasets/issues/6951 | closed | [
"enhancement"
] | 2024-06-04T11:02:33Z | 2024-11-26T08:32:18Z | 5 | windmaple |
huggingface/datasets | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists.
> In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor.
> A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor.
But I get a single tensor by default, which is inconsistent with the description.
Actually the current behavior seems more reasonable to me. Therefore, the document needs to be modified.
### Steps to reproduce the bug
```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([[1, 2],
[3, 4]])}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
array([[1, 2],
[3, 4]])>}
```
### Expected behavior
```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': [tensor([1, 2]), tensor([3, 4])]}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.RaggedTensor [[1, 2], [3, 4]]>}
```
### Environment info
datasets==2.19.1
torch==2.1.0
tensorflow==2.13.1 | https://github.com/huggingface/datasets/issues/6950 | closed | [
"documentation"
] | 2024-06-04T09:18:32Z | 2024-06-25T08:05:49Z | 2 | iansheng |
huggingface/sentence-transformers | 2,708 | What is the training order in the multi-task learning example? | hello. In the case of multi-task learning in the example below, what is the learning order? The example below is taken from https://www.sbert.net/examples/training/quora_duplicate_questions/README.html.
Regarding the dataset below, I know that the learning results are good if you learn mnrl after learning the cl dataset. Does the learning proceed sequentially like this? Or does it go the other way? Simply put, which of the three below is your learning order?
1. cl -> mnrl
2. mnrl -> cl
3. shuffled two datasets
```
Multi-Task-Learning
[ContrastiveLoss]
(https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#sentence_transformers.losses.ContrastiveLoss) works well for pair classification, i.e., given two pairs, are these duplicates or not. It pushes negative pairs far away in vector space, so that the distinguishing between duplicate and non-duplicate pairs works good.
[MultipleNegativesRankingLoss]
(https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#sentence_transformers.losses.MultipleNegativesRankingLoss) on the other sides mainly reduces the distance between positive pairs out of large set of possible candidates. However, the distance between non-duplicate questions is not so large, so that this loss does not work that well for pair classification.
In [training_multi-task-learning.py](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/quora_duplicate_questions/training_multi-task-learning.py) I demonstrate how we can train the network with both losses. The essential code is to define both losses and to pass it to the fit method.
```
```py
from datasets import load_dataset
from sentence_transformers.losses import ContrastiveLoss, MultipleNegativesRankingLoss
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformer
model_name = "stsb-distilbert-base"
model = SentenceTransformer(model_name)
# https://huggingface.co/datasets/sentence-transformers/quora-duplicates
mnrl_dataset = load_dataset(
"sentence-transformers/quora-duplicates", "triplet", split="train"
) # The "pair" subset also works
mnrl_train_dataset = mnrl_dataset.select(range(100000))
mnrl_eval_dataset = mnrl_dataset.select(range(100000, 101000))
mnrl_train_loss = MultipleNegativesRankingLoss(model=model)
# https://huggingface.co/datasets/sentence-transformers/quora-duplicates
cl_dataset = load_dataset("sentence-transformers/quora-duplicates", "pair-class", split="train")
cl_train_dataset = cl_dataset.select(range(100000))
cl_eval_dataset = cl_dataset.select(range(100000, 101000))
cl_train_loss = ContrastiveLoss(model=model, margin=0.5)
# Create the trainer & start training
trainer = SentenceTransformerTrainer(
model=model,
train_dataset={
"mnrl": mnrl_train_dataset,
"cl": cl_train_dataset,
},
eval_dataset={
"mnrl": mnrl_eval_dataset,
"cl": cl_eval_dataset,
},
loss={
"mnrl": mnrl_train_loss,
"cl": cl_train_loss,
},
)
trainer.train()
```
| https://github.com/huggingface/sentence-transformers/issues/2708 | closed | [] | 2024-06-04T07:42:37Z | 2024-06-04T08:29:30Z | null | daegonYu |
huggingface/datasets | 6,949 | load_dataset error | ### Describe the bug
Why does the program get stuck when I use load_dataset method, and it still gets stuck after loading for several hours? In fact, my json file is only 21m, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset
3. data = load_dataset('json', data_files='train.json')
### Expected behavior
It is able to load my json correctly
### Environment info
datasets==2.19.2 | https://github.com/huggingface/datasets/issues/6949 | closed | [] | 2024-06-04T01:24:45Z | 2024-07-01T11:33:46Z | 2 | frederichen01 |
huggingface/transformers.js | 789 | Can I use Xenova/Phi-3-mini-4k-instruct model server side? | ### Question
Hey there! I’m trying to run Xenova/Phi-3-mini-4k-instruct model using transformers.js 2.17.2 on the server in my Node.js project, but I get an error saying that Phi-3 is not supported. Can I make it work somehow? Any ideas appreciated | https://github.com/huggingface/transformers.js/issues/789 | closed | [
"question"
] | 2024-06-03T18:43:20Z | 2024-06-04T04:57:42Z | null | StepanKukharskiy |
huggingface/datasets | 6,947 | FileNotFoundError:error when loading C4 dataset | ### Describe the bug
can't load c4 datasets
When I replace the datasets package to 2.12.2 I get raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')
3. raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
### Expected behavior
The data was successfully imported
### Environment info
python version 3.9
datasets version 2.19.2 | https://github.com/huggingface/datasets/issues/6947 | closed | [] | 2024-06-03T13:06:33Z | 2024-06-25T06:21:28Z | 15 | W-215 |
huggingface/dataset-viewer | 2,878 | Remove or increase the 5GB limit? | The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.
Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination).
Should we provide a way to increase or remove this limit? | https://github.com/huggingface/dataset-viewer/issues/2878 | closed | [
"question",
"feature request"
] | 2024-06-03T08:55:08Z | 2024-07-22T11:32:49Z | null | severo |
huggingface/transformers | 31,195 | How to get back the input time series after using PatchTSTForPretraining? | ### System Info
-
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My model is PatchTSTForPretraining(
(model): PatchTSTModel(
(scaler): PatchTSTScaler(
(scaler): PatchTSTStdScaler()
)
(patchifier): PatchTSTPatchify()
(masking): PatchTSTMasking()
(encoder): PatchTSTEncoder(
(embedder): PatchTSTEmbedding(
(input_embedding): Linear(in_features=5, out_features=768, bias=True)
)
(positional_encoder): PatchTSTPositionalEncoding(
(positional_dropout): Identity()
)
(layers): ModuleList(
(0-11): 12 x PatchTSTEncoderLayer(
(self_attn): PatchTSTAttention(
(k_proj): Linear(in_features=768, out_features=768, bias=True)
(v_proj): Linear(in_features=768, out_features=768, bias=True)
(q_proj): Linear(in_features=768, out_features=768, bias=True)
(out_proj): Linear(in_features=768, out_features=768, bias=True)
)
(dropout_path1): Identity()
(norm_sublayer1): PatchTSTBatchNorm(
(batchnorm): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(ff): Sequential(
(0): Linear(in_features=768, out_features=3072, bias=True)
(1): GELUActivation()
(2): Identity()
(3): Linear(in_features=3072, out_features=768, bias=True)
)
(dropout_path3): Identity()
(norm_sublayer3): PatchTSTBatchNorm(
(batchnorm): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
)
(head): PatchTSTMaskPretrainHead(
(dropout): Dropout(p=0.0, inplace=False)
(linear): Linear(in_features=768, out_features=5, bias=True)
)
)
prediction_output = model(time_series_data)
Output:
time_series_data = tensor([[[430.3000],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[431.7600],
[430.3000],
[430.3000],
[428.9600],
[430.3000],
[430.3000],
[430.3000]]], device='cuda:0')
prediction_output = tensor([[[[-0.2321, 0.1897, 0.4731, 0.8893, 0.6723],
[-0.5465, -0.9017, 0.0778, 0.0078, 1.3323],
[ 0.4945, 0.5145, -0.5386, -0.7045, -1.5766],
[ 0.2064, 0.6290, -0.8145, 1.0450, -0.2886]]]], device='cuda:0')
### Expected behavior
x_hat = self.head(model_output.last_hidden_state) produces output which is not consistent to the range of input time series values. I am trying to pretrain PatchTST for autoencoding. How do I get back the input time series? | https://github.com/huggingface/transformers/issues/31195 | closed | [] | 2024-06-03T06:44:31Z | 2024-10-26T07:44:56Z | null | nikhilajoshy |
huggingface/optimum | 1,885 | onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference | ### System Info
Hi,
i did a test between onnx optimum export + ORTOptimizer inference vs. setfit.export_onnx + onnxruntime.InferenceSession.
it seems that onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference
any idea why is that the reason?
i also changed from AutoOptimizationConfig.O2() =AutoOptimizationConfig.O4() - still onnxruntime.InferenceSession is faster.
set train_model = True - to train the finetuned model before and export it.
gpu: nvidia T4
output:
```
python setfit-onnx-optimum-example.py
Repo card metadata block was not found. Setting CardData to empty.
Model size (MB) - 86.68
Accuracy on test set - 0.888
Average latency (ms) - 6.23 +\- 0.51
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: BertModel *****
Using framework PyTorch: 2.2.1+cu121
Overriding 1 configuration item(s)
- use_cache -> False
2024-06-02 22:27:53.640590789 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-06-02 22:27:53.640623671 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/optimum/onnxruntime/configuration.py:770: FutureWarning: disable_embed_layer_norm will be deprecated soon, use disable_embed_layer_norm_fusion instead, disable_embed_layer_norm_fusion is set to True.
warnings.warn(
Optimizing model...
Configuration saved in all-MiniLM-L6-v2_auto_opt_O2/ort_config.json
Optimized model saved at: all-MiniLM-L6-v2_auto_opt_O2 (external data format: False; saved all tensor to one file: True)
2024-06-02 22:27:55.548291362 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-06-02 22:27:55.548316947 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Model size (MB) - 86.10
Accuracy on test set - 0.888
Average latency (ms) - 1.83 +\- 0.46
Speedup: 3.40x
2024-06-02 22:27:59.483816381 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 2 Memcpy nodes are added to the graph main_graph_ed6a60ecdb95455bac10d5392cf78d36 for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-06-02 22:27:59.485393795 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-06-02 22:27:59.485413289 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
providers: ['CUDAExecutionProvider', 'CPUExecutionProvider']
Model size (MB) - 86.23
Accuracy on test set - 0.888
Average latency (ms) - 1.40 +\- 0.17
Speedup: 4.44x
```
code:
```
# https://github.com/huggingface/setfit/blob/main/notebooks/setfit-onnx-optimum.ipynb
from pathlib import Path
from time import perf_counter
import evaluate
import numpy as np
import torch
from tqdm.auto import tqdm
import os
import matplotlib.pyplot as plt
import pandas as pd
from setfit import SetFitModel
from setfit import SetFitModel, Trainer, TrainingArguments
from datasets import load_dataset
from setfit.exporters.utils import mean_pooling
from optimum.onnxruntime import ORTModelForFeatureExtraction, AutoOptimizationConfig, ORTOptimizer
from transformers import AutoTokenizer
from setfit.exporters.onnx import export_onnx
import onnxruntime
metric = evaluate.load("accuracy")
train_model = False
class PerformanceBenchmark:
def __init__(self, model, dataset, optim_type):
self.model = model
self.dataset = dataset
self.optim_type = optim_type
def compute_accuracy(self):
preds = self.model.predict(self.dataset["text"])
labels = self.dataset["label"]
accuracy = metric.compute(predictions=preds, references=labels)
print(f"Accuracy on test set - {accuracy['accuracy']:.3f}")
return accuracy
def compute_size(self):
state_dict = self.model.model_body.state_dict()
tmp_path = Path("model.pt | https://github.com/huggingface/optimum/issues/1885 | open | [
"bug"
] | 2024-06-02T22:34:37Z | 2024-06-08T03:02:40Z | 1 | geraldstanje |
huggingface/chat-ui | 1,241 | 💻💻How to deploy to vercel | Hi,
I am currently having troubles with deploying to Vercel, I am experiencing an error 404 NOT FOUND. I think i am using the wrong build command or the wrong default directory. Can someone please help?

Thanksyou! | https://github.com/huggingface/chat-ui/issues/1241 | open | [
"support"
] | 2024-06-02T10:05:45Z | 2025-01-10T17:00:37Z | null | haydenkong |
huggingface/transformers.js | 788 | Is it possible to use transformers.js to implement audio source separation tasks? | ### Question
Hello, I have a beginner's question.
I want to implement the task of removing the human voice from the audio in the video and retaining the background sound in the browser. The idea is to load the model for audio source separation related to transformers.js to achieve the separation of the background sound and human voice, and then only return the background sound.
But I couldn't find relevant examples in the documentation, so I was wondering if this can be implemented? If so, what are the learning or research paths?
Looking forward to your reply | https://github.com/huggingface/transformers.js/issues/788 | open | [
"question"
] | 2024-06-02T04:00:55Z | 2024-12-26T06:05:26Z | null | asasas234 |
huggingface/lerobot | 238 | how to use on wslcan not visulize | how to use on wslcan not visulize | https://github.com/huggingface/lerobot/issues/238 | closed | [
"simulation"
] | 2024-06-02T03:58:44Z | 2025-10-08T08:25:31Z | null | jackylee1 |
huggingface/chat-ui | 1,236 | No Setup Deploy: Multiple models supported? | How can I make **multiple models** available on Chat UI using **No Setup Deploy**?
## Further Details
The form (see below) seems to only allow one model.
<details><summary>Form</summary>
<p>
<img width="661" alt="image" src="https://github.com/huggingface/chat-ui/assets/14152377/e5595c34-b5c5-4c09-8b83-d5a0f839016d">
</p>
</details>
## Tried so far
(Without success)
- I checked the [full tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces) linked from the [README.md](https://github.com/huggingface/chat-ui/blob/93b39a0beb72378c76d5d146bfd3a8355c1d110d/README.md), but couldn't find neither how to use multiple models nor a note about a limitation.
- I tried deploying one model and adding an `.env.local` to the deployment on my space, but the web interface threw an error when trying to commit `.env.local` due to potential secrets included in the file. | https://github.com/huggingface/chat-ui/issues/1236 | open | [
"enhancement",
"docker"
] | 2024-06-01T11:41:22Z | 2024-06-03T07:55:12Z | 1 | rodrigobdz |
huggingface/optimum | 1,884 | Add support for porting CLIPVisionModelWithProjection | ### Feature request
Currently there is not support for porting CLIPVisionModelWithProjection class models from the transformers library to onnx through optimum. I'd like to add support for the same for which we'd need to change the optimum/exporters/onnx/model_configs.py file. I'd like ot request you to help me guide how can I try to understand the code and make this feature.
### Motivation
I need the same for a personal project and would be happy to contribute to the library as well.
### Your contribution
I would be happy to submit a PR | https://github.com/huggingface/optimum/issues/1884 | open | [
"feature-request",
"onnx"
] | 2024-05-31T22:25:45Z | 2024-10-09T07:56:28Z | 0 | mr-sarthakgupta |
huggingface/datasets | 6,940 | Enable Sharding to Equal Sized Shards | ### Feature request
Add an option when sharding a dataset to have all shards the same size. Will be good to provide both an option of duplication, and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).". However, when using FSDP we want the shards to have the same size. This requires the user to manually handle this situation, but it will be nice if we had an option to shard the dataset into equally sized shards.
### Your contribution
For now just a PR. I can also add code that does what is needed, but probably not efficient.
Shard to equal size by duplication:
```
remainder = len(dataset) % num_shards
num_missing_examples = num_shards - remainder
duplicated = dataset.select(list(range(num_missing_examples)))
dataset = concatenate_datasets([dataset, duplicated])
shard = dataset.shard(num_shards, shard_idx)
```
Or by truncation:
```
shard = dataset.shard(num_shards, shard_idx)
num_examples_per_shard = len(dataset) // num_shards
shard = shard.select(list(range(num_examples_per_shard)))
``` | https://github.com/huggingface/datasets/issues/6940 | open | [
"enhancement"
] | 2024-05-31T21:55:50Z | 2024-06-01T07:34:12Z | 0 | yuvalkirstain |
huggingface/chat-ui | 1,225 | SyntaxError: JSON5: invalid character 'u' at 1:1 | Where can I find out more about the following error? Is there an issue with the existing template?
## Reproduction Steps
1. Deploy [Chat UI using default template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) with `MONGO_URL` set to `mongodb+srv://<USER_SECRET>:<PASSWORD_SECRET>@<CLUSTER_SECRET>`
2. Add secret called `HF_TOKEN` with access token value.
## Error Logs
Additionally to https://github.com/huggingface/chat-ui/issues/1174, the following error is shown:
```
2024-05-30T11:56:43: PM2 log: [--no-daemon] Exit on target PM2 exit pid=403
11:56:43 2|index | You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:
11:56:43 2|index | SyntaxError: JSON5: invalid character 'u' at 1:1
11:56:43 2|index | at syntaxError (/app/node_modules/json5/lib/parse.js:1110:17)
11:56:43 2|index | at invalidChar (/app/node_modules/json5/lib/parse.js:1055:12)
11:56:43 2|index | at Object.value (/app/node_modules/json5/lib/parse.js:309:15)
11:56:43 2|index | at lex (/app/node_modules/json5/lib/parse.js:100:42)
11:56:43 2|index | at Object.parse (/app/node_modules/json5/lib/parse.js:25:17)
11:56:43 2|index | at file:///app/build/server/chunks/auth-9412170c.js:28:16
11:56:43 2|index | at ModuleJob.run (node:internal/modules/esm/module_job:222:25)
11:56:43 2|index | at async ModuleLoader.import (node:internal/modules/esm/loader:316:24)
11:56:43 2|index | at async Server.init (file:///app/build/server/index.js:4189:24)
11:56:43 2|index | at async file:///app/build/handler.js:1140:1
```
<details><summary>Full error log</summary>
<p>
```
===== Application Startup at 2024-05-30 09:52:12 =====
2024-05-30T09:54:31.991512Z INFO text_generation_launcher: Args {
model_id: "mistralai/Mistral-7B-Instruct-v0.1",
revision: None,
validation_workers: 2,
sharded: None,
num_shard: Some(
1,
),
quantize: None,
speculate: None,
dtype: None,
trust_remote_code: true,
max_concurrent_requests: 128,
max_best_of: 2,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: None,
max_input_length: None,
max_total_tokens: None,
waiting_served_ratio: 0.3,
max_batch_prefill_tokens: None,
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: None,
hostname: "r-center-for-humans-and-machines-llm-stresstest-ubo8g-c2578-oc7",
port: 8080,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "localhost",
master_port: 29500,
huggingface_hub_cache: Some(
"/data",
),
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 1.0,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
cors_allow_origin: [],
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: false,
max_client_batch_size: 4,
}
2024-05-30T09:54:31.991620Z INFO hf_hub: Token file not found "/home/user/.cache/huggingface/token"
2024-05-30T09:54:32.027992Z INFO text_generation_launcher: Default `max_input_tokens` to 4095
2024-05-30T09:54:32.028013Z INFO text_generation_launcher: Default `max_total_tokens` to 4096
2024-05-30T09:54:32.028016Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 4145
2024-05-30T09:54:32.028018Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
2024-05-30T09:54:32.028022Z WARN text_generation_launcher: `trust_remote_code` is set. Trusting that model `mistralai/Mistral-7B-Instruct-v0.1` do not contain malicious code.
2024-05-30T09:54:32.028109Z INFO download: text_generation_launcher: Starting download process.
{"t":{"$date":"2024-05-30T11:54:32.245+02:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":21},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":21},"outgoing":{"minWireVersion":6,"maxWireVersion":21},"isInternalClient":true}}}
{"t":{"$date":"2024-05-30T11:54:32.246+02:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2024-05-30T11:54:32.247+02:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2024-05-30T11:54:32.248+02:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService"," | https://github.com/huggingface/chat-ui/issues/1225 | open | [
"docker"
] | 2024-05-30T11:07:36Z | 2025-01-16T22:54:08Z | 8 | rodrigobdz |
huggingface/chat-ui | 1,221 | 500 Internal Server Error with chat-ui | I executed an inference server with the address http://192.168.0.185:7777/generate_stream using text-generation-inference (TGI) v.2.0.4. When executing commands with curl, the inference results are responding normally. For ease of use, I am going to use chat-ui. Below is the .env.local file's content of chat-ui.
```
$ vi .env.local
1 MONGODB_URL=mongodb://127.0.0.1:27017
2 HF_TOKEN=hf_***********************************
3 ALLOW_INSECURE_COOKIES=true
4 MODELS=`[
5 {
6 "name":"samsung-codellama3-70b-custom",
7 "endpoints":[{"type":"tgi","url":"http://192.168.0.185:7777/generate_stream"}],
8 "description":"A_Coding_Assistant_Model",
9 "userMessageToken":"<|prompter|>",
10 "assistantMessageToken":"<|assistant|>",
11 "messageEndToken":"</s>",
12 "preprompt":"It_is_an_LLM-based_AI_assistant."',
13 "parameters":{
14 "temperature":0.2,
15 "top_p":0.9,
16 "repetition_penalty":1.2,
17 "top_k":10,
18 "truncate":1000,
19 "max_new_tokens":500
20 }
21 }
22 ]`
```
Then, I run `$ docker run -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui ghcr.io/huggingface/chat-ui-db` command. Unfortunately, when I visited http://localhost:3000 with the MS Edge web browser, I got the error “500: An error occurred” as shown below.
* Screenshot:

* log message:
`{"level":50,"time":1717033937576,"pid":30,"hostname":"c5e9372bf1c1","locals":{"sessionId":"f19bea94fb83ffe9b2aa5d9c3247d9dc1e819772e3b0b4557294cc9a7e884bf0"},"url":"http://localhost:3000/","params":{},"request":{},"error":{"lineNumber":1,"columnNumber":1},"errorId":"7b3df79b-b4d0-4573-b92d-4ba0c182828b"}`
I am wondering what could be causing this error. Welcome to any hints to fix this issue.
#### References
* https://github.com/huggingface/chat-ui/issues?q=is%3Aissue+%22internal+server+error%22
* https://github.com/huggingface/chat-ui/blob/main/src/lib/server/models.ts#L198
| https://github.com/huggingface/chat-ui/issues/1221 | closed | [
"support"
] | 2024-05-30T00:35:58Z | 2024-05-31T00:19:49Z | 4 | leemgs |
huggingface/transformers.js | 785 | Using AutoModel, AutoTokenizer with distilbert models | ### Question
Does transformers.js have a function to get the label after getting the logits? How to get the labels from the inference output?
let tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModel.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs); | https://github.com/huggingface/transformers.js/issues/785 | open | [
"question"
] | 2024-05-29T20:35:17Z | 2024-05-30T11:09:17Z | null | mram0509 |
huggingface/chat-ui | 1,220 | A few questions about the Cloudflare integration | Howdy 👋 ,
Working on a corresponding page for this in the [Cloudflare docs](https://developers.cloudflare.com/workers-ai/) and had a few [questions that I need answered](https://github.com/cloudflare/cloudflare-docs/pull/14488#issuecomment-2101481990) in this PR.
## Questions
1. If I'm reading [this line](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L18C21-L18C29) correctly, it sounds like [their example is actually incorrect](https://github.com/huggingface/chat-ui/blob/main/README.md?plain=1#L598) and might need to be updated?
2. If ^^^ is correct, does that mean that we should also be specifying the [`model` parameter](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L19) w/in the endpoint configuration?
3. Correct assumption that this only works with models prefixed with `@hf`, think so based on [their code](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L19).
Mind helping me out so I can get this live in our docs? | https://github.com/huggingface/chat-ui/issues/1220 | closed | [
"documentation"
] | 2024-05-29T19:11:14Z | 2024-06-20T12:53:52Z | 3 | kodster28 |
huggingface/transformers.js | 784 | Shouldn't this work? #v3 | ### Question
### Issue with Transformer.js v3 and WebGPU
#### Description
Yesterday I installed `transformer.js` with the "v3" branch to test the new features with WebGPU, but I get an error.
#### Error Message
```
@xenova_transformers.js?v=3b2ad0ed:24861 Uncaught (in promise)
Error: This pipeline is not yet supported in Transformers.js v3.
```
#### My code
```javascript
const transcriber = await pipeline("automatic-speech-recognition", "Xenova/whisper-small.en", {
device: 'webgpu',
dtype: 'fp32'
});
```
#### Additional Information
With the following code, it works perfectly fine:
```javascript
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
device: 'webgpu',
dtype: 'fp32', // or 'fp16'
});
``` | https://github.com/huggingface/transformers.js/issues/784 | open | [
"question"
] | 2024-05-29T13:36:52Z | 2024-05-29T14:59:49Z | null | kalix127 |
huggingface/datasets | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}.
However, running dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') works fine. What is the issue here?
### Steps to reproduce the bug
run code:
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
from datasets import load_dataset
en = load_dataset("allenai/c4", "en", streaming=True)
### Expected behavior
Successfully loaded the dataset.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
| https://github.com/huggingface/datasets/issues/6930 | open | [] | 2024-05-29T12:40:05Z | 2024-07-23T06:25:24Z | 2 | Polarisamoon |
huggingface/datasets | 6,929 | Avoid downloading the whole dataset when only README.me has been touched on hub. | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash of latest commit on huggingface hub, but is there a clever way to only download again the dataset **if and only if** data is modified ?
### Motivation
The current behaviour is a waste of network bandwidth / disk space / research time.
### Your contribution
I don't have time to submit a PR, but I hope a simple solution will emerge from this issue ! | https://github.com/huggingface/datasets/issues/6929 | open | [
"enhancement"
] | 2024-05-29T10:36:06Z | 2024-05-29T20:51:56Z | 2 | zinc75 |
huggingface/candle | 2,226 | How to load LoRA adapter along with the GGUF model? | Hello all,
I have recently managed to convert the flan-t5 base model to GGUF #2215 . But I also have multiple LoRA adapters trained for different tasks.
@EricLBuehler @LaurentMazare So I wish to know if there is a way to also load single/multiple LoRA adapters along with the GGUF model. I am currently running an inference using the following command:
```bash
cargo run --example quantized-t5 --release -- --weight-file "flant5large_f16.gguf" \
--config-file "flan-t5-large/config.json" \
--prompt "Make this text coherent: Their flight is weak. They run quickly through the tree canopy."
```
But I have the adapter as (adapter_model.bin and adapter_config.json), which I would like load along with this model **Without Weight Merging**. | https://github.com/huggingface/candle/issues/2226 | open | [] | 2024-05-29T06:03:10Z | 2024-06-05T03:34:14Z | null | niranjanakella |
huggingface/transformers.js | 781 | Progress callback for Moondream? | ### Question
While implementing Moondream (from the excellent example) I stumbled upon a few questions.
- How can I implement a callback while Moondream is generating tokens? A normal progressCallback didn’t work?
```
self.model.generate({
...text_inputs,
...vision_inputs,
do_sample: false,
max_new_tokens: 500,
progress_callback: (progress_data) => {
console.log("progress_data: ", progress_data);
if (progress_data.status !== 'progress') return;
self.postMessage(progress_data);
},
})
```
I’ve also tried the new CallbackStreamer option, but that had no effect either.
From the [demo](https://github.com/xenova/transformers.js/issues/743) I know it should be possible. But I [couldn't find the source code](https://github.com/xenova/transformers.js/tree/v3) for it (yet). And trying to learn anything from the demo as-is was, well, difficult with all that [minifying](https://xenova-experimental-moondream-webgpu.static.hf.space/assets/worker-DHaYXnZx.js) and framework stuff.
- Is this warning in the browser console anything to worry about?
```
The number of image tokens was not set in the model configuration. Setting it to the number of features detected by the vision encoder (729).models.js:3420
```
- What would be the effect of changing these values? E.g. what would be the expected outcome of changing decoder_model_merged from from q4 to q8?
```
embed_tokens: 'fp16',
vision_encoder: 'q8', // or 'fp16'
decoder_model_merged: 'q4', // or 'q8'
```
- What's the difference between Moondream and [NanoLlava](https://huggingface.co/spaces/Xenova/experimental-nanollava-webgpu)? When should I use one over the other? | https://github.com/huggingface/transformers.js/issues/781 | closed | [
"question"
] | 2024-05-28T14:07:07Z | 2024-06-03T18:49:10Z | null | flatsiedatsie |
huggingface/competitions | 29 | How to notify awardees or contact participants? | The competition just shows the participants' id.
So, how to contact them via email to inform them of the award requirements and request additional personal information? | https://github.com/huggingface/competitions/issues/29 | closed | [] | 2024-05-28T08:11:38Z | 2024-06-09T07:03:25Z | null | shangfenghuang |
huggingface/datatrove | 196 | How to deduplicate multiple datasets? | fineweb offer a deduplication demo for one dump. If want to deduplicate more dumps, should I merge dumps before deduplication ?
| https://github.com/huggingface/datatrove/issues/196 | closed | [] | 2024-05-28T03:00:31Z | 2024-06-07T07:25:45Z | null | canghaiyunfan |
huggingface/chat-ui | 1,183 | Prompt template for WizardLM-2-8x22B? | What is the prompt template for `WizardLM-2-8x22B` in the `.env.local`?
When setting it to the default one: `<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}`
the generated output is very odd and incoherent.
When setting the prompt template to the one displayed in the [model card:](https://huggingface.co/bartowski/WizardLM-2-8x22B-GGUF) `{system_prompt} USER: {prompt} ASSISTANT: </s>`
the output gets even worse.
Can anyone help?
| https://github.com/huggingface/chat-ui/issues/1183 | open | [
"support",
"models"
] | 2024-05-27T14:28:47Z | 2024-07-29T15:27:25Z | 3 | Arche151 |
huggingface/chat-ui | 1,178 | Improve Domain Search Results for Assistants | The domain search for assistants is a great idea, but the current implementation is not really useful if the domains are less likely to be top results like Wikipedia.
This seems happen because the web is searched first, and the domain filter is applied afterward. This method can easily result in zero parseable results (especially because PDF parsing is currently not available).
Proposed solution: Change the implementation so that the search process continues until at least one parseable result is found. To avoid excessive searching, an upper limit on the number of pages to be searched makes sense (e.g. at 100), but it should definitely be more than current limit of 8 pages. | https://github.com/huggingface/chat-ui/issues/1178 | open | [
"question",
"websearch"
] | 2024-05-27T10:33:22Z | 2024-05-31T11:02:11Z | null | lueschow |
huggingface/datatrove | 195 | What is the difference between tasks and workers? | What is the difference between tasks and workers, what is the definition of tasks and how to determine the number of tasks?
| https://github.com/huggingface/datatrove/issues/195 | closed | [] | 2024-05-27T06:32:25Z | 2024-05-27T07:08:11Z | null | canghaiyunfan |
huggingface/transformers.js | 778 | Pipeline execution time with 'image-classification' pipeline | ### Question
While calling the 'image-classification' pipeline we pass the image url. So this does a fetch of the image. So will the time taken to process the image include the download time of the image? So if the network is slow this may impact the pipeline performance. Is there a way to use an image thats already been downloaded by the webpage for an image element? | https://github.com/huggingface/transformers.js/issues/778 | open | [
"question"
] | 2024-05-26T20:15:21Z | 2024-05-27T04:14:52Z | null | mram0509 |
huggingface/transformers | 31,039 | What if past_key_values is in model_kwargs but is None | https://github.com/huggingface/transformers/blob/4c6c45ba138202f42582b5cea98126af87195a95/src/transformers/generation/utils.py#L1317
This line fails for me when past_key_values is in model_kwargs but is None. Line 1321 raises an error
Could you advice?
Thank you | https://github.com/huggingface/transformers/issues/31039 | closed | [] | 2024-05-26T07:58:18Z | 2024-06-10T06:32:23Z | null | estelleafl |
huggingface/chat-ui | 1,174 | Unable to deploy space with chatUI, getting error ** Failed to connect to 127.0.0.1 port 8080 after 0 ms** | Hi guys, so i am trying to deploy space with chatui template and **abacusai/Smaug-Llama-3-70B-Instruct** model but i am getting following error again and again in container logs.
`
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 40 retries
Warning: left.
2024-05-26T07:02:16.945294Z INFO text_generation_launcher: Downloaded /data/models--abacusai--Smaug-Llama-3-70B-Instruct/snapshots/fbaa713bdcdc2a2f85bbbe5808ec7046700a36e5/model-00007-of-00030.safetensors in 0:00:29.
2024-05-26T07:02:16.945393Z INFO text_generation_launcher: Download: [7/30] -- ETA: 0:10:47.285711
2024-05-26T07:02:16.945714Z INFO text_generation_launcher: Download file: model-00008-of-00030.safetensors
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 39 retries
Warning: left.
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 38 retries
Warning: left.
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 37 retries
Warning: left.
2024-05-26T07:02:47.664282Z INFO text_generation_launcher: Downloaded /data/models--abacusai--Smaug-Llama-3-70B-Instruct/snapshots/fbaa713bdcdc2a2f85bbbe5808ec7046700a36e5/model-00008-of-00030.safetensors in 0:00:30.
2024-05-26T07:02:47.664376Z INFO text_generation_launcher: Download: [8/30] -- ETA: 0:10:27
2024-05-26T07:02:47.664710Z INFO text_generation_launcher: Download file: model-00009-of-00030.safetensors
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 36 retries
Warning: left.
{"t":{"$date":"2024-05-26T09:02:57.879+02:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1716706977,"ts_usec":879791,"thread":"8:0x7f4c6fd8f640","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 37, snapshot max: 37 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}}
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in 10 seconds. 35 retries
Warning: left.
`
please help me out thanks
and yes i've added ` HF_TOEKN ` secret too | https://github.com/huggingface/chat-ui/issues/1174 | open | [
"support",
"docker"
] | 2024-05-26T07:05:12Z | 2025-06-27T10:30:24Z | 5 | starlord263 |
huggingface/optimum | 1,876 | Unable to generate question-answering model for Llama and there is also no list of what are the supported models for question-answering | ### Feature request
Hi, I received this error:
ValueError: Asked to export a llama model for the task question-answering, but the Optimum ONNX exporter only supports the tasks feature-extraction, feature-extraction-with-past, text-generation, text-generation-with-past, text-classification for llama. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task question-answering to be supported in the ONNX export for llama.
I was trying to generate an ONNX model for QuanAI/llama-2-7b-question-answering.
I also tried to search for the supported question-answering models on https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model which had a broken link pointing to https://huggingface.co/exporters/task_manager (returns a 404). I am happy to consider other question-answering models instead of Llama if there is a list of what is available.
### Motivation
Unable to export Llama question-answering model
### Your contribution
Not sure how to contribute, I am a new user | https://github.com/huggingface/optimum/issues/1876 | open | [
"bug",
"onnx"
] | 2024-05-26T06:10:47Z | 2024-10-09T07:57:24Z | null | customautosys |
huggingface/transformers.js | 776 | How to point to a specific model path in order to use compressed models? (brotli) | ### Question
Hi,
I just can't find the configuration to point to a specific model file path to use .onnx.br instead of .onnx for example.
I can run the model (distilbert-base-cased-distilled-squad) offline without any issue and it works. But I want to deploy it compressed using brotli. All I can see in the config files is references to the folder of the model but not the actual file paths.
E.g "model_quantized.onnx"
Any help is appreciated. | https://github.com/huggingface/transformers.js/issues/776 | open | [
"question"
] | 2024-05-24T18:31:12Z | 2024-05-25T10:24:25Z | null | KamilCSPS |
huggingface/chat-ui | 1,169 | Help debugging "Sorry, something went wrong. Please try again." | I am a developer working on extending this project. Sometimes I get this error "Sorry, something went wrong. Please try again." I can't figure out how to debug it when it happens. What I want is for it to display the full error somehow, like with a console.log. Is there some way to do that? Or is the error saved in the mongodb? This will help me a lot with debugging. | https://github.com/huggingface/chat-ui/issues/1169 | closed | [] | 2024-05-24T18:30:08Z | 2024-06-17T12:47:03Z | 1 | loganlebanoff |
huggingface/datasets | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | ### Describe the bug
I currently have a dataset which has not been splited. When pushing the dataset to my hugging face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have a unsplit dataset
```python
Dataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 })
```
2. Push it to huggingface
```python
dataset.push_to_hub(dataset_name)
```
3. On the hugging face dataset repo, the dataset then appears to be splited:

4. Indeed, when loading the dataset from this repo, the dataset is split in two testing and training set.
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True)
dataset
```
output:
```
IterableDatasetDict({
train: IterableDataset({
features: ['input', 'output', 'Attack', '__index_level_0__'],
n_shards: 2
})
test: IterableDataset({
features: ['input', 'output', 'Attack', '__index_level_0__'],
n_shards: 1
})
```
### Expected behavior
The dataset shall not be splited, as not requested.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | https://github.com/huggingface/datasets/issues/6916 | closed | [] | 2024-05-22T23:52:15Z | 2024-05-23T00:07:53Z | 0 | jetlime |
huggingface/peft | 1,750 | How to finetune embeddings and LM head as a single layer when they are tied? | I am looking to LoRA-finetune models like Gemma, which have tied embeddings.
But, I would also like to have the shared embeddings as trainable (the common embedding table corresponding to both input and output embeddings of the network).
How do I achieve this?
---
_Note:_ Passing both `["embed_tokens","lm_head"]` to `modules_to_save` will result in untying them, because PEFT will create separate tensor copies. Passing only `["embed_tokens"]` will result in only the input embeddings trainable (by making a separate PEFT copy), while the output embeddings being as it is (the original tensor). | https://github.com/huggingface/peft/issues/1750 | closed | [] | 2024-05-21T18:32:07Z | 2025-08-12T11:54:09Z | null | GokulNC |
huggingface/blog | 2,078 | Idefics2's perceiver how to make attentionamsk to None? | I set atttentionmask to None, but the model doesn't learned well, my inputs didn't padded so I dont want attention mask. How to resolve this?
I also tried add a all ones attnetionmask, but the result also very worse. | https://github.com/huggingface/blog/issues/2078 | open | [] | 2024-05-21T07:38:57Z | 2024-05-21T07:38:57Z | null | lucasjinreal |
huggingface/peft | 1,749 | how to fine tune LoRA HQQ? | ### Feature request
how to fine tune LoRA to HQQ?
### Motivation
how to fine tune LoRA to HQQ?
### Your contribution
how to fine tune LoRA to HQQ? | https://github.com/huggingface/peft/issues/1749 | closed | [] | 2024-05-21T02:56:18Z | 2024-06-29T15:03:18Z | null | NickyDark1 |
huggingface/trl | 1,650 | how to save v_head | currently, I use `ppo_trainer.save_pretrained` to save a model that is still in training, because the machine I used is rather unstable, and I would often need to resume retraining should it be interrupted. When I resume the training I got the following warning:
```
WARNING:root:A <class 'peft.peft_model.PeftModelForCausalLM'> model is loaded from 'RLGAF_gemma-7b-lima_sft_preprocessing_20epochs', and no v_head weight is found. This IS expected if you are not resuming PPO training.
```
I guess this is relevant to my case, since I need to resume PPO training. What is the proper way then to save the checkpoint of PPO training with the goal of resuming it later? | https://github.com/huggingface/trl/issues/1650 | closed | [] | 2024-05-20T17:06:00Z | 2025-04-11T10:14:36Z | null | zyzhang1130 |
huggingface/chat-ui | 1,153 | Can we use Hugging Face Chat with a Custom Server | Requirement:
I have a custom API which takes in the inputs queries and passes it through a RAG pipeline and finally to llm and returns the result.
Question is, can I integrate it with Chat-UI (utilizing just chat-ui frontend and my custom backend). If yes, is there any documentation around it. As per what I understood till now, it looks like it is possible, but I have to make a lot of changes in the UI code itself to accommodate this. What I can see is that the UI is tightly coupled with the text generation from models and doesn't fully support calling an API directly without making code changes.
Are there any docs for this?
Also, can we use any other db other than mongodb? | https://github.com/huggingface/chat-ui/issues/1153 | closed | [] | 2024-05-20T16:44:01Z | 2024-09-03T07:52:18Z | 9 | snps-ravinu |
huggingface/nanotron | 176 | Where is the "nanotron format" defined? | I see that any(?) hf model can be converted to nanotron format with this [script](https://github.com/huggingface/nanotron/blob/main/examples/llama/convert_hf_to_nanotron.py).
Is there documentation describing this format?
Can any model that may be loaded with AutoModelForCausalLM be converted to nanotron format for training?
| https://github.com/huggingface/nanotron/issues/176 | closed | [] | 2024-05-20T13:54:52Z | 2024-05-21T17:22:50Z | null | RonanKMcGovern |
huggingface/chat-ui | 1,151 | Can I change localhost to remote IP? | I am running Chat-UI in local, but I want to change localhost to IP, I am unable to find this configguration in the code. Can anyone help? | https://github.com/huggingface/chat-ui/issues/1151 | closed | [] | 2024-05-20T05:34:23Z | 2024-05-20T07:01:30Z | 1 | snps-ravinu |
huggingface/candle | 2,197 | How to slice a tensor? | tch has the function `slice` that return a tensor slice. Is there a corresponding function for candle? | https://github.com/huggingface/candle/issues/2197 | closed | [] | 2024-05-20T00:55:08Z | 2024-05-20T01:46:58Z | null | Gadersd |
huggingface/tokenizers | 1,534 | How to allow the merging of consecutive newline tokens \n when training a byte-level bpe tokenizer? | Hello, I'm currently working on training a byte-level BPE tokenizer using the Huggingface tokenizers library. I've created a simple training script, a sample corpus, and provided the output produced by this script. My aim is to understand why consecutive newline tokens `\n` are not being merged into a single token `\n\n` during the tokenization process. Below are the details:
```python
from tokenizers import (
Tokenizer,
pre_tokenizers,
models,
decoders,
trainers,
processors,
)
files = ["demo_corpus.txt"]
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
pre_tokenizers.Digits(individual_digits=True),
pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=True)
])
tokenizer.decoder = decoders.ByteLevel()
tokenizer.post_processor = processors.ByteLevel()
trainer = trainers.BpeTrainer(
initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
vocab_size=2000,
special_tokens=[
"<pad>", "<|beginoftext|>", "<|endoftext|>"
]
)
tokenizer.train(files, trainer)
test_text = "#include <set>\n\n\n\n\n"
print("pre-tokenize spans:", tokenizer.pre_tokenizer.pre_tokenize_str(test_text))
ids = tokenizer.encode(test_text).ids
print(f"tokens: {[tokenizer.decode([tid]) for tid in ids]}")
```
demo_corpus.txt:
```
#include <cstdio>
#include <vector>
#include <set>
using namespace std;
int main(){
int N, A[100000], p = 0;
multiset<int> S;
scanf("%d", &N);
int p0 = 0, q0 = 1, q = N-1;
vector<int> result;
for(int i: result)
printf("%d\n", i);
}
```
output of training script:
```
pre-tokenize spans: [('#', (0, 1)), ('include', (1, 8)), ('Ġ<', (8, 10)), ('set', (10, 13)), ('>', (13, 14)), ('ĊĊĊĊĊ', (14, 19))]
tokens: ['#', 'include', ' <', 'set', '>', '\n', '\n', '\n', '\n', '\n']
```
the following is tokens produced by llama3 tokenizer:
```python
tokenizer = LlamaTokenizerFast.from_pretrained("my llama3 vocab path")
test_text = "#include <set>\n\n\n\n\n"
print([tokenizer.decode([tid]) for tid in tokenizer(test_text)["input_ids"]])
# output
# ['<|begin_of_text|>', '#include', ' <', 'set', '>\n\n\n\n\n']
```
| https://github.com/huggingface/tokenizers/issues/1534 | open | [
"bug"
] | 2024-05-18T03:11:35Z | 2025-07-07T09:34:16Z | null | liuslnlp |
huggingface/transformers | 30,886 | How to get the data seen by the model during training? | Hi! I haven't been able to find an answer to my question so opening an issue here. I'm fine-tuning the GPT-2 XL model using the trainer for 10 epochs and I'd like to save the data seen by the model during each epoch. More specifically, I want to save the data seen by the model every 242 steps. For instance, data seen from step 1 to step 242, step 243 to step 484, and so on until the end of the 10th epoch. I'm a bit confused about how to do this since the data is shuffled after each epoch. Is it possible to use `TrainerCallback` here?
These are my training args
` training_args = TrainingArguments(
f"models/XL",
evaluation_strategy = "steps",
learning_rate=2e-5,
weight_decay=0.01,
push_to_hub=False,
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
save_strategy="epoch",
save_steps = 242,
fp16=True,
report_to="none",
logging_strategy="steps",
logging_steps=100,
)`
I'd appreciate any directions. Thanks :) | https://github.com/huggingface/transformers/issues/30886 | closed | [] | 2024-05-17T21:32:50Z | 2024-05-20T17:26:29Z | null | jaydeepborkar |
huggingface/optimum | 1,859 | Improve inference time TrOCR | I have a fine tuning TrOCR model, and i'm using
`from optimum.onnxruntime import ORTModelForVision2Seq`
how i can then make the inferation faster, when some one make a request in a endpoint api ? , i already using async for multi request | https://github.com/huggingface/optimum/issues/1859 | closed | [
"question",
"inference",
"Stale"
] | 2024-05-16T13:31:53Z | 2024-12-18T02:06:21Z | null | CrasCris |
huggingface/chat-ui | 1,148 | Chat-ui Audit Logs | Hello,
Is there a way to log the username, sessionID, conversation ID, what question was sent in some type of log in chat-ui ? Or just the username and the question?
How can we accomplish this?
Thanks | https://github.com/huggingface/chat-ui/issues/1148 | open | [] | 2024-05-16T11:13:30Z | 2024-05-21T18:48:17Z | 5 | Neb2653 |
huggingface/diffusers | 7,957 | How to implement `IPAdapterAttnProcessor2_0` with xformers | I want to fine-tune IP-adapter model with xformers, but I did not find the implementation of the xformers version corresponding to IPAdapterAttnProcessor2_0. I want to implement attention processor in xformers, are the following two lines of code the only difference between the two versions?
In `XFormersAttnProcessor`:
```python
hidden_states = xformers.ops.memory_efficient_attention(
query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale
)
```
In `AttnProcessor2_0`:
```python
hidden_states = F.scaled_dot_product_attention(
query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
)
``` | https://github.com/huggingface/diffusers/issues/7957 | closed | [] | 2024-05-16T08:54:07Z | 2024-05-23T13:03:42Z | null | JWargrave |
huggingface/OBELICS | 12 | How to use LDA for topic modeling | Thanks for your work again!
In the paper the topic modeling of OBELICS is implemented using LDA, and I am wondering what is the specific LDA model was used, what setting was used to train the model, and most importantly, how the topic was derived from the key words and weights(like using LLMs)? Thank you for answering! | https://github.com/huggingface/OBELICS/issues/12 | open | [] | 2024-05-16T03:56:29Z | 2024-06-11T16:27:12Z | null | jrryzh |
huggingface/transformers.js | 765 | Can you use all transformers models with transformers.js? | ### Question
Hi,
can you use [all transformers models ](https://huggingface.co/models?library=transformers&sort=trending)(which seem to be listed under the python library) also in transformers.js? If yes, how so? Just download and provide the local path? I'm working in nodejs right now.
For example I'd like to use something like [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with Transformers.js.
If that doesn't work, what would be the strongest general purpose LLM available for transformers.js right now (text generation, something like chatgpt, gemini, ...)?
Greetings & thanks a lot! | https://github.com/huggingface/transformers.js/issues/765 | open | [
"question"
] | 2024-05-15T19:35:28Z | 2024-05-15T21:21:57Z | null | Sir-hennihau |
huggingface/datasets | 6,899 | List of dictionary features get standardized | ### Describe the bug
Hi, i’m trying to create a HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature.
How can I keep the same set of keys as in the original list for each dictionary under a feature?
### Steps to reproduce the bug
```
from datasets import Dataset
# Define a function to generate a sample with "tools" feature
def generate_sample():
# Generate random sample data
sample_data = {
"text": "Sample text",
"feature_1": []
}
# Add feature_1 with random keys for this sample
feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # Example feature_1 with random keys
sample_data["feature_1"].extend(feature_1)
return sample_data
# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]
# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```
### Expected behavior
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | https://github.com/huggingface/datasets/issues/6899 | open | [] | 2024-05-15T14:11:35Z | 2025-04-01T20:48:03Z | 2 | sohamparikh |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.